949 results for Adaptive biasing force method
Abstract:
This thesis investigates the potential use of zerocrossing information for speech sample estimation. It provides a new method to estimate speech samples using composite zerocrossings. A simple linear interpolation technique is developed for this purpose. By using this method the A/D converter can be avoided in a speech coder. The newly proposed zerocrossing sampling theory is supported with results of computer simulations using real speech data. The thesis also presents two methods for voiced/unvoiced classification. One of these methods is based on a distance measure which is a function of the short-time zerocrossing rate and short-time energy of the signal. The other is based on the attractor dimension and entropy of the signal. Of these two methods, the first is simple and requires far fewer computations than the other. This method is used in a later chapter to design an enhanced Adaptive Transform Coder. The later part of the thesis addresses a few problems in Adaptive Transform Coding and presents an improved ATC. The transform coefficient with maximum amplitude is considered as 'side information'. This enables more accurate bit assignment and step-size computation. A new bit reassignment scheme is also introduced in this work. Finally, an ATC which applies switching between the Discrete Cosine Transform and the Discrete Walsh-Hadamard Transform for voiced and unvoiced speech segments respectively is presented. Simulation results are provided to show the improved performance of the coder.
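The distance-based voiced/unvoiced classifier lends itself to a compact illustration. The Python sketch below computes the short-time zerocrossing rate and energy per frame and labels each frame by its distance to reference feature pairs; the frame length, hop size and reference points are illustrative assumptions, not values taken from the thesis.

```python
import numpy as np

def short_time_features(x, frame_len=256, hop=128):
    """Short-time zerocrossing rate and energy for each frame."""
    frames = []
    for start in range(0, len(x) - frame_len + 1, hop):
        f = x[start:start + frame_len]
        zcr = np.mean(np.abs(np.diff(np.sign(f)))) / 2.0   # approx. crossings per sample
        energy = np.mean(f ** 2)
        frames.append((zcr, energy))
    return np.array(frames)

def classify_voiced_unvoiced(x, zcr_ref=(0.05, 0.35), en_ref=(1.0, 0.05)):
    """Label each frame voiced/unvoiced by distance to reference (zcr, energy) pairs.

    Voiced speech tends to have low ZCR and high energy, unvoiced the opposite.
    Reference points and frame size are illustrative assumptions only.
    """
    feats = short_time_features(x)
    en = feats[:, 1] / (feats[:, 1].max() + 1e-12)          # normalise energy
    d_voiced = np.hypot(feats[:, 0] - zcr_ref[0], en - en_ref[0])
    d_unvoiced = np.hypot(feats[:, 0] - zcr_ref[1], en - en_ref[1])
    return np.where(d_voiced < d_unvoiced, "voiced", "unvoiced")
```

Voiced frames are expected to cluster at low zerocrossing rate and high energy, unvoiced frames at the opposite corner, which is why a simple Euclidean distance to two reference points suffices for a cheap classifier.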
Abstract:
This thesis investigated the potential use of Linear Predictive Coding in speech communication applications. A Modified Block Adaptive Predictive Coder is developed, which reduces the computational burden and complexity without sacrificing speech quality, compared to the conventional adaptive predictive coding (APC) system. To achieve this, changes in the evaluation methods have been developed. The method differs from the usual APC system in that the difference between the true and the predicted value is not transmitted. This allows the replacement of the high-order predictor in the transmitter section of a predictive coding system by a simple delay unit, which makes the transmitter quite simple. Also, the block length used in the processing of the speech signal is adjusted relative to the pitch period of the signal being processed, rather than choosing a constant length as hitherto done by other researchers. The efficiency of the newly proposed coder has been supported with results of computer simulation using real speech data. Three methods for voiced/unvoiced/silent/transition classification have been presented. The first one is based on energy, zerocrossing rate and the periodicity of the waveform. The second method uses the normalised correlation coefficient as the main parameter, while the third method utilizes a pitch-dependent correlation factor. The third algorithm, which gives the minimum error probability, has been chosen in a later chapter to design the modified coder. The thesis also presents a comparative study between the autocorrelation and the covariance methods used in the evaluation of the predictor parameters. It has been proved that the autocorrelation method is superior to the covariance method with respect to filter stability and also in an SNR sense, though the increase in gain is only small. The Modified Block Adaptive Coder applies a switching from pitch prediction to spectrum prediction when the speech segment changes from a voiced or transition region to an unvoiced region. The experiments conducted in coding, transmission and simulation used speech samples from Malayalam and English phrases. A proposal for a speaker recognition system and a phoneme identification system has also been outlined towards the end of the thesis.
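Since the abstract contrasts the autocorrelation and covariance methods for evaluating the predictor parameters, a minimal autocorrelation-method sketch (Levinson-Durbin recursion) may help; it is a generic formulation, not the thesis code, and the predictor order is an illustrative assumption. The autocorrelation formulation is the one that guarantees a stable all-pole synthesis filter.

```python
import numpy as np

def lpc_autocorrelation(frame, order=10):
    """Estimate LPC coefficients by the autocorrelation method (Levinson-Durbin).

    Returns the prediction polynomial a (with a[0] = 1) and the residual energy.
    Unlike the covariance method, this formulation yields a stable filter.
    """
    n = len(frame)
    r = np.correlate(frame, frame, mode="full")[n - 1:n + order]   # r[0..order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0] + 1e-12
    for i in range(1, order + 1):
        # Reflection coefficient from the current prediction error.
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err
        # Symmetric coefficient update (old values are used on the right-hand side).
        a[1:i + 1] += k * a[i - 1::-1][:i]
        err *= (1.0 - k * k)
    return a, err
```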
Abstract:
The standard separable two-dimensional wavelet transform has achieved great success in image denoising applications due to its sparse representation of images. However, it fails to capture efficiently the anisotropic geometric structures such as edges and contours in images, as they intersect too many wavelet basis functions and lead to a non-sparse representation. In this paper a novel denoising scheme based on a multi-directional and anisotropic wavelet transform called the directionlet transform is presented. Image denoising in the wavelet domain is extended to the directionlet domain so that the image features concentrate on fewer coefficients and more effective thresholding is possible. The image is first segmented and the dominant direction of each segment is identified to form a directional map. Then, according to the directional map, the directionlet transform is taken along the dominant direction of the selected segment. The decomposed images with directional energy are used for scale-dependent, sub-band-adaptive optimal threshold computation based on the SURE risk. This threshold is then applied to all sub-bands except the LLL sub-band. The threshold-corrected sub-bands, together with the unprocessed approximation sub-band (LLL), are given as input to the inverse directionlet algorithm to obtain the denoised image. Experimental results show that the proposed method outperforms standard wavelet-based denoising methods in terms of numerical and visual quality.
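As a rough illustration of sub-band-adaptive thresholding, the sketch below works in a standard separable wavelet domain rather than the directionlet domain, and substitutes a BayesShrink-style scale-dependent threshold for the SURE-optimal threshold used in the paper; the wavelet, decomposition level and noise-estimation rule are assumptions.

```python
import numpy as np
import pywt

def denoise_subband_adaptive(img, wavelet="db4", levels=3):
    """Sub-band-adaptive soft thresholding in a separable 2-D wavelet domain.

    The paper applies this idea in the directionlet domain along dominant
    segment directions; a plain DWT stands in here for illustration only.
    """
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    # Robust noise estimate from the finest diagonal sub-band (median/0.6745 rule).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    out = [coeffs[0]]                            # keep approximation sub-band untouched
    for detail in coeffs[1:]:
        thresholded = []
        for band in detail:
            # Scale-dependent, sub-band-adaptive threshold (BayesShrink-like rule).
            sigma_band = max(np.sqrt(max(np.var(band) - sigma**2, 0.0)), 1e-12)
            t = sigma**2 / sigma_band
            thresholded.append(pywt.threshold(band, t, mode="soft"))
        out.append(tuple(thresholded))
    return pywt.waverec2(out, wavelet)
```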
Abstract:
The super-resolution problem is an inverse problem and refers to the process of producing a high-resolution (HR) image from one or more low-resolution (LR) observations. It includes upsampling the image, thereby increasing the maximum spatial frequency, and removing the degradations that arise during image capture, namely aliasing and blurring. The work presented in this thesis is based on learning-based single-image super-resolution. In learning-based super-resolution algorithms, a training set or database of available HR images is used to construct the HR image of an image captured using an LR camera. In the training set, images are stored as patches or as coefficients of feature representations such as the wavelet transform, DCT, etc. Single-frame image super-resolution can be used in applications where a database of HR images is available. The advantage of this method is that, by skilfully creating a database of suitable training images, one can improve the quality of the super-resolved image. A new super-resolution method based on the wavelet transform is developed; it performs better than conventional wavelet-transform-based methods and standard interpolation methods. Super-resolution techniques based on a skewed anisotropic transform called the directionlet transform are developed to convert a small low-resolution image into a large high-resolution image. The super-resolution algorithm not only increases the size but also reduces the degradations that occur while capturing the image. This method outperforms the standard interpolation methods and the wavelet methods, both visually and in terms of SNR values. Artifacts such as aliasing and ringing effects are also eliminated in this method. The super-resolution methods are implemented using both critically sampled and oversampled directionlets. The conventional directionlet transform is computationally complex, hence the lifting scheme is used for the implementation of directionlets. The new single-image super-resolution method based on the lifting scheme reduces computational complexity and thereby reduces computation time. The quality of the super-resolved image depends on the type of wavelet basis used. A study is conducted to find the effect of different wavelets on the single-image super-resolution method. Finally, this new method, implemented on grey images, is extended to colour images and noisy images.
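A minimal wavelet-domain super-resolution baseline, against which learning-based and directionlet-based methods are usually compared, treats the LR image as the approximation sub-band of the unknown HR image. The sketch below shows that baseline only; the learning step of the thesis (predicting the detail sub-bands from a training database) is not implemented here, and the wavelet choice is an assumption.

```python
import numpy as np
import pywt

def wavelet_zero_padding_sr(lr_img, wavelet="db2"):
    """Minimal wavelet-domain super-resolution sketch (roughly 2x upscaling).

    The LR image is treated as the approximation (LL) sub-band of an unknown
    HR image; the missing detail sub-bands are set to zero and the inverse
    DWT reconstructs an HR estimate.  Learning-based methods instead predict
    these missing details from a training set.
    """
    zeros = np.zeros_like(lr_img, dtype=float)
    coeffs = [lr_img.astype(float), (zeros, zeros, zeros)]   # [LL, (LH, HL, HH)]
    return pywt.waverec2(coeffs, wavelet)
```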
Abstract:
We deal with the numerical solution of heat conduction problems featuring steep gradients. In order to solve the associated partial differential equation a finite volume technique is used and unstructured grids are employed. A discrete maximum principle for triangulations of a Delaunay type is developed. To capture thin boundary layers incorporating steep gradients an anisotropic mesh adaptation technique is implemented. Computational tests are performed for an academic problem where the exact solution is known as well as for a real world problem of a computer simulation of the thermoregulation of premature infants.
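To illustrate the finite-volume flux balance underlying such a discretization, here is a deliberately simplified 1-D, structured-grid, explicit sketch; the paper itself works on unstructured Delaunay-type triangulations with a discrete maximum principle and anisotropic mesh adaptation, none of which is reproduced here.

```python
import numpy as np

def fv_heat_step(T, x_faces, k, dt, q=0.0):
    """One explicit finite-volume step for 1-D heat conduction on a non-uniform grid.

    T holds cell-average temperatures, x_faces the cell interface positions,
    k the conductivity, q a volumetric source.  Boundary faces are kept
    adiabatic (zero flux) for simplicity.
    """
    dx = np.diff(x_faces)                      # cell widths
    xc = 0.5 * (x_faces[:-1] + x_faces[1:])    # cell centres
    flux = np.zeros(len(x_faces))              # conductive flux at faces
    flux[1:-1] = -k * (T[1:] - T[:-1]) / np.diff(xc)
    # Cell update: dT/dt = -(F_right - F_left)/dx + q
    return T - dt / dx * (flux[1:] - flux[:-1]) + dt * q
```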
Abstract:
This work focuses on the analysis of the influence of the environment on the relative biological effectiveness (RBE) of carbon ions at the molecular level. Due to the high relevance of RBE for medical applications, such as tumor therapy, and for radiation protection in space, DNA damage has been investigated in order to understand the biological efficiency of heavy ion radiation. The contribution of this study to radiobiology research consists in the analysis of plasmid DNA damage induced by carbon ion radiation in biochemical buffer environments, as well as in the calculation of the RBE of carbon ions at the DNA level by means of scanning force microscopy (SFM). In order to study the DNA damage, a new approach using SFM has been developed in addition to the common electrophoresis method. The SFM method allows direct visualisation and measurement of individual DNA fragments with an accuracy of several nanometres. In addition, a comparison of the results obtained by the SFM and agarose gel electrophoresis methods has been performed in the present study. Sparsely ionising radiation, such as X-rays, and densely ionising radiation, such as carbon ions, have been used to irradiate plasmid DNA in trishydroxymethylaminomethane (Tris buffer) and 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid (HEPES buffer) environments. These buffer environments exhibit different scavenging capacities for the hydroxyl radical (HO•), which is produced by the ionisation of water and plays the major role in indirect DNA damage processes. Fragment distributions have been measured by SFM over a large length range and, as expected, a significantly higher degree of DNA damage was observed with increasing dose. A higher number of double-strand breaks (DSBs) was also observed after irradiation with carbon ions compared to X-ray irradiation. The results obtained from the SFM measurements show that both types of radiation induce multiple fragmentation of the plasmid DNA in the dose range from D = 250 Gy to D = 1500 Gy. Using Tris environments at two different concentrations, a decrease of the relative biological effectiveness with increasing Tris concentration was observed, which demonstrates the radioprotective behaviour of the Tris buffer solution. In contrast, a lower scavenging capacity for all other free radicals and ions produced by the ionisation of water was registered in the case of the HEPES buffer compared to the Tris solution. This is reflected in the higher RBE values deduced from SFM and gel electrophoresis measurements after irradiation of the plasmid DNA in the 20 mM HEPES environment compared to the 92 mM Tris solution. These results show that the HEPES and Tris environments play a major role in preventing the indirect DNA damage induced by ionising radiation and in determining the relative biological effectiveness of heavy ion radiation. In general, the RBE calculated from the SFM measurements shows higher values compared to the gel electrophoresis data for plasmids irradiated in all environments. Using the large set of data obtained from the SFM measurements, it was possible to calculate the survival rate over a larger range, from 88% to 98%, while for the gel electrophoresis measurements the survival rates could be calculated only for values between 96% and 99%. While the gel electrophoresis measurements provide information only about the percentage of plasmid DNA that suffered a single DSB, SFM can count the small plasmid fragments produced by multiple DSBs induced in a single plasmid. Consequently, SFM provides more detailed information about the number of induced DSBs than gel electrophoresis, and therefore the RBE can be calculated with greater accuracy. Thus, SFM has proven to be a more precise method for characterising, at the molecular level, the DNA damage induced by ionising radiation.
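For reference, the RBE values discussed above follow the usual iso-effect definition, written here in generic notation rather than in the notation of the thesis:

```latex
\[
  \mathrm{RBE} \;=\;
  \left.\frac{D_{\text{reference}}}{D_{\text{ion}}}\right|_{\text{iso-effect}},
\]
```

i.e. the dose of the reference radiation (here X-rays) divided by the dose of carbon ions producing the same measured effect, such as the same plasmid survival fraction or DSB yield.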
Abstract:
The objects with which the hand interacts may significantly change the dynamics of the arm. How does the brain adapt control of arm movements to this new dynamic? We show that adaptation is via composition of a model of the task's dynamics. By exploring the generalization capabilities of this adaptation we infer some of the properties of the computational elements with which the brain formed this model: the elements have broad receptive fields and encode the learned dynamics as a map structured in an intrinsic coordinate system closely related to the geometry of the skeletomusculature. The low-level nature of these elements suggests that they may represent a set of primitives with which a movement is represented in the CNS.
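A hypothetical sketch of the kind of model these findings point to: the learned dynamics expressed as a weighted sum of broad Gaussian receptive fields defined over joint-space (intrinsic) coordinates. The names, widths and learning rule below are illustrative assumptions, not the authors' model.

```python
import numpy as np

def gaussian_primitive_model(centers, widths):
    """Learned arm dynamics as a weighted sum of broad Gaussian 'primitives'.

    centers: (K, d) preferred joint-space states of K primitives;
    widths: receptive-field scale (scalar).  Broad widths make an error
    experienced at one state generalise to neighbouring states.
    """
    def basis(state):
        d2 = np.sum((state - centers) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * widths ** 2))        # broad tuning curves

    def predict_force(state, weights):
        return basis(state) @ weights                   # (K,) @ (K, d_force)

    def adapt(state, force_error, weights, lr=0.05):
        # Gradient-style weight update driven by the force prediction error.
        return weights + lr * np.outer(basis(state), force_error)

    return predict_force, adapt
```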
Abstract:
Using the MIT Serial Link Direct Drive Arm as the main experimental device, various issues in trajectory and force control of manipulators were studied in this thesis. Since accurate modeling is important for any controller, issues of estimating the dynamic model of a manipulator and its load were addressed first. Practical and effective algorithms were developed from the Newton-Euler equations to estimate the inertial parameters of manipulator rigid-body loads and links. Load estimation was implemented both on a PUMA 600 robot and on the MIT Serial Link Direct Drive Arm. With the link estimation algorithm, the inertial parameters of the direct drive arm were obtained. For both the load and link estimation results, the estimated parameters are good models of the actual system for control purposes, since torques and forces can be predicted accurately from these estimated parameters. The estimated model of the direct drive arm was then used to evaluate trajectory following performance of feedforward and computed torque control algorithms. The experimental evaluations showed that dynamic compensation can greatly improve trajectory following accuracy. Various stability issues of force control were studied next. It was determined that there are two types of instability in force control. Dynamic instability, present in all of the previous force control algorithms discussed in this thesis, is caused by the interaction of a manipulator with a stiff environment. Kinematic instability is present only in the hybrid control algorithm of Raibert and Craig, and is caused by the interaction of the inertia matrix with the Jacobian inverse coordinate transformation in the feedback path. Several methods were suggested and demonstrated experimentally to solve these stability problems. The results of the stability analyses were then incorporated in implementing a stable force/position controller on the direct drive arm by the modified resolved acceleration method, using both joint torque and wrist force sensor feedback.
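The computed-torque strategy evaluated with the estimated model can be summarised in a few lines. The sketch below is a generic formulation, not the thesis implementation: the dynamics terms M(q), C(q, qd) and g(q) and the gain matrices are placeholders supplied by the caller, for example from the identified rigid-body parameters.

```python
import numpy as np

def computed_torque(q, qd, q_des, qd_des, qdd_des, M, C, g, Kp, Kv):
    """Computed-torque control law sketch for trajectory following.

    M(q), C(q, qd), g(q): estimated inertia matrix, Coriolis/centrifugal vector
    and gravity vector; Kp, Kv: diagonal position and velocity gain matrices.
    """
    e = q_des - q
    ed = qd_des - qd
    # Feedback-linearising torque: dynamics compensation plus a PD servo in joint space.
    return M(q) @ (qdd_des + Kv @ ed + Kp @ e) + C(q, qd) + g(q)
```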
Abstract:
In this paper a cell by cell anisotropic adaptive mesh technique is added to an existing staggered mesh Lagrange plus remap finite element ALE code for the solution of the Euler equations. The quadrilateral finite elements may be subdivided isotropically or anisotropically and a hierarchical data structure is employed. An efficient computational method is proposed, which only solves on the finest level of resolution that exists for each part of the domain with disjoint or hanging nodes being used at resolution transitions. The Lagrangian, equipotential mesh relaxation and advection (solution remapping) steps are generalised so that they may be applied on the dynamic mesh. It is shown that for a radial Sod problem and a two-dimensional Riemann problem the anisotropic adaptive mesh method runs over eight times faster.
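A sketch of the kind of per-cell decision such a hierarchical scheme has to make, choosing between no refinement, anisotropic subdivision in one direction, or isotropic subdivision; the directional error indicators and thresholds below are assumptions, not the indicators used in the paper.

```python
def choose_subdivision(err_x, err_y, refine_tol=1e-3, aniso_ratio=4.0):
    """Decide how to subdivide a quadrilateral cell in a hierarchical adaptive mesh.

    err_x, err_y: directional error indicators for the cell (assumed supplied by
    the solver).  Returns one of 'none', 'split_x', 'split_y', 'isotropic'.
    """
    if max(err_x, err_y) < refine_tol:
        return "none"            # cell already resolved
    if err_x > aniso_ratio * err_y:
        return "split_x"         # anisotropic: refine only across x
    if err_y > aniso_ratio * err_x:
        return "split_y"         # anisotropic: refine only across y
    return "isotropic"           # comparable errors: split in both directions
```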
Abstract:
Planning a project with proper consideration of all necessary factors, and managing it to ensure its successful implementation, face many challenges. The initial stage of planning a project for bidding is costly and time consuming, and cost and effort predictions at this stage are usually of poor accuracy. On the other hand, detailed information on previous projects may be buried in piles of archived documents, which makes it increasingly difficult to learn from previous experience. Project portfolio management has been brought into this field aiming to improve information sharing and management among different projects. However, the amount of information that can be shared is still limited to generic information. In this paper we report a recently developed software system, COBRA (Automated Project Information Sharing and Management System), which automatically generates a project plan with effort estimates of time and cost based on data collected from previously completed projects. To maximise data sharing and management among different projects, we propose a method based on product-based planning from the PRINCE2 methodology. Keywords: project management, product based planning, best practice, PRINCE2
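As a purely illustrative sketch (not the COBRA implementation), effort for a new product-based plan could be estimated by analogy with the most similar completed projects; the data layout and similarity measure are assumptions.

```python
import numpy as np

def estimate_effort(new_products, past_projects, k=3):
    """Analogy-based effort estimate for a new project plan.

    new_products: dict mapping product names (from a product-based plan) to size measures.
    past_projects: list of (products_dict, total_effort) for completed projects.
    The k nearest past projects (by product-profile distance) vote on the estimate.
    """
    names = sorted({n for p, _ in past_projects for n in p} | set(new_products))
    def vec(p):
        return np.array([p.get(n, 0.0) for n in names])
    target = vec(new_products)
    dists = [(np.linalg.norm(vec(p) - target), effort) for p, effort in past_projects]
    nearest = sorted(dists)[:k]
    return float(np.mean([effort for _, effort in nearest]))
```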
Abstract:
Flow in the world's oceans occurs at a wide range of spatial scales, from a fraction of a metre up to many thousands of kilometers. In particular, regions of intense flow are often highly localised, for example, western boundary currents, equatorial jets, overflows and convective plumes. Conventional numerical ocean models generally use static meshes. The use of dynamically-adaptive meshes has many potential advantages but needs to be guided by an error measure reflecting the underlying physics. A method of defining an error measure to guide an adaptive meshing algorithm for unstructured tetrahedral finite elements, utilizing an adjoint or goal-based method, is described here. This method is based upon a functional, encompassing important features of the flow structure. The sensitivity of this functional, with respect to the solution variables, is used as the basis from which an error measure is derived. This error measure acts to predict those areas of the domain where resolution should be changed. A barotropic wind driven gyre problem is used to demonstrate the capabilities of the method. The overall objective of this work is to develop robust error measures for use in an oceanographic context which will ensure areas of fine mesh resolution are used only where and when they are required. (c) 2006 Elsevier Ltd. All rights reserved.
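The goal-based (adjoint) error measure can be summarised in the usual dual-weighted-residual form; the notation below is generic and not taken from the paper:

```latex
\[
  F(u) - F(u_h) \;\approx\; \sum_{e} \bigl( R(u_h),\, \psi \bigr)_{e},
  \qquad
  \eta_e \;=\; \bigl| \bigl( R(u_h),\, \psi_h \bigr)_{e} \bigr|,
\]
```

where F is the goal functional encompassing the flow features of interest, R(u_h) the residual of the discrete solution, ψ the adjoint solution expressing the sensitivity of F to the solution variables, and η_e the elementwise indicator used to decide where mesh resolution should change.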
Abstract:
The relationship of the anharmonic force constants in curvilinear internal coordinates to the observed vibration-rotation spectrum of a molecule is reviewed. A simplified method of setting up the required non-linear coordinate transformations is described: this makes use of an L tensor, which is a straightforward generalization of the L matrix used in the customary description of harmonic force constant calculations. General formulae for the L tensor elements, in terms of the familiar L matrix elements, are presented. The use of non-linear symmetry coordinates and of redundancies is described. Sample calculations on the water and ammonia molecules are reported.
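One standard way of writing the non-linear transformation in question expands each curvilinear internal coordinate in the normal coordinates; the first-order coefficients are the ordinary L-matrix elements and the higher-order coefficients form the L tensor (generic notation, shown here for orientation):

```latex
\[
  R_i \;=\; \sum_{r} L_i^{\,r}\, Q_r
        \;+\; \tfrac{1}{2} \sum_{r,s} L_i^{\,rs}\, Q_r Q_s
        \;+\; \tfrac{1}{6} \sum_{r,s,t} L_i^{\,rst}\, Q_r Q_s Q_t \;+\; \cdots
\]
```

Here R_i is a curvilinear internal coordinate and the Q_r are normal coordinates; truncating after the first term recovers the usual harmonic L-matrix description.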
Abstract:
Interaction force constants between bond-stretching and angle-bending co-ordinates in polyatomic molecules have been attributed, by some authors, to changes of hybridization due to orbital-following of the bending co-ordinate, and consequent changes of bond length due to the change of hybridization. A method is described for using this model quantitatively to reduce the number of independent force constants in the potential function of a polyatomic molecule, by relating stretch-bend interaction constants to the corresponding diagonal stretching constants. It is proposed to call this model the Hybrid Orbital Force Field. The model is applied to the tetrahedral four co-ordinated carbon atom (as in methane) and to the trigonal planar three coordinated carbon atom (as in formaldehyde).
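Schematically, and only as a restatement of the idea described above rather than the paper's exact expression, the model ties each stretch-bend interaction constant to the corresponding diagonal stretching constant through the hybridization-induced change of equilibrium bond length with the bending coordinate:

```latex
\[
  f_{r\theta} \;\approx\; f_{rr}\,
  \left(\frac{\partial r_e}{\partial \theta}\right)_{\!\text{hyb}},
\]
```

where the derivative is supplied by the orbital-following, bond-hybridization argument, so that the stretch-bend constants are no longer independent parameters of the force field.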
Abstract:
The mathematical difficulties which can arise in the force constant refinement procedure for calculating force constants and normal co-ordinates are described and discussed. The method has been applied to the methyl fluoride molecule, using an electronic computer. The best values of the twelve force constants in the most general harmonic potential field were obtained to fit twenty-two independently observed experimental data, these being the six vibration frequencies, three Coriolis zeta constants and two centrifugal stretching constants D_J and D_JK, for both CH3F and CD3F. The calculations have been repeated both with and without anharmonicity corrections to the vibration frequencies. All the experimental data were weighted according to the reliability of the observations, and the corresponding standard errors and correlation coefficients of the force constants have been deduced. The final force constants are discussed briefly, and compared with previous treatments, particularly with a recent Urey-Bradley treatment for this molecule.
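The refinement procedure described here is, in essence, weighted non-linear least squares; in generic notation (not the paper's), each iteration minimises the weighted sum of squares and solves a Gauss-Newton step:

```latex
\[
  S(\mathbf{f}) \;=\; \sum_i w_i \,\bigl[\,y_i^{\text{obs}} - y_i(\mathbf{f})\,\bigr]^2,
  \qquad
  \Delta\mathbf{f} \;=\; \bigl(\mathbf{J}^{\mathsf T}\mathbf{W}\mathbf{J}\bigr)^{-1}
                          \mathbf{J}^{\mathsf T}\mathbf{W}\,\Delta\mathbf{y},
\]
```

where f is the vector of force constants, the y_i are the observables (vibration frequencies, Coriolis zeta constants, centrifugal stretching constants), J is the Jacobian of the observables with respect to the force constants, and W = diag(w_i) holds the weights; the matrix (JᵀWJ)⁻¹, scaled by the residual variance, yields the quoted standard errors and correlation coefficients.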
Abstract:
A method is discussed for imposing any desired constraint on the force field obtained in a force constant refinement calculation. The application of this method to force constant refinement calculations for the methyl halide molecules is reported. All available data on the vibration frequencies, Coriolis interaction constants and centrifugal stretching constants of CH3X and CD3X molecules were used in the refinements, but despite this apparent abundance of data it was found that constraints were necessary in order to obtain a unique solution to the force field. The results of unconstrained calculations, and of three different constrained calculations, are reported in this paper. The constrained models reported are a Urey-Bradley force field, a modified valence force field, and a constraint based on orbital-following bond-hybridization arguments developed in the following paper. The results are discussed, and compared with previous results for these molecules. The third of the above models is found to reproduce the observed data better than either of the first two, and additional reasons are given for preferring this solution to the force field for the methyl halide molecules.
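One general way to impose such constraints, written here in generic notation rather than as the paper's specific procedure, is to minimise the weighted sum of squares subject to constraint relations, either through Lagrange multipliers or by refining a reduced set of independent parameters:

```latex
\[
  \min_{\mathbf{f}} \; S(\mathbf{f})
  \quad\text{subject to}\quad g_k(\mathbf{f}) = 0,
  \qquad
  \mathcal{L}(\mathbf{f},\boldsymbol{\lambda})
    \;=\; S(\mathbf{f}) + \sum_k \lambda_k\, g_k(\mathbf{f}),
\]
```

with S the weighted sum of squares of the refinement and the g_k standing for whatever constraint relations are chosen, for example a Urey-Bradley parameterization, a modified valence force field, or the hybrid-orbital relation between stretch-bend and diagonal stretching constants.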