919 results for Method of moments algorithm


Relevance: 100.00%

Abstract:

TRIZ is a well-known tool, based on analytical methods, for creative problem solving. This thesis proposes an adapted version of the contradiction matrix, a powerful TRIZ tool, along with a few principles based on the concepts of original TRIZ. The proposed version is intended to aid problem solving, especially for problems encountered in chemical process industries involving unit operations. In addition, the thesis should help new process engineers recognize the importance of the various available methods for creative problem solving and learn the TRIZ method. The work mainly shows how a TRIZ-based method can be modified to fit one's requirements in a particular niche area and to solve problems efficiently and creatively. In this case, the contradiction matrix was developed from a review of common problems encountered in the chemical process industry, particularly in unit operations, and its resolutions are based on approaches used in the past to handle those issues.
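
Since the adapted matrix is, at its core, a lookup from a contradiction (an improving versus a worsening parameter) to candidate inventive principles, a minimal sketch can illustrate the mechanics. All parameter names, principle numbers and cell contents below are hypothetical placeholders, not the matrix developed in the thesis.

```python
# Toy contradiction-matrix lookup. Rows/columns are engineering parameters;
# each cell lists candidate inventive principles for that contradiction.
# Entries here are invented for illustration only.
MATRIX = {
    ("separation efficiency", "energy consumption"): [2, 15, 35],
    ("throughput", "product purity"): [1, 10, 28],
}

PRINCIPLES = {
    1: "Segmentation", 2: "Taking out", 10: "Preliminary action",
    15: "Dynamics", 28: "Mechanics substitution", 35: "Parameter changes",
}

def suggest(improving: str, worsening: str) -> list[str]:
    """Return candidate inventive principles for a contradiction pair."""
    ids = MATRIX.get((improving, worsening), [])
    return [PRINCIPLES[i] for i in ids]

print(suggest("separation efficiency", "energy consumption"))
```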

Relevance: 100.00%

Abstract:

Tool center point calibration is a known problem in industrial robotics. The major focus of academic research is to enhance the accuracy and repeatability of next-generation robots. However, operators of currently available robots work within the limits of the robot's repeatability and require calibration methods suitable for these basic applications. This study was conducted in association with Stresstech Oy, which provides solutions for manufacturing quality control. Their sensor, based on the Barkhausen noise effect, requires accurate positioning, which poses a tool center point calibration problem when measurements are executed with an industrial robot. Multiple products are available on the market for automatic tool center point calibration: manufacturers provide customized calibrators for most robot types and tools. With the handmade sensors and multiple robot types that Stresstech uses, however, this would require a great deal of labor. This thesis introduces a calibration method that is suitable for any robot with two free digital input ports. It builds on the traditional method of using a light barrier to detect the tool in the robot coordinate system, but utilizes two parallel light barriers to simultaneously measure and detect the center axis of the tool. Rotations about two axes are defined from the center axis; the last rotation, about the Z-axis, is calculated for tools that have different widths along the X- and Y-axes. The results indicate that this method is suitable for calibrating the geometric tool center point of a Barkhausen noise sensor. In the repeatability tests, a standard deviation within the robot's repeatability was achieved. The Barkhausen noise signal was also evaluated after recalibration, and the results indicate correct calibration. However, future studies should be conducted using a more accurate manipulator, since the method employs the robot itself as a measuring device.
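
To illustrate the geometry, the sketch below reconstructs a tool center axis from two crossing points, one per light barrier, and derives the two rotations that the abstract says are defined by that axis. The coordinates and frame conventions are invented for illustration; the actual measurement and calibration procedure in the thesis is more involved.

```python
import numpy as np

# Hypothetical points where the tool axis crosses each of the two parallel
# light barriers, expressed in the robot base frame (units: mm).
p_barrier1 = np.array([400.0, 10.0, 200.0])
p_barrier2 = np.array([401.5, 11.0, 250.0])

# The tool center axis is the line through the two detected points.
axis = p_barrier2 - p_barrier1
axis /= np.linalg.norm(axis)

# Two rotations tilt the nominal tool Z-axis onto the measured axis;
# the rotation about Z is left for tools that are not rotationally symmetric.
rx = np.arctan2(-axis[1], axis[2])                    # tilt in the Y-Z plane
ry = np.arctan2(axis[0], np.hypot(axis[1], axis[2]))  # tilt toward X

print(f"axis direction: {axis}")
print(f"Rx = {np.degrees(rx):.3f} deg, Ry = {np.degrees(ry):.3f} deg")
```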

Relevance: 100.00%

Abstract:

The objectives of the present study were 1) to compare results obtained by the traditional manual method of measuring heart rate (HR) and heart rate response (HRR) to the Valsalva maneuver, standing and deep breathing, with those obtained using a computerized data analysis system attached to a standard electrocardiograph machine; 2) to standardize the responses of healthy subjects to cardiovascular tests, and 3) to evaluate the response to these tests in a group of patients with diabetes mellitus (DM). In all subjects (97 healthy and 143 with DM) we evaluated HRR to deep breathing, HRR to standing, HRR to the Valsalva maneuver, and blood pressure response (BPR) to standing up and to a sustained handgrip. Since there was a strong positive correlation between the results obtained with the computerized method and the traditional method, we conclude that the new method can replace the traditional manual method for evaluating cardiovascular responses with the advantages of speed and objectivity. HRR and BPR of men and women did not differ. A correlation between age and HRR was observed for standing (r = -0.48, P<0.001) and deep breathing (r = -0.41, P<0.002). Abnormal BPR to standing was usually observed only in diabetic patients with definite and severe degrees of autonomic neuropathy.
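
The heart rate responses in such tests are conventionally reduced to simple R-R interval ratios. The sketch below shows the standard Valsalva and 30:15 standing indices as commonly defined in the autonomic-testing literature; the paper's exact computational definitions and windows may differ, and the interval data here are invented.

```python
import numpy as np

def valsalva_ratio(rr_during_s, rr_after_s):
    """Valsalva ratio: longest R-R interval after the maneuver divided by
    the shortest R-R interval during it (standard bedside definition)."""
    return max(rr_after_s) / min(rr_during_s)

def ratio_30_15(rr_after_standing_s):
    """Standing (30:15) ratio: longest R-R interval around beat 30 divided
    by the shortest around beat 15 after standing up."""
    rr = np.asarray(rr_after_standing_s)
    return rr[25:35].max() / rr[10:20].min()

# Hypothetical R-R intervals in seconds, for illustration only.
during = [0.60, 0.58, 0.55, 0.57]
after = [0.95, 1.02, 0.98]
print(f"Valsalva ratio: {valsalva_ratio(during, after):.2f}")
```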

Relevance: 100.00%

Abstract:

In the last decades, the chemical synthesis of short oligonucleotides has become an important subject of study due to the discovery of new functions for nucleic acids such as antisense oligonucleotides (ASOs), aptamers, DNAzymes, microRNA (miRNA) and small interfering RNA (siRNA). Their applications in modern therapies and fundamental medicine, in the treatment of various cancers, viral infections and genetic disorders, have established the need for scalable methods for their cheaper and easier industrial manufacture. While small-scale solid-phase oligonucleotide synthesis is the method of choice in the field, various challenges remain in the production of short DNA and RNA oligomers in very large quantities. Solution-phase synthesis of oligonucleotides, on the other hand, offers more predictable scaling-up and is amenable to standard industrial manufacturing techniques. In the present thesis, various protocols for the synthesis of short DNA and RNA oligomers have been studied on peracetylated and methylated β-cyclodextrin supports, and also on a pentaerythritol-derived support. With the peracetylated and methylated β-cyclodextrin soluble supports, the coupling cycle was simplified by replacing the typical 5′-O-(4,4′-dimethoxytrityl) protecting group with an acid-labile acetal-protected 5′-O-(1-methoxy-1-methylethyl) group, which upon acid-catalyzed methanolysis releases easily removable volatile products. For this purpose, the monomeric 5′-O-(1-methoxy-1-methylethyl) 3′-(2-cyanoethyl-N,N-diisopropylphosphoramidite) building blocks were synthesized. Alternatively, for the precipitative pentaerythritol support, novel 2′-O-(2-cyanoethyl)-5′-O-(1-methoxy-1-methylethyl) protected phosphoramidite building blocks for RNA synthesis were prepared, and their applicability was demonstrated by the synthesis of a pentamer. Similarly, a method for the preparation of short RNAs from commercially available 5′-O-(4,4′-dimethoxytrityl)-2′-O-(tert-butyldimethylsilyl)ribonucleoside 3′-(2-cyanoethyl-N,N-diisopropylphosphoramidite) building blocks has been developed.

Relevance: 100.00%

Abstract:

Thermal cutting methods are commonly used in the manufacture of metal parts. Thermal cutting processes separate materials using heat, with or without a stream of cutting oxygen; common processes are oxygen, plasma and laser cutting. Which cutting method is used depends on the application and the material. Numerically controlled thermal cutting is a cost-effective way of prefabricating components, and one design aim is to minimize the number of work steps in order to increase competitiveness. As a result, the holes and openings in plate parts manufactured today are made using thermal cutting methods. From a fatigue life perspective this is a problem, because there is a local detail, in the as-welded state, that raises the stress in a local area of the plate. In a case where the static utilization of a net section is fully used, the calculated linear local stresses and stress ranges are often over 2 times the material yield strength, and the shakedown criteria are exceeded. Fatigue life assessment of flame-cut details is commonly based on the nominal stress method. For welded details, design standards and instructions provide more accurate and flexible methods, e.g. the hot-spot method, but these methods are not universally applied to flame-cut edges. Laboratory fatigue tests of flame-cut edges indicated that fatigue life estimates based on the standard nominal stress method can be quite conservative when a high notch factor is present. This is undesirable and limits the potential for minimizing structure size and total costs. A new calculation method is introduced to improve the accuracy of the theoretical fatigue life prediction for a flame-cut edge with a high stress concentration factor. Simple equations were derived from laboratory fatigue test results, which are published in this work. The proposed method, called the modified FAT method (FATmod), takes into account the residual stress state, surface quality, material strength class and true stress ratio at the critical location.
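
For context, the nominal stress method that FATmod refines boils down to an S-N (FAT class) calculation. The sketch below shows that baseline estimate with generic placeholder values; FATmod's corrections for residual stress, surface quality, strength class and stress ratio are not reproduced here.

```python
def fatigue_life_cycles(stress_range_mpa: float,
                        fat_class_mpa: float = 160.0,
                        slope_m: float = 3.0) -> float:
    """Nominal stress S-N estimate: a detail of FAT class C endures
    2e6 cycles at stress range C, and life scales as (C / dsigma)^m.
    FAT 160 and m = 3 are generic placeholder values, not the thesis's
    FATmod parameters."""
    return 2.0e6 * (fat_class_mpa / stress_range_mpa) ** slope_m

# Example: a flame-cut edge detail under a 200 MPa stress range.
print(f"{fatigue_life_cycles(200.0):,.0f} cycles")
```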

Relevance: 100.00%

Abstract:

Laser scribing is currently a growing material processing method in industry; its benefits are being studied, for example, for improving the efficiency of solar cells. Due to the high quality requirements of the fast scribing process, it is important to monitor the process in real time to detect possible defects as they occur. However, there is a lack of studies on real-time monitoring of laser scribing. Commonly used monitoring methods developed for other laser processes, such as laser welding, are too slow, and existing applications cannot be transferred to fast laser scribing monitoring. The aim of this thesis is to find a method for monitoring laser scribing with a high-speed camera and to evaluate the reliability and performance of the developed monitoring system experimentally. The laser used in the experiments is an IPG ytterbium pulsed fiber laser with 20 W maximum average power, and the scan head is a Scanlab Hurryscan 14 II with an f100 telecentric lens. The camera was connected to the laser scanner using a camera adapter so that it follows the laser process, and a powerful, fully programmable industrial computer was chosen to execute the image processing and analysis. Algorithms for defect analysis, based on particle analysis, were developed using LabVIEW system design software. The performance of the algorithms was measured by analyzing a static image of the scribing line with a resolution of 960×20 pixels; the maximum analysis speed was 560 frames per second. The reliability of the algorithm was evaluated by imaging a scribing path containing a variable number of defects at 2000 mm/s with the laser turned off, at an image analysis speed of 430 frames per second. The experiment was successful: the algorithms detected all defects on the scribing path. A final monitoring experiment was performed during an actual laser process. However, it was challenging to make the active laser illumination work with the laser scanner due to the physical dimensions of the laser lens and the scanner; for reliable defect detection, the illumination system needs to be replaced.
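
The thesis implements its particle analysis in LabVIEW; as a rough analogue, the Python/OpenCV sketch below shows the same idea of thresholding a scribe-line frame and treating connected bright blobs above a size limit as defect candidates. The threshold strategy and minimum area are illustrative choices, not the thesis's tuned parameters.

```python
import cv2
import numpy as np

def find_defects(frame: np.ndarray, min_area: int = 5) -> list[tuple]:
    """Particle-analysis style defect detection on one scribe-line frame:
    threshold the image, then treat connected bright blobs above a size
    limit as defects. Otsu thresholding and min_area are illustrative."""
    _, binary = cv2.threshold(frame, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    defects = []
    for i in range(1, n):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            defects.append(tuple(centroids[i]))
    return defects

# Synthetic 20x960 frame with one bright blob standing in for a defect.
frame = np.zeros((20, 960), np.uint8)
frame[8:12, 300:310] = 255
print(find_defects(frame))
```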

Relevance: 100.00%

Abstract:

Second-rank tensor interactions, such as quadrupolar interactions between the spin-1 deuterium nuclei and the electric field gradients created by chemical bonds, are affected by rapid random molecular motions that modulate the orientation of the molecule with respect to the external magnetic field. In biological and model membrane systems, where a distribution of dynamically averaged anisotropies (quadrupolar splittings, chemical shift anisotropies, etc.) is present and where, in addition, various parts of the sample may undergo a partial magnetic alignment, the numerical analysis of the resulting Nuclear Magnetic Resonance (NMR) spectra is a mathematically ill-posed problem. However, numerical methods (de-Pakeing, Tikhonov regularization) exist that allow for a simultaneous determination of both the anisotropy and orientational distributions. An additional complication arises when relaxation is taken into account. This work presents a method of obtaining the orientation dependence of the relaxation rates that can be used for the analysis of molecular motions on a broad range of time scales. An arbitrary set of exponential decay rates is described by a three-term truncated Legendre polynomial expansion in the orientation dependence, as appropriate for a second-rank tensor interaction, and a linear approximation to the individual decay rates is made. Thus a severe numerical instability caused by the presence of noise in the experimental data is avoided. At the same time, enough flexibility in the inversion algorithm is retained to achieve a meaningful mapping from raw experimental data to a set of intermediate, model-free parameters.
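
For a second-rank tensor interaction the orientation dependence contains only even Legendre terms up to fourth order, so the three-term truncation mentioned above presumably takes the form

```latex
% Three-term (even-order) Legendre expansion of an orientation-dependent
% relaxation rate, as appropriate for a second-rank tensor interaction.
\[
  R(\theta) \;\approx\; c_0 \, P_0(\cos\theta)
            + c_2 \, P_2(\cos\theta)
            + c_4 \, P_4(\cos\theta),
\qquad
  P_2(x) = \tfrac{1}{2}(3x^2 - 1), \quad
  P_4(x) = \tfrac{1}{8}(35x^4 - 30x^2 + 3),
\]
```

with the coefficients c_0, c_2, c_4 acting as the fit parameters.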

Relevance: 100.00%

Abstract:

We offer an axiomatization of the serial cost-sharing method of Friedman and Moulin (1999). The key property in our axiom system is Group Demand Monotonicity, asking that when a group of agents raise their demands, not all of them should pay less.
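
For readers unfamiliar with the rule, the sketch below computes serial cost shares for the classic one-good (Moulin-Shenker) case; Friedman and Moulin (1999) work in a more general setting, which this toy example does not reproduce.

```python
def serial_cost_shares(demands, cost):
    """Serial cost shares for a one-good cost function: agents pay for
    successive consumption layers, each layer split equally among the
    agents still active in it."""
    order = sorted(range(len(demands)), key=lambda i: demands[i])
    q = [demands[i] for i in order]
    n = len(q)
    shares = [0.0] * n
    prev_cost, acc = 0.0, 0.0
    for k in range(n):
        # s_k: everyone still "active" is capped at the k-th smallest demand.
        s_k = sum(q[: k + 1]) + (n - k - 1) * q[k]
        acc += (cost(s_k) - prev_cost) / (n - k)
        prev_cost = cost(s_k)
        shares[order[k]] = acc
    return shares

# Example: quadratic cost, three agents; shares sum to cost(1 + 2 + 3) = 36.
print(serial_cost_shares([1.0, 2.0, 3.0], lambda y: y * y))
```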

Relevance: 100.00%

Abstract:

We present a novel approach to computing the orientation moments and rheological properties of a dilute suspension of spheroids in simple shear flow at arbitrary Peclet number, based on a generalised Langevin equation method. This method differs from the diffusion equation method commonly used to model similar systems in that the actual equations of motion for the orientations of the individual particles are used in the computations, instead of a solution of the diffusion equation of the system. It also differs from Brownian dynamics simulations in that the equations used are deterministic differential equations even in the presence of noise, not stochastic differential equations. One advantage of the present approach over the Fokker-Planck equation formalism is that it employs a common strategy that can be applied across a wide range of shear and diffusion parameters. Also, since deterministic differential equations are easier to simulate than stochastic ones, the Langevin equation method presented in this work is more efficient and less computationally intensive than Brownian dynamics simulations. We derive the Langevin equations governing the orientations of the particles in the suspension and develop a procedure for obtaining the equation of motion for any orientation moment. A computational technique is described for simulating the orientation moments dynamically from a set of time-averaged Langevin equations, which can be used to obtain the moments when the governing equations are hard to solve analytically. The results obtained using this method are in good agreement with those available in the literature. The computational method is also used to investigate the effect of rotational Brownian motion on the rheology of the suspension under the action of an external force field, assumed to be either constant or periodic. In the case of constant external fields, earlier results in the literature are reproduced, while for periodic forcing certain parametric regimes corresponding to weak Brownian diffusion are identified where the rheological parameters evolve chaotically and settle onto a low-dimensional attractor. The response of the system to variations in the magnitude and orientation of the force field and the strength of diffusion is also analyzed through numerical experiments. It is also demonstrated that the aperiodic behaviour exhibited by the system could not have been picked up by the diffusion equation approach as presently used in the literature. The main contributions of this work include the preparation of the basic framework for applying the Langevin method to standard flow problems, the quantification of rotary Brownian effects using the new method, the paired-moment scheme for computing the moments and its use in solving an otherwise intractable problem, especially in the limit of small Brownian motion where the problem becomes singular, and a demonstration of how systems governed by a Fokker-Planck equation can be explored for possible chaotic behaviour.
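
The deterministic core of such a particle-level simulation is the evolution equation for each spheroid's orientation; in simple shear without noise this is Jeffery's equation. The sketch below integrates it with forward Euler for a single particle; the Brownian terms and the time-averaging machinery of the paper's Langevin formulation are omitted.

```python
import numpy as np

def jeffery_rhs(p, gamma_dot=1.0, aspect_ratio=5.0):
    """Jeffery's equation for the orientation p of a spheroid in simple
    shear u = (gamma_dot * y, 0, 0); the Brownian terms of the Langevin
    formulation are omitted in this deterministic sketch."""
    L = np.zeros((3, 3))
    L[0, 1] = gamma_dot                  # velocity gradient du_x/dy
    E = 0.5 * (L + L.T)                  # rate-of-strain tensor
    W = 0.5 * (L - L.T)                  # vorticity tensor
    lam = (aspect_ratio**2 - 1) / (aspect_ratio**2 + 1)
    return W @ p + lam * (E @ p - (p @ E @ p) * p)

# Forward-Euler integration of one Jeffery orbit, renormalizing |p| = 1.
p = np.array([0.0, 1.0, 0.1])
p /= np.linalg.norm(p)
dt = 1e-3
for _ in range(20000):
    p = p + dt * jeffery_rhs(p)
    p /= np.linalg.norm(p)
print(p)  # orientation after 20 time units of shear
```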

Relevance: 100.00%

Abstract:

Cerebral glioma is the most prevalent primary brain tumor; gliomas are classified broadly into low and high grades according to the degree of malignancy. High-grade gliomas are highly malignant and carry a poor prognosis, with patients surviving less than eighteen months after diagnosis; low-grade gliomas are slow growing, less malignant and respond better to therapy. To date, histological grading is the standard technique for diagnosis, treatment planning and survival prediction. The main objective of this thesis is to propose novel methods for the automatic extraction of low- and high-grade glioma and other brain tissues, grade detection techniques for glioma using conventional magnetic resonance imaging (MRI) modalities, and 3D modelling of glioma from segmented tumor slices in order to assess tumor growth rate. Two new methods were developed for extracting tumor regions; the second, named the Adaptive Gray level Algebraic set Segmentation Algorithm (AGASA), can also extract white matter and grey matter from T1 FLAIR and T2-weighted images. The methods were validated against manual ground-truth images, showing promising results, compared with the widely used fuzzy c-means clustering technique, and checked for robustness at different noise levels. Image texture can provide significant information on the (ab)normality of tissue, and this thesis extends this idea to tumor texture grading and detection. Based on thresholds of discriminant first-order and gray level co-occurrence matrix (GLCM) based second-order statistical features, three feature sets were formulated and a decision system was developed for grade detection of glioma from the conventional T2-weighted MRI modality. Quantitative performance analysis using the ROC curve showed 99.03% accuracy for distinguishing between advanced (aggressive) and early-stage (non-aggressive) malignant glioma. The developed brain texture analysis techniques can improve the physician's ability to detect and analyse pathologies, leading to more reliable diagnosis and treatment. The segmented tumors were also used for volumetric modelling, which can provide an idea of the tumor growth rate and can be used for assessing response to therapy and patient prognosis.
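
As an illustration of the kind of features involved, the sketch below computes first-order statistics and GLCM-based second-order features for a tumor region of interest using scikit-image. The specific features and any decision thresholds are generic examples, not the thesis's three feature sets.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(roi: np.ndarray) -> dict:
    """First-order statistics plus GLCM-based second-order features for a
    tumor region of interest (uint8 grayscale)."""
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {
        "mean": float(roi.mean()),        # first-order
        "variance": float(roi.var()),
        "contrast": float(graycoprops(glcm, "contrast").mean()),
        "homogeneity": float(graycoprops(glcm, "homogeneity").mean()),
        "energy": float(graycoprops(glcm, "energy").mean()),
    }

# A threshold-based grade decision could then look like, e.g.:
# grade = "high" if feats["contrast"] > T_CONTRAST else "low"
roi = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(texture_features(roi))
```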

Relevance: 100.00%

Abstract:

In this paper a method of copy detection in short Malayalam text passages is proposed. Given two passages, one as the source text and the other as the suspect text, the method determines whether the second passage is a plagiarized version of the source. An algorithm for plagiarism detection using the n-gram model for word retrieval was developed, and trigrams were found to be the best model for comparing Malayalam text. Based on the probability and resemblance measures calculated from the n-gram comparison, the text is categorized against a threshold. Texts are compared by variable-length n-gram (n = {2, 3, 4}) comparisons. The experiments show that the trigram model gives acceptable average performance at an affordable computational cost.
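
A minimal sketch of the trigram comparison: the resemblance here is a Jaccard-style set overlap, which is one common choice; the paper's exact probability and resemblance formulas, and its threshold, may differ.

```python
def ngrams(words, n=3):
    """Word-level n-grams of a token list, as a set."""
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def resemblance(source: str, suspect: str, n: int = 3) -> float:
    """Jaccard-style resemblance between the n-gram sets of two passages."""
    a, b = ngrams(source.split(), n), ngrams(suspect.split(), n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def is_copied(source, suspect, threshold=0.3):
    # threshold is an illustrative value, not the paper's tuned one
    return resemblance(source, suspect) >= threshold

src = "the quick brown fox jumps over the lazy dog"
sus = "the quick brown fox leaps over the lazy dog"
print(resemblance(src, sus), is_copied(src, sus))
```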

Relevance: 100.00%

Abstract:

While channel coding is a standard method of improving a system's energy efficiency in digital communications, its practice does not extend to high-speed links. Increasing demands on network speeds are placing a large burden on the energy efficiency of high-speed links and render the benefit of channel coding for these systems a timely subject. The low error rates of interest and the presence of residual intersymbol interference (ISI) caused by hardware constraints impede the analysis and simulation of coded high-speed links. Focusing on the residual ISI and combined noise as the dominant error mechanisms, this paper analyses error correlation through the concepts of error region, channel signature, and correlation distance. This framework provides a deeper insight into joint error behaviours in high-speed links, extends the range of statistical simulation for coded high-speed links, and provides a case against the use of biased Monte Carlo methods in this setting.
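
As a toy illustration of why residual ISI correlates neighboring errors (not the paper's framework), the sketch below enumerates the interfering bit in a one-tap residual-ISI channel and shows how the error probability of a bit depends on its neighbor, with the worst-case pattern playing the role of a channel signature. The tap weight and noise level are placeholder values.

```python
from math import erfc, sqrt

A, SIGMA = 0.3, 0.2   # residual ISI tap and noise std (placeholder values)

def q(x: float) -> float:
    """Gaussian tail probability Q(x)."""
    return 0.5 * erfc(x / sqrt(2.0))

# Channel: y_k = x_k + A * x_{k-1} + n_k, with x in {-1, +1} and a sign
# detector. For x_k = +1 the noise margin is 1 + A * x_{k-1}, so the error
# rate of bit k depends on bit k-1: that dependence is what correlates
# neighboring errors at low error rates.
for prev in (+1, -1):
    margin = 1.0 + A * prev
    print(f"x_prev = {prev:+d}: P(error) = {q(margin / SIGMA):.3e}")
# The opposing pattern (x_prev = -1) dominates: a toy "channel signature".
```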

Relevance: 100.00%

Abstract:

Reinforcement Learning (RL) refers to a class of learning algorithms in which the learning system learns which action to take in different situations, using a scalar evaluation received from the environment after performing an action. RL has been successfully applied to many multi-stage decision-making problems (MDPs), where at each stage the learning system decides which action to take. The Economic Dispatch (ED) problem is an important scheduling problem in power systems: it decides the amount of generation to be allocated to each generating unit so that the total cost of generation is minimized without violating system constraints. In this paper we formulate the economic dispatch problem as a multi-stage decision-making problem and develop an RL-based algorithm to solve it. The performance of our algorithm is compared with other recent methods. The main advantage of our method is that it can learn the schedule for all possible demands simultaneously.
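
A minimal sketch of such a stage-wise formulation: units are scheduled one per stage, the state is the demand left to allocate, and a tabular Q-learning agent (a generic choice; the paper's exact algorithm, cost data and discretization are not reproduced here) learns allocations for a range of demands at once.

```python
import random
from collections import defaultdict

# Hypothetical quadratic unit cost curves and a coarse discretization.
UNIT_COSTS = [lambda p: 0.10 * p * p + 2.0 * p,
              lambda p: 0.05 * p * p + 4.0 * p]
LEVELS = list(range(0, 101, 10))          # discrete unit outputs (MW)
N_UNITS = len(UNIT_COSTS)

Q = defaultdict(float)                    # Q[(stage, remaining, action)]
ALPHA, EPS = 0.1, 0.2

for episode in range(50_000):
    remaining = random.choice(range(50, 151, 10))   # all demands at once
    for k in range(N_UNITS):
        acts = [p for p in LEVELS if p <= remaining]
        if random.random() < EPS:                   # epsilon-greedy action
            p = random.choice(acts)
        else:
            p = max(acts, key=lambda a: Q[(k, remaining, a)])
        reward = -UNIT_COSTS[k](p)                  # reward = negative cost
        if k == N_UNITS - 1 and p != remaining:
            reward -= 1e6                           # demand must be met
        nxt = remaining - p
        best_next = 0.0
        if k + 1 < N_UNITS:
            best_next = max(Q[(k + 1, nxt, a)] for a in LEVELS if a <= nxt)
        key = (k, remaining, p)
        Q[key] += ALPHA * (reward + best_next - Q[key])
        remaining = nxt

# Greedy schedule for a 100 MW demand after training.
rem, plan = 100, []
for k in range(N_UNITS):
    p = max((a for a in LEVELS if a <= rem), key=lambda a: Q[(k, rem, a)])
    plan.append(p)
    rem -= p
print(plan)
```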

Relevance: 100.00%

Abstract:

In this paper an attempt is made to accurately determine the number of Premature Ventricular Contraction (PVC) cycles in a given Electrocardiogram (ECG) using a wavelet constructed from multiple Gaussian functions. It is difficult to assess the ECGs of patients who are continuously monitored over a long period of time, so the proposed classification method can help doctors determine the severity of PVC in a patient. Principal Component Analysis (PCA) and a simple classifier are used in addition to the specially developed wavelet transform. The proposed wavelet is designed as a sum of multiple Gaussian functions that together resemble a normal ECG cycle; the number of Gaussians used depends on the number of peaks present in a normal ECG. The developed wavelet satisfies all the properties of a traditional continuous wavelet, and was optimized using a genetic algorithm (GA). ECG records from the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) database were used for validation. Of the 8694 ECG cycles used for evaluation, the classification algorithm responded with an accuracy of 97.77%. To compare the performance of the new wavelet, classification was also performed using standard wavelets such as Morlet, Meyer, bior3.9, db5, db3, sym3 and Haar; the new wavelet outperformed them all.
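
The construction itself is easy to sketch: sum a few Gaussians whose peaks mimic an ECG cycle, then enforce the zero-mean and unit-energy conditions a mother wavelet needs. The centers, amplitudes and widths below are illustrative, whereas the paper tunes its wavelet with a genetic algorithm.

```python
import numpy as np

def ecg_like_wavelet(t):
    """Candidate mother wavelet built as a sum of Gaussians whose peaks
    mimic an ECG cycle (centers/amplitudes/widths in seconds are
    illustrative, not the paper's GA-optimized values)."""
    peaks = [(-0.20, 0.10, 0.050),   # P-like bump
             (0.00, 1.00, 0.020),    # dominant R-like peak
             (0.25, 0.15, 0.060)]    # T-like bump
    psi = sum(a * np.exp(-((t - c) ** 2) / (2.0 * w ** 2))
              for c, a, w in peaks)
    psi = psi - psi.mean()                    # zero mean (admissibility)
    return psi / np.sqrt(np.sum(psi ** 2))    # unit energy

def cwt(signal, scales, dt):
    """Naive continuous wavelet transform using the wavelet above."""
    t = (np.arange(len(signal)) - len(signal) // 2) * dt
    return np.array([np.convolve(signal,
                                 ecg_like_wavelet(t / s)[::-1],
                                 mode="same") / np.sqrt(s)
                     for s in scales])

sig = np.sin(np.linspace(0.0, 20.0, 500))     # stand-in for an ECG trace
coeffs = cwt(sig, scales=[0.5, 1.0, 2.0], dt=0.004)
print(coeffs.shape)                           # (n_scales, n_samples)
```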

Relevance: 100.00%

Abstract:

As the technologies for the fabrication of high-quality microarrays advance rapidly, quantification of microarray data becomes a major task. Gridding is the first step in the analysis of microarray images, locating the subarrays and the individual spots within each subarray. For accurate gridding of high-density microarray images in the presence of contamination and background noise, precise calculation of parameters is essential. This paper presents an accurate, fully automatic gridding method that locates subarrays and individual spots using the intensity projection profile of the most suitable subimage. The method processes the image without any user intervention and, unlike many other commercial and academic packages, does not demand any input parameters. According to the results obtained, the accuracy of our algorithm is between 95% and 100% for microarray images with a coefficient of variation less than two. Experimental results show that the method is capable of gridding microarray images with irregular spots, varying surface intensity distribution and more than 50% contamination.
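
The projection-profile idea is straightforward to sketch: spot rows and columns project to peaks, and the gaps between them to valleys, which mark the grid boundaries. The sketch below shows only this core step; the paper's subimage selection, parameter estimation and contamination handling are not reproduced.

```python
import numpy as np
from scipy.signal import find_peaks

def grid_lines(image: np.ndarray, axis: int):
    """Locate grid separators from an intensity projection profile:
    valleys of the profile correspond to gaps between spot rows/columns.
    The minimum separation (distance=5) is an illustrative value."""
    profile = image.sum(axis=axis).astype(float)
    valleys, _ = find_peaks(-profile, distance=5)
    return valleys

# Synthetic image: a 4x6 grid of bright spots on a dark background.
img = np.zeros((80, 120))
for r in range(10, 80, 20):
    for c in range(10, 120, 20):
        img[r - 3:r + 4, c - 3:c + 4] = 1.0

rows = grid_lines(img, axis=1)   # horizontal separators (row profile)
cols = grid_lines(img, axis=0)   # vertical separators (column profile)
print(rows, cols)
```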