918 results for maximum contrast analysis


Relevance: 30.00%

Abstract:

Acute lymphoblastic leukemia (ALL) is the most common pediatric cancer. It is the leading cause of cancer-related mortality in children, owing to a subgroup of patients who do not respond to treatment. Patients may also suffer several toxicities associated with intensive chemotherapy. Pharmacogenetic studies by our group have shown a correlation, both individually and in combination, between particular genetic variants of folate-dependent enzymes, notably dihydrofolate reductase (DHFR) and thymidylate synthase (TS), the principal targets of methotrexate (MTX), and an elevated risk of relapse in ALL patients. In addition, variations in the ATF5 gene, which is involved in the regulation of asparagine synthetase (ASNS), are associated with a higher risk of relapse or with ASNase-dependent toxicity in patients who received E. coli asparaginase (ASNase). The main goal of my thesis project is to better understand, from a functional standpoint, the role of genetic variation in the therapeutic response of ALL patients, focusing on two major components of ALL treatment: MTX and ASNase. My specific objective was to analyze an association found in clinical settings through cell proliferation assays on lymphoblastoid cell lines (LCLs, n=93) and an ALL xenograft mouse model. A genetic variation in the TS polymorphism (homozygosity for the triple-repeat allele, 3R) and the DHFR *1b haplotype (defined by a particular combination of derived alleles at six polymorphic sites in the major and minor DHFR promoters), and their effects on MTX sensitivity, were evaluated through cell proliferation assays. Similar in vitro assays of the response to E. coli ASNase were used to assess the effect of the T1562C variation in the 5'UTR of ATF5 and of particular haplotypes of the ASNS gene (defined by two genetic variations and arbitrarily called haplotype *1). The xenograft mouse model was used to evaluate the effect of the TS 3R3R genotype. Analysis of additional polymorphisms in the ASNS gene revealed a diversification of haplotype *1 into five subtypes, defined by two polymorphisms (rs10486009 and rs6971012), that correlated with in vitro ASNase sensitivity; one of them (rs10486009) appears particularly important in reducing in vitro ASNase sensitivity and may explain the reduced sensitivity of haplotype *1 in clinical settings. No association between ATF5 T1562C and cell proliferation in response to E. coli ASNase was detected. We did not detect any genotype-related association in the in vitro analysis of MTX sensitivity. In contrast, in vivo results from the xenograft mouse model showed a dose-dependent relationship between the TS 3R/3R genotype and resistance to MTX treatment. These results provide an explanation for the significantly higher risk of relapse seen in patients with the TS 3R/3R genotype and suggest that these patients could receive an increased MTX dose.
Through these experiments, we also demonstrated that xenograft mouse models can serve as a preclinical tool for exploring individualized treatment options. In conclusion, the knowledge gained through my thesis project has confirmed and/or identified several variants in the MTX and ASNase pathways of action that could facilitate the implementation of dose-individualization strategies, allowing selection of an optimal treatment or modulation of therapy based on individual genetics.

Relevance: 30.00%

Abstract:

Gait analysis has recently emerged as one of the most important medical fields. Marker-based systems are the most favored methods for human movement assessment and gait analysis; however, these systems require specific equipment and expertise, and they are cumbersome, costly, and difficult to use. Many recent computer vision approaches have been developed to reduce the cost of motion capture systems while ensuring high-precision results. In this thesis, we present our new low-cost gait analysis system, composed of two monocular video cameras placed on the left and right sides of a treadmill. A 2D model of each half of the human skeleton is reconstructed from each view based on dynamic color segmentation, and gait analysis is then performed on these two models. Validation against a state-of-the-art vision-based motion capture system (the Microsoft Kinect) and ground truth (with markers) was performed to demonstrate the robustness and efficiency of our system. The mean error of the human skeleton model estimate relative to ground truth, for our method versus the Kinect, is very promising: the joint angles of the thighs (6.29° vs. 9.68°), legs (7.68° vs. 11.47°), and feet (6.14° vs. 13.63°), and the stride length (6.14 cm vs. 13.63 cm) are better and more stable than those of the Kinect, while the system maintains accuracy close to the Kinect for the arms (7.29° vs. 6.12°), forearms (8.33° vs. 8.04°), and torso (8.69° vs. 6.47°). Based on the skeleton model obtained by each method, we conducted a symmetry study on different joints (elbow, knee, and ankle), using each method on three different subjects, to see which method most effectively distinguishes the symmetry/asymmetry characteristics of gait. In our test, our system measured a maximum knee angle of 8.97° and 13.86° for normal and asymmetric walks respectively, while the Kinect gave 10.58° and 11.94°. Compared with the ground truth, 7.64° and 14.34°, our system showed greater accuracy and discriminating power between the two cases.
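As a side note on the kind of computation such skeleton-based systems perform, the sketch below estimates thigh, leg, and knee angles from 2D joint positions in image coordinates. The joint names, pixel values, and the convention of measuring from the downward vertical are illustrative assumptions, not the thesis's actual pipeline.

```python
import numpy as np

def segment_angle(proximal, distal):
    """Angle of the segment from proximal to distal joint, in degrees,
    measured from the downward vertical (image y-axis grows downward)."""
    dx, dy = np.subtract(distal, proximal)
    return np.degrees(np.arctan2(dx, dy))

# Hypothetical 2D joint positions (pixels) from one side view
hip, knee, ankle = (320, 240), (330, 330), (325, 420)
thigh = segment_angle(hip, knee)
leg = segment_angle(knee, ankle)
print(f"thigh {thigh:.1f} deg, leg {leg:.1f} deg, "
      f"knee flexion {thigh - leg:.1f} deg")
```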

Relevance: 30.00%

Abstract:

After decades of development, laser ablation has become an important technique for a large number of applications such as thin-film deposition, nanoparticle synthesis, micromachining, and chemical analysis. Experimental as well as theoretical studies have been conducted to understand the fundamental physical mechanisms at play during ablation and to determine the effect of wavelength, pulse duration, ambient gas, and target material. This thesis describes and examines the relative importance of the physical mechanisms that influence the characteristics of laser-induced aluminum plasmas. The general framework of this research is an in-depth study of the interaction between the dynamics of the plasma plume and the gaseous atmosphere in which it expands. This was achieved by temporally and spatially resolved imaging of the plasma plume in terms of spectral intensity, electron density, and excitation temperature in different atmospheres of inert gases such as Ar and He and reactive gases such as N2, at pressures ranging from 10^-7 Torr (vacuum) up to 760 Torr (atmospheric pressure). Our results show that the plasma emission intensity generally depends on the nature of the gas and is strongly affected by its pressure. Moreover, for a given time delay relative to the laser pulse, the electron density and the temperature increase with gas pressure, which can be attributed to inertial confinement of the plasma. In addition, the electron density is observed to be maximal near the target surface where the laser is focused and to decrease (axially and radially) away from this position. Despite the significant axial variation of temperature along the plasma, its radial variation is found to be negligible. The electron density and temperature were found to be highest in argon and lowest in helium, with intermediate values in nitrogen. This is mainly due to the physical and chemical properties of the gas, such as the mass of its species, their excitation and ionization energies, thermal conductivity, and chemical reactivity. The expansion of the plasma plume was studied by spatio-temporally resolved imaging. The results show that the nature of the gas does not affect the plume dynamics for pressures below 20 Torr and time delays under 200 ns. However, for pressures above 20 Torr, the effect of the gas becomes important, and the shortest plume is obtained when the mass of the gas species is high and its thermal conductivity relatively low. These results are confirmed by time-of-flight measurements of the Al+ ion emitting at 281.6 nm. Furthermore, the propagation velocity of the aluminum ions is well defined just after ablation and near the target surface. However, at longer time delays, the ions thermalize as they cross the plume, through collisions with plasma and gas species.

Relevance: 30.00%

Abstract:

The effect of dietary sodium restriction on perceived intensity of and preference for the taste of salt was evaluated in 76 adults, 25-49 years, with diastolic blood pressure between 79 and 90 mmHg. Participants were volunteers from clinical Hypertension Prevention Trials (HPT) at the University of California, Davis and the University of Minnesota, Minneapolis. Participants followed one of four HPT diets: 1600 mg Na+/day (NA, n=15), 1600 mg Na+ plus 3200 mg K+/day (NK, n=15), 1600 mg Na+/day plus energy restriction to achieve weight loss (NW, n=13), and weight loss only (WT, n=13). All participants attended regularly scheduled nutrition intervention meetings designed to help them achieve the HPT dietary goals. A fifth, no-intervention group consisted of 20 no-diet-change controls (CN). Sodium, potassium, and energy intakes were monitored by analysis of single 24-hour food records and corresponding overnight urine specimens, obtained at baseline and after 12 and 24 weeks of intervention. Hedonic responses to sodium chloride in a prepared cream of green bean soup were assessed by two methods: 1) scaling of like/dislike for an NaCl concentration series on 10-cm graphic line scales and 2) ad libitum mixing of unsalted and salted soups to the maximum level of liking. Salt content of the mixes was analyzed by sodium ion-selective electrode. The concentration series was also rated for perceived saltiness intensity on similar graphic line scales. Tests were conducted at baseline and after approximately 1, 3, 6, 8, 10, 13 and 24 weeks of intervention. Reduction in sodium intake and excretion in NA, NK, and NW participants was accompanied by a shift in preference toward less saltiness in soup. The pattern of hedonic responses changed over time: scores for high NaCl concentrations decreased progressively while scores for low concentrations increased. Hedonic maxima shifted from a concentration of 0.55% at the onset to 0.1-0.2% added NaCl at week 24. During the same time period, the preferred concentration of ad libitum mixes declined 50%. These shifts occurred independently of changes in saltiness intensity ratings and potassium or energy intakes, and were consistent across the two participating study sites. Like/dislike and ad libitum responses were similar after 13 and 24 weeks of diet, as were measures of sodium intake and excretion. These findings suggest that after three months of sodium restriction, preference for salt had readjusted to a lower level, reflective of lower sodium intake. Mechanisms underlying the change in preference are unclear, but may include sensory, context, physiological, as well as behavioral effects. In contrast, few changes were noted within the WT and CN groups. The pattern of hedonic responses varied little in controls, while the WT group showed increased liking for mid-range NaCl concentrations. Small but significant fluctuations in ad libitum mix concentration occurred in both of these groups, but the differences appeared to be random rather than systematic. The results of this study indicate that preference for the taste of salt declines progressively toward a new baseline following reductions in sodium intake. These alterations may enhance maintenance of low-sodium diets for the treatment and prevention of hypertension. Further investigation is needed to establish the degree to which long-term compliance is contingent upon variation in salt taste preference.
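To make the notion of a hedonic maximum concrete, the sketch below fits a quadratic in log concentration to a ratings series and reads off the peak concentration; the ratings are invented for illustration and are not the HPT data.

```python
import numpy as np

# Invented like/dislike ratings (10-cm line scale) for an NaCl
# concentration series in soup, early and late in an intervention
conc = np.array([0.05, 0.1, 0.2, 0.4, 0.55, 0.8, 1.1])  # % added NaCl
week0 = np.array([2.1, 3.0, 4.2, 6.0, 6.8, 6.1, 4.9])
week24 = np.array([4.8, 6.5, 6.9, 5.2, 4.1, 3.0, 2.2])

def hedonic_maximum(conc, rating):
    """Concentration at the vertex of a quadratic fit in log concentration."""
    a, b, _ = np.polyfit(np.log(conc), rating, 2)
    return np.exp(-b / (2 * a))

print(f"hedonic maximum, week 0:  {hedonic_maximum(conc, week0):.2f}% NaCl")
print(f"hedonic maximum, week 24: {hedonic_maximum(conc, week24):.2f}% NaCl")
```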

Relevance: 30.00%

Abstract:

We sought empirical relationships between the abundance of submerged macrophytes and residential development of the watershed, lake properties, and the presence of wetlands in 34 lakes of the Laurentides and Lanaudière regions, selected across a gradient of residential development. Submerged macrophytes were sampled by echosounding within the littoral zone. Mean macrophyte abundance was then estimated within four optically defined growth zones (maximum depth = 75%, 100%, 125%, and 150% of Secchi depth) as well as within the entire littoral zone. Human occupation was considered at three spatial scales: 1- within a 100-metre radius around the lake, 2- within the fraction of the watershed draining directly to the lake, and 3- within the entire watershed. We also tested, lake by lake, the effect of local slope on macrophyte abundance. We observed positive, significant correlations between submerged macrophyte abundance and human occupation of the direct drainage area (r > 0.51). However, there is no relationship between submerged macrophyte abundance and human occupation of either the 100-metre band surrounding the lake or the entire watershed. Multiple regression analyses suggest that submerged macrophyte abundance is weakly correlated with lake area (+) and with the presence of wetlands in the entire watershed (-). Locally, macrophyte abundance is related to slope and depth, which explain 21% of the variance. The maximum and optimal colonization depths of submerged macrophytes are positively correlated with residence time and Secchi depth, and negatively with human occupation and the extent of wetlands.

Relevance: 30.00%

Abstract:

Study Design. Reliability study. Objectives. To assess the between-acquisition reliability of new multilevel trunk cross-section measurements, in order to define what constitutes a real change when comparing two trunk surface acquisitions of the same patient, before and after surgery or throughout clinical monitoring. Summary of Background Data. Several cross-sectional surface measurements have been proposed in the literature for noninvasive assessment of trunk deformity in patients with adolescent idiopathic scoliosis (AIS). However, only the maximum values along the trunk are evaluated and used for monitoring progression and assessing treatment outcome. Methods. Back surface rotation (BSR), trunk rotation (TR), and coronal and sagittal trunk deviation are computed on 300 cross sections of the trunk. Each set of 300 measures is represented as a single functional data object, using a set of basis functions. To evaluate between-acquisition variability at all trunk levels, a test-retest reliability study was conducted on 35 patients with AIS. A functional correlation analysis was also carried out to evaluate any redundancy between the measurements. Results. Each set of 300 measures was successfully described using only 10 basis functions. The test-retest reliability of the functional measurements is good to very good over the whole trunk, except above shoulder level. The typical errors of measurement are between 1.2° and 2.2° for the rotational measures and between 2 and 6 mm for the deviation measures. There is a very strong correlation between BSR and TR all along the trunk, a moderate correlation between coronal trunk deviation and both BSR and TR, and no correlation between sagittal trunk deviation and any other measurement. Conclusion. This novel representation of trunk surface measurements allows a global assessment of trunk surface deformity. Multilevel trunk measurements provide a broader perspective of the trunk deformity, allow reliable multilevel monitoring during clinical follow-up of patients with AIS, and allow a reliable assessment of the esthetic outcome after surgery.
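As a rough illustration of the functional-data idea (not the authors' implementation, which may use a different basis such as B-splines), the sketch below compresses a 300-point trunk profile into 10 Fourier basis coefficients by least squares.

```python
import numpy as np

def fourier_basis(n_points, n_basis):
    """Basis matrix (n_points x n_basis): constant, then sin/cos pairs."""
    t = np.linspace(0, 1, n_points)
    cols = [np.ones(n_points)]
    k = 1
    while len(cols) < n_basis:
        cols.append(np.sin(2 * np.pi * k * t))
        if len(cols) < n_basis:
            cols.append(np.cos(2 * np.pi * k * t))
        k += 1
    return np.column_stack(cols)

# Synthetic back-surface-rotation profile: 300 values along the trunk
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 300)
bsr = 8 * np.sin(2 * np.pi * t) + rng.normal(0, 0.5, 300)

B = fourier_basis(300, 10)                      # 10 basis functions
coef, *_ = np.linalg.lstsq(B, bsr, rcond=None)  # least-squares coefficients
smooth = B @ coef                               # functional representation
print("RMS residual:", np.sqrt(np.mean((bsr - smooth) ** 2)))
```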

Relevance: 30.00%

Abstract:

This thesis is devoted to the study of some stochastic models in inventories. An inventory system is a facility at which items of materials are stocked. In order to promote smooth and efficient running of business, and to provide adequate service to customers, an inventory of materials is essential for any enterprise. When uncertainty is present, inventories are used as a protection against the risk of stock-out. It is advantageous to procure an item before it is needed, at a lower marginal cost, and bulk purchasing takes advantage of price discounts. All of these contribute to the formation of inventory. Maintaining inventories is a major expenditure for any organization. For each inventory, the fundamental questions are how much new stock should be ordered and when the orders should be placed. The present study considers several models for single- and two-commodity stochastic inventory problems. The thesis discusses two models. In the first model, we examine the case in which the times elapsed between two consecutive demand points are independent and identically distributed with common distribution function F(.) and finite mean, and in which the demand magnitude depends only on the time elapsed since the previous demand epoch; the time between disasters has an exponential distribution. In Model II, the inter-arrival times of disasters have a general distribution F(.), and the quantity destroyed depends on the time elapsed between disasters, while demands form a compound Poisson process. This model deals with a linearly correlated bulk-demand two-commodity inventory problem, where each arrival demands a random number of items of each commodity C1 and C2, the maximum quantities demanded being a (< S1) and b (< S2).
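As a flavor of such models, the following simulates a single-commodity (s, S) inventory under compound Poisson demand; the policy, rates, and uniform batch sizes are illustrative assumptions rather than the thesis's exact formulation.

```python
import random

def simulate_inventory(S, s, lam, horizon, seed=1):
    """(s, S) policy: demands arrive as a Poisson process of rate lam,
    each demanding a uniform batch size; reorder up to S at level s."""
    random.seed(seed)
    t, stock, stockouts, orders = 0.0, S, 0, 0
    while t < horizon:
        t += random.expovariate(lam)   # next demand epoch
        size = random.randint(1, 4)    # batch demand size (illustrative)
        if size > stock:
            stockouts += 1
        stock = max(stock - size, 0)
        if stock <= s:                 # replenish instantaneously
            stock, orders = S, orders + 1
    return stockouts, orders

print(simulate_inventory(S=20, s=5, lam=2.0, horizon=100.0))
```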

Relevance: 30.00%

Abstract:

The hazards associated with major accident hazard (MAH) industries are fire, explosion, and toxic gas releases. Of these, toxic gas release is the worst, as it has the potential to cause extensive fatalities. Qualitative and quantitative hazard analyses are essential for the identification and quantification of the hazards associated with chemical industries. This research work presents the results of a consequence analysis carried out to assess the damage potential of hazardous material storages in an industrial area of central Kerala, India. A survey carried out in the major accident hazard (MAH) units in the industrial belt revealed that the major hazardous chemicals stored by the various industrial units are ammonia, chlorine, benzene, naphtha, cyclohexane, cyclohexanone, and LPG. The damage potential of the above chemicals is assessed using consequence modelling. Modelling of pool fires for naphtha, cyclohexane, cyclohexanone, benzene, and ammonia is carried out using the TNO model. Vapor cloud explosion (VCE) modelling of LPG, cyclohexane, and benzene is carried out using the TNT equivalent model. Boiling liquid expanding vapor explosion (BLEVE) modelling of LPG is also carried out. Dispersion modelling of toxic chemicals like chlorine, ammonia, and benzene is carried out using the ALOHA air quality model. Threat zones for the different hazardous storages are estimated based on the consequence modelling. The distance covered by the threat zone was found to be maximum for chlorine release from a chlor-alkali industry located in the area. The results of consequence modelling are useful for the estimation of individual risk and societal risk in the above industrial area. Vulnerability assessment is carried out using probit functions for toxic, thermal, and pressure loads. Individual and societal risks are also estimated at different locations. Mapping of threat zones due to different incident outcome cases from different MAH industries is done with the help of ArcGIS. Fault Tree Analysis (FTA) is an established technique for hazard evaluation. This technique has the advantage of being both qualitative and quantitative, if the probabilities and frequencies of the basic events are known. However, it is often difficult to estimate precisely the failure probability of components, due to insufficient data or the vague characteristics of the basic events. It has been reported that the availability of failure probability data pertaining to local conditions is surprisingly limited in India. This thesis outlines the generation of failure probability values for the basic events that lead to the release of chlorine from the storage and filling facility of a major chlor-alkali industry located in the area, using expert elicitation and proven fuzzy logic techniques. Sensitivity analysis has been done to evaluate the percentage contribution of each basic event that could lead to chlorine release. Two-dimensional fuzzy fault tree analysis (TDFFTA) has been proposed for balancing the hesitation factor involved in expert elicitation.
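The vulnerability step can be illustrated with the standard toxic-load probit, Y = a + b ln(C^n t), converted to a probability of fatality via the standard normal CDF; the constants below are illustrative values for chlorine, and published sets vary between sources.

```python
import math

def probit_fatality(C_ppm, t_min, a, b, n):
    """Toxic-load probit Y = a + b*ln(C^n * t); fatality probability
    is the standard normal CDF evaluated at (Y - 5)."""
    Y = a + b * math.log((C_ppm ** n) * t_min)
    return 0.5 * (1 + math.erf((Y - 5) / math.sqrt(2)))

# Illustrative probit constants for chlorine (check against a handbook)
p = probit_fatality(C_ppm=60, t_min=30, a=-8.29, b=0.92, n=2.0)
print(f"fatality probability: {p:.2%}")
```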

Relevance: 30.00%

Abstract:

The medical field requires fast, simple, and noninvasive diagnostic techniques. Several methods are available and possible because of the growth of technology that provides the necessary means of collecting and processing signals. The present thesis details work done in the field of voice signals. New methods of analysis have been developed to understand the complexity of voice signals, such as nonlinear dynamics, aiming at the exploration of the dynamic nature of voice signals. The purpose of this thesis is to characterize the complexity of pathological voice relative to healthy signals and to differentiate stuttering signals from healthy signals. The efficiency of various acoustic as well as nonlinear time series methods is analysed. Three groups of samples are used: one from healthy individuals, one from subjects with vocal pathologies, and one from stuttering subjects. Individual vowels and continuous speech data for the utterance of the Malayalam sentence "iruvarum changatimaranu" ("Both are good friends" in English) were recorded using a microphone. The recorded audio was converted to digital signals and subjected to analysis. Acoustic perturbation measures such as fundamental frequency (F0), jitter, shimmer, and zero crossing rate (ZCR) were computed, and nonlinear measures such as the maximum Lyapunov exponent (λmax), correlation dimension (D2), Kolmogorov entropy (K2), and a new measure of entropy, viz., permutation entropy (PE), were evaluated for all three groups of subjects. Permutation entropy is a nonlinear complexity measure that can efficiently distinguish the regular and complex nature of any signal and extract information about the change in dynamics of the process by indicating sudden changes in its value. The results show that nonlinear dynamical methods are a suitable technique for voice signal analysis, due to the chaotic component of the human voice. Permutation entropy is well suited due to its sensitivity to uncertainties, since the pathologies are characterized by an increase in signal complexity and unpredictability. Pathological groups have higher entropy values compared to the normal group, while stuttering signals have lower entropy values compared to normal signals. PE is effective in characterising the level of improvement after two weeks of speech therapy in the case of stuttering subjects. PE is also effective in characterizing the dynamical difference between healthy and pathological subjects. This suggests that PE can improve and complement the voice analysis methods currently available to clinicians. The work establishes the application of the simple, inexpensive, and fast PE algorithm for diagnosis in vocal disorders and stuttering subjects.
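Permutation entropy itself is compact enough to show; below is a minimal sketch of the Bandt-Pompe procedure (counting ordinal patterns over a sliding window), with the embedding order and delay left as free parameters.

```python
import numpy as np
from math import factorial
from itertools import permutations

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Bandt-Pompe permutation entropy of a 1-D signal, in bits."""
    x = np.asarray(x)
    n = len(x) - (order - 1) * delay
    counts = {p: 0 for p in permutations(range(order))}
    for i in range(n):
        window = x[i : i + order * delay : delay]
        counts[tuple(np.argsort(window))] += 1  # ordinal pattern of window
    p = np.array([c for c in counts.values() if c > 0]) / n
    H = -np.sum(p * np.log2(p))
    return H / np.log2(factorial(order)) if normalize else H

rng = np.random.default_rng(0)
print(permutation_entropy(np.sin(np.linspace(0, 20, 1000))))  # regular: low
print(permutation_entropy(rng.normal(size=1000)))             # noise: near 1
```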

Relevance: 30.00%

Abstract:

To ensure the quality of machined products at minimum machining cost and maximum machining effectiveness, it is very important to select optimum parameters when metal cutting machine tools are employed. Traditionally, the experience of the operator plays a major role in the selection of optimum metal cutting conditions; however, attaining optimum values each time is difficult even for a skilled operator. The non-linear nature of the machining process has compelled engineers to search for more effective methods of optimization. The design objective preceding most engineering design activities is simply to minimize the cost of production or to maximize production efficiency. The main aim of the research work reported here is to build robust optimization algorithms by exploiting ideas that nature has to offer from its backyard, and to use them to solve real-world optimization problems in manufacturing processes. In this thesis, after conducting an exhaustive literature review, several optimization techniques used in various manufacturing processes were identified. The selection of optimal cutting parameters, like depth of cut, feed, and speed, is a very important issue for every machining process. Experiments were designed using the Taguchi technique, and dry turning of SS420 was performed on a Kirloskar Turnmaster 35 lathe. S/N and ANOVA analyses were performed to find the optimum level and the percentage contribution of each parameter; the optimum machining parameters were obtained from the experiments by S/N analysis. Optimization algorithms begin with one or more design solutions supplied by the user and then iteratively check new design solutions in the relevant search space in order to reach the true optimum solution. A mathematical model was developed using response surface analysis for surface roughness, and the model was validated using published results from the literature. Optimization methodologies such as simulated annealing (SA), particle swarm optimization (PSO), a conventional genetic algorithm (CGA), and an improved genetic algorithm (IGA) were applied to optimize machining parameters for dry turning of SS420 material. All the above algorithms were tested for efficiency, robustness, and accuracy, to observe how often they outperform conventional optimization methods applied to difficult real-world problems. The SA, PSO, CGA, and IGA codes were developed using MATLAB. For each evolutionary algorithmic method, optimum cutting conditions are provided to achieve better surface finish. The computational results using SA clearly demonstrate that the proposed solution procedure is quite capable of solving such complicated problems effectively and efficiently. Particle swarm optimization is a relatively recent heuristic search method whose mechanics are inspired by the swarming or collaborative behavior of biological populations; from the results, it was observed that PSO provides better results and is also more computationally efficient. Based on the results obtained using CGA and IGA for the optimization of the machining process, the proposed IGA provides better results than the conventional GA. The improved genetic algorithm, incorporating a stochastic crossover technique and an artificial initial population scheme, was developed to provide a faster search mechanism.
Finally, a comparison among these algorithms was made for the specific example of dry turning of SS420 material, arriving at the optimum machining parameters of feed, cutting speed, depth of cut, and tool nose radius, with minimum surface roughness as the criterion. To summarize, the research work fills in conspicuous gaps between research prototypes and industry requirements by simulating evolutionary procedures seen in nature that optimize its own systems.
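For the Taguchi step, the S/N ratio for a "smaller is better" response such as surface roughness is -10 log10 of the mean squared response. A minimal sketch, in Python rather than the MATLAB used in the thesis, with invented roughness replicates:

```python
import numpy as np

def sn_smaller_the_better(y):
    """Taguchi S/N ratio (dB) for a 'smaller is better' response."""
    y = np.asarray(y, dtype=float)
    return -10 * np.log10(np.mean(y ** 2))

# Invented surface roughness Ra (um) replicates for two parameter settings
print(sn_smaller_the_better([1.8, 2.0, 1.9]))  # about -5.6 dB
print(sn_smaller_the_better([0.9, 1.1, 1.0]))  # about 0 dB (higher is better)
```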

Relevance: 30.00%

Abstract:

In this thesis, applications of recurrence quantification analysis (RQA) to metal cutting operations on a lathe are presented, with the specific objective of detecting tool wear and chatter. This study is based on the discovery that the process dynamics in a lathe are low-dimensional chaotic, which implies that the machine dynamics are controllable using principles of chaos theory. This understanding stands to revolutionize the feature extraction methodologies used in condition monitoring systems, since conventional linear methods or models are incapable of capturing the critical and strange behaviors associated with the metal cutting process. As sensor-based approaches provide an automated and cost-effective way to monitor and control, an efficient feature extraction methodology based on nonlinear time series analysis is in demand. The task is more complex when the information has to be deduced solely from sensor signals, since traditional methods do not address how to treat the noise and non-stationarity present in real-world processes. In an effort to overcome these two issues to the maximum possible extent, this thesis adopts the recurrence quantification analysis methodology, since this feature extraction technique is robust against noise and non-stationarity in the signals. The work consists of two different sets of experiments on a lathe: set 1 and set 2. The set 1 experiments study the influence of tool wear on the RQA variables, whereas set 2 identifies the RQA variables sensitive to machine tool chatter, followed by validation in actual cutting. To obtain the bounds of the spectrum of significant RQA variable values in set 1, a fresh tool and a worn tool are used for cutting. The first part of the set 2 experiments uses a stepped shaft in order to create chatter at a known location, and the second part uses a conical section with a uniform taper along the axis, causing chatter to onset at some distance from the smaller end as the depth of cut is gradually increased while the spindle speed and feed rate are kept constant. The study concludes by revealing the unambiguous dependence of certain RQA variables, namely percent determinism, percent recurrence, and entropy, on tool wear and chatter. The performance of the results establishes this methodology as viable for the detection of tool wear and chatter in metal cutting operations on a lathe. The key reason is that the dynamics of the system under study are nonlinear, and recurrence quantification analysis can characterize them adequately. This work establishes that the principles and practice of machining can be considerably benefited and advanced by using nonlinear dynamics and chaos theory.
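The two RQA variables named above can be sketched directly from a thresholded recurrence plot; for brevity this omits the time-delay embedding a real analysis would include, and the threshold eps is an illustrative choice.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence plot of a 1-D series (no embedding, for brevity)."""
    x = np.asarray(x)
    return (np.abs(x[:, None] - x[None, :]) < eps).astype(int)

def rqa_measures(R, lmin=2):
    """Percent recurrence and percent determinism of a recurrence plot."""
    N = R.shape[0]
    rec = R.sum() - N                       # recurrent points, LOI excluded
    det = 0
    for k in range(1, N):                   # scan off-main diagonals
        runs = "".join(map(str, np.diag(R, k))).split("0")
        det += sum(len(r) for r in runs if len(r) >= lmin)
    pct_rec = 100 * rec / (N * N - N)
    pct_det = 100 * 2 * det / max(rec, 1)   # upper triangle, doubled
    return pct_rec, pct_det

t = np.linspace(0, 8 * np.pi, 400)
print(rqa_measures(recurrence_matrix(np.sin(t), eps=0.1)))  # high %DET
```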

Relevance: 30.00%

Abstract:

Computational biology is the research area that contributes to the analysis of biological data through the development of algorithms that address significant research problems. Data from molecular biology include DNA, RNA, protein, and gene expression data. Gene expression data provide the expression levels of genes under different conditions. Gene expression is the process of transcribing the DNA sequence of a gene into mRNA sequences, which in turn are later translated into proteins; the number of copies of mRNA produced is called the expression level of a gene. Gene expression data are organized in the form of a matrix: rows represent genes and columns represent experimental conditions, which can be different tissue types or time points, and the entries are real values. Through the analysis of gene expression data it is possible to determine behavioral patterns of genes, such as the similarity of their behavior, the nature of their interactions, and their respective contributions to the same pathways. Similar expression patterns are exhibited by genes participating in the same biological process. These patterns have immense relevance and application in bioinformatics and clinical research: they are used in the medical domain to aid more accurate diagnosis, prognosis, treatment planning, drug discovery, and protein network analysis. Data mining techniques are essential to identify such patterns from gene expression data. Clustering is an important data mining technique for the analysis of gene expression data; to overcome the problems associated with clustering, biclustering was introduced. Biclustering refers to the simultaneous clustering of both rows and columns of a data matrix. Clustering is a global model, whereas biclustering is a local model. Discovering local expression patterns is essential for identifying many genetic pathways that are not apparent otherwise. It is therefore necessary to move beyond the clustering paradigm toward approaches capable of discovering local patterns in gene expression data. A bicluster is a submatrix of the gene expression data matrix; its rows and columns need not be contiguous, and biclusters are not disjoint. Computing biclusters is costly, because one would have to consider all combinations of columns and rows in order to find all the biclusters: the search space for the biclustering problem is 2^(m+n), where m and n are the numbers of genes and conditions respectively, and usually m+n is more than 3000. The biclustering problem is NP-hard. Biclustering is a powerful analytical tool for the biologist. The research reported in this thesis addresses the problem of biclustering. Ten algorithms are developed for the identification of coherent biclusters from gene expression data. All of these algorithms make use of a measure called the mean squared residue to search for biclusters; the objective is to identify biclusters of maximum size with mean squared residue lower than a given threshold.
All of these algorithms begin the search from tightly coregulated submatrices called seeds, which are generated by the K-Means clustering algorithm. The algorithms developed can be classified as constraint based, greedy, and metaheuristic. Constraint-based algorithms use one or more constraints, namely the MSR threshold and the MSR difference threshold. The greedy approach makes a locally optimal choice at each stage with the objective of finding the global optimum. In the metaheuristic approaches, particle swarm optimization (PSO) and variants of the Greedy Randomized Adaptive Search Procedure (GRASP) are used for the identification of biclusters. These algorithms are applied to the Yeast and Lymphoma datasets. Biologically relevant and statistically significant biclusters are identified by all these algorithms and validated against the Gene Ontology database. All these algorithms are compared with other biclustering algorithms, and the algorithms developed in this work overcome some of the problems associated with existing ones. With the help of some of the algorithms developed in this work, biclusters with very high row variance, higher than that of any other algorithm using mean squared residue, are identified from both the Yeast and Lymphoma datasets. Such biclusters, which reflect significant changes in expression level, are highly relevant biologically.
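The mean squared residue (Cheng and Church's coherence score) at the heart of these algorithms is compact enough to show directly; the expression matrix below is a toy example.

```python
import numpy as np

def mean_squared_residue(A, rows, cols):
    """Cheng-Church MSR of the bicluster A[rows, cols]:
    residue(i,j) = a_ij - row_mean_i - col_mean_j + overall_mean."""
    B = A[np.ix_(rows, cols)]
    residue = (B - B.mean(axis=1, keepdims=True)
                 - B.mean(axis=0, keepdims=True) + B.mean())
    return float((residue ** 2).mean())

# Toy expression matrix: rows are genes, columns are conditions
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 3.0, 4.0],
              [5.0, 9.0, 2.0]])
print(mean_squared_residue(A, [0, 1], [0, 1, 2]))  # coherent: 0.0
print(mean_squared_residue(A, [0, 2], [0, 1, 2]))  # incoherent: large
```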

Relevance: 30.00%

Abstract:

Natural systems are inherently nonlinear, and recurrent behaviours are typical of them. Recurrence is a fundamental property of nonlinear dynamical systems that can be exploited to characterize system behaviour effectively. This thesis presents cross recurrence based analysis of sensor signals from nonlinear dynamical systems. The mutual dependency among relatively independent components of a system is referred to as coupling. The analysis is first done for a mechanically coupled system specifically designed for the experiment. The cross recurrence method is then extended to an actual machining process on a lathe to characterize chatter during turning, with the result verified by the permutation entropy method. Conventional linear methods or models are incapable of capturing the critical and strange behaviours associated with the dynamical process; hence, any effective feature extraction methodology should invariably gather information through nonlinear time series analysis. Sensor signals from dynamical systems normally contain noise and non-stationarity. In an effort to overcome these two issues to the maximum possible extent, this work adopts the cross recurrence quantification analysis (CRQA) methodology, since it is robust against noise and non-stationarity in the signals. The study reveals that CRQA is capable of characterizing even weak coupling among system signals. It also divulges the unambiguous dependence of certain CRQA variables, such as percent determinism, percent recurrence, and entropy, on chatter. A surrogate data test shows that the results obtained by CRQA are true properties of the temporal evolution of the dynamics and contain a degree of deterministic structure. The results are verified using permutation entropy (PE) to detect the onset of chatter from the time series. The present study ascertains that this CRP-based methodology is capable of recognizing the transition from regular cutting to chatter cutting irrespective of the machining parameters or workpiece material. The results establish this methodology as feasible for the detection of chatter in metal cutting operations on a lathe.
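A cross recurrence plot generalizes the recurrence plot to a pair of signals; the minimal sketch below (again without embedding) builds one for two synthetic, weakly coupled signals and reports the percent cross recurrence.

```python
import numpy as np

def cross_recurrence(x, y, eps):
    """Cross recurrence plot: CR[i, j] = 1 when |x_i - y_j| < eps."""
    x, y = np.asarray(x), np.asarray(y)
    return (np.abs(x[:, None] - y[None, :]) < eps).astype(int)

t = np.linspace(0, 8 * np.pi, 500)
x = np.sin(t)                      # first sensor signal (synthetic)
y = np.sin(t + 0.8) + 0.05 * np.random.default_rng(1).normal(size=500)
CR = cross_recurrence(x, y, eps=0.1)
print("percent cross recurrence:", 100 * CR.mean())  # diagonal structure
```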

Relevance: 30.00%

Abstract:

Antimicrobial peptides (AMPs) play a major role in innate immunity. Penaeidins are a family of AMPs that appear to be expressed in all penaeid shrimps. Penaeidins are composed of an N-terminal proline-rich domain, followed by a C-terminal domain containing six cysteine residues organized in two doublets. This study reports the first penaeidin AMP sequence, Fi-penaeidin (GenBank accession number HM243617), from the Indian white shrimp, Fenneropenaeus indicus. The full-length cDNA consists of 186 base pairs encoding 61 amino acids, with an ORF of 42 amino acids, and contains a putative signal peptide of 19 amino acids. Comparison of F. indicus penaeidin (Fi-penaeidin) with other known penaeidins showed that it shares maximum similarity with the penaeidins of Farfantepenaeus paulensis and Farfantepenaeus subtilis (96% each). Fi-penaeidin has a predicted molecular weight (MW) of 4.478 kDa and a theoretical isoelectric point (pI) of 5.3.
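Such MW and pI predictions are reproducible with Biopython's ProtParam module; the sequence below is a placeholder, since the abstract does not give the actual Fi-penaeidin sequence.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Placeholder sequence (NOT the real Fi-penaeidin mature peptide)
mature_peptide = "QGYNGGIRPGRCSRLCGGCFRPGMCCSRQG"
pa = ProteinAnalysis(mature_peptide)
print(f"predicted MW: {pa.molecular_weight() / 1000:.3f} kDa")
print(f"theoretical pI: {pa.isoelectric_point():.2f}")
```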

Relevance: 30.00%

Abstract:

This paper presents the design and analysis of a 400-step hybrid stepper motor (step angle 360°/400 = 0.9°) for spacecraft applications. The design of a hybrid stepper motor achieving a specific performance requires the choice of appropriate tooth geometry. In this paper, a detailed account of the results of two-dimensional finite-element (FE) analysis conducted with different tooth shapes, such as square and trapezoidal, is presented. The use of equal tooth pitch on the stator and rotor was found to produce a corresponding increase in detent torque and a distorted static torque profile. To meet the requirements of maximum torque density, low detent torque, better positional accuracy, and a smooth static torque profile, different pitch slotting with equal tooth width has to be provided. Of the various FE models subjected to analysis, the trapezoidal tooth configuration with unequal tooth pitch on the stator and rotor was found to be the best and was selected for fabrication. The designed motor was fabricated, and the experimental results are compared with the FE results.