998 results for Diagnostic Algorithms
Abstract:
Introduction: Occupational asthma (AP) is diagnosed in Quebec using the specific inhalation challenge test (TPS). The TPS consists of exposing the patient to a suspected causal agent in order to provoke an asthmatic reaction. A negative TPS is possible when a causal agent has been omitted from the patient's occupational history. The assessment of occupational exposures through occupational hygiene expertise is considered an accurate method when measurement data are not available; however, the contribution of this method to the diagnosis of AP has never been examined in a clinical context. Objectives: To determine the contribution of occupational exposure assessment by occupational hygiene expertise to the investigation of AP, and to compare the occupational exposures detected by a clinician and by a hygienist in 1) subjects with AP confirmed by a positive TPS and 2) subjects with a negative TPS. Methods: An analysis of potential exposures by the clinician preceded the TPS. An assessment of occupational exposures was carried out by a hygienist who was blinded to the patient's diagnosis. Results: 120 subjects (67 with a positive TPS, 53 with a negative TPS) were enrolled in the study. The hygienist identified the causal agent in the vast majority of positive TPS cases. In 33 negative TPS cases, the hygienist detected sensitizing agents that had not been identified by the physician. Conclusion: The assessment of occupational exposures through occupational hygiene expertise can complement the clinical evaluation for the detection of sensitizing agents associated with AP. Including this approach in the clinical evaluation of AP would reduce the occurrence of misdiagnosis.
Abstract:
Deep learning algorithms form a new set of powerful machine learning methods. The idea is to combine layers of latent factors into hierarchies. This often comes at a higher computational cost and also increases the number of model parameters, so applying these methods to larger-scale problems requires reducing their cost as well as improving their regularization and optimization. This thesis addresses the question from these three perspectives. We first study the problem of reducing the cost of certain deep algorithms. We propose two methods for training restricted Boltzmann machines and denoising autoencoders on sparse, high-dimensional distributions, which is important for applying these algorithms to natural language processing. Both methods (Dauphin et al., 2011; Dauphin and Bengio, 2013) use importance sampling to sample the training objective of these models. We observe that this significantly reduces training time, with speed-ups of two orders of magnitude on several benchmarks. Second, we introduce a powerful regularizer for deep methods. Experimental results show that a good regularizer is crucial for obtaining good performance with large networks (Hinton et al., 2012). In Rifai et al. (2011), we propose a new regularizer that combines unsupervised learning with tangent propagation (Simard et al., 1992). This method exploits geometric principles and, at the time of publication, achieved state-of-the-art results. Finally, we consider the problem of optimizing high-dimensional non-convex surfaces such as those of neural networks. Traditionally, the abundance of local minima was considered the main difficulty in these problems. In Dauphin et al. (2014a), drawing on results from statistical physics, random matrix theory, neural network theory, and experiments, we argue that a deeper difficulty stems from the proliferation of saddle points. In that paper we also propose a new method for non-convex optimization.
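The importance-sampling idea mentioned above (estimating the reconstruction objective on sparse, high-dimensional inputs by evaluating only a subset of the output units) can be illustrated with a minimal sketch. The Python/NumPy snippet below is not the authors' implementation; it only shows, under the assumption of a binary cross-entropy reconstruction loss, how the many zero-entry terms can be uniformly subsampled and reweighted so that the estimator of the full loss remains unbiased.

```python
import numpy as np

rng = np.random.default_rng(0)

def sampled_reconstruction_loss(x, x_hat, n_samples=64, eps=1e-9):
    """Unbiased estimate of the binary cross-entropy reconstruction loss for a
    sparse binary input x: the few non-zero terms are computed exactly, while
    the many zero terms are uniformly subsampled and reweighted."""
    nonzero = np.flatnonzero(x)               # active inputs: exact terms
    zero = np.flatnonzero(x == 0)             # inactive inputs: sampled terms
    loss_nonzero = -np.sum(np.log(x_hat[nonzero] + eps))
    picked = rng.choice(zero, size=min(n_samples, zero.size), replace=False)
    loss_zero = -zero.size * np.mean(np.log(1.0 - x_hat[picked] + eps))
    return loss_nonzero + loss_zero

# toy check: a 10,000-dimensional input with roughly 50 active units
x = (rng.random(10_000) < 0.005).astype(float)
x_hat = rng.uniform(0.01, 0.2, size=x.size)           # stand-in reconstruction
full = -np.sum(x * np.log(x_hat + 1e-9) + (1 - x) * np.log(1 - x_hat + 1e-9))
print(full, sampled_reconstruction_loss(x, x_hat))    # estimate tracks the full loss
```

The same reweighting can be applied to the gradient of the loss, which is the setting in which the abstract reports its training-time speed-ups on sparse NLP-style data.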
Abstract:
This doctoral project addresses gaps in the scientific literature on Paraphilic Coercive Disorder (TPC), with an emphasis on the validity of the diagnostic criteria proposed for inclusion in the DSM-5 and on behavioral markers. To this end, archival data on individuals who had sexually assaulted adult women were studied. The thesis consists of three empirical articles. The first article presents key results from the analyses developed in the subsequent articles. The second (N = 47) assesses the observed frequencies of TPC, the validity and impact of using a minimum number of victims as a diagnostic criterion, and the indicators predicting sexual recidivism. The third article (N = 52) compares the diagnostic groups on a series of offence behaviors, such as sexual acts and violent behaviors, with the aim of identifying behavioral markers associated with the propensity for rape that could assist in the diagnostic process. In the same vein, we created typologies of rapists based on the sexual acts committed, on the one hand, and on the violent behaviors, on the other; the characteristics of the resulting typologies and their association with TPC were then examined. Overall, our results do not support the use of the number of victims as a criterion. Our data suggest that, globally, rapists with TPC use a more intrusive level of sexual acts and a lower level of violence than rapists without this diagnosis, and that exhibitionism and fondling could serve as behavioral markers for TPC. Moreover, rapists with TPC are characterized more by indecent requests, exhibitionism, fondling, masturbation, attempted penetration, and digital penetration than by vaginal penetration and sodomy. They also make less use of weapons, appear not to hit or strike the victim, and are characterized by manipulation rather than by death threats, excessive force, or the use of weapons. In sum, our data underline the need to rely on a combination of assessment methods in order to improve the diagnostic and discriminant validity of TPC.
Abstract:
Cystic fibrosis (FK) is a genetic disease that leads to progressive destruction of the lungs and, eventually, to death. Its main secondary complication is cystic fibrosis-related diabetes (DAFK). Accelerated clinical deterioration (loss of weight and of pulmonary function) is observed before the diagnosis. The main objective of my doctoral project is to determine, through the oral glucose tolerance test (HGPO), whether there is a link between hyperglycemia and/or hypoinsulinemia and the clinical deterioration observed before the diagnosis of DAFK. We therefore evaluate the value of the intermediate time points of the HGPO in order to simplify the diagnosis of dysglycemia and to establish new markers of patients at risk of clinical deterioration. The HGPO is the standard method used in FK for the diagnosis of DAFK. We have shown that the glucose values obtained at the 90-min time point of the HGPO would be sufficient to predict the glucose tolerance of adult patients with FK, otherwise established from the 2-h values of the HGPO. We propose 90-min HGPO glucose values above 9.3 mmol/L and above 11.5 mmol/L to detect glucose intolerance and DAFK, respectively. An important cause of DAFK is a defect in insulin secretion. Since women with FK have a higher risk of developing DAFK than men, we explored whether their insulin secretion was impaired. Contrary to our hypothesis, we observed that women with FK had a higher total insulin secretion than men with FK, at levels comparable to healthy women. The recently proposed glucose tolerance group named indeterminate (INDET: 60-min HGPO > 11.0 but 2-h HGPO < 7.8 mmol/L) is at high risk of developing DAFK, but the clinical characteristics of this group in adult patients with FK had not been established. We observed that the INDET group has reduced pulmonary function, similar to the de novo DAFK group, and that none of the glucose or insulin parameters explain this observation. In a pediatric population of patients with FK, an association has been reported between elevated glucose at 60 min of the HGPO and decreased pulmonary function. In our group of adult patients with FK, there is a negative association between 60-min HGPO glucose and pulmonary function, and a positive correlation between 60-min HGPO insulin and body mass index (BMI). Moreover, patients with a 60-min HGPO glucose > 11.0 mmol/L have decreased pulmonary function and low insulin sensitivity, whereas those with a 60-min HGPO insulin < 43.4 μU/mL have a decreased BMI and decreased pulmonary function. In conclusion, we are the first group to show that 1) the HGPO test can be shortened by 30 min without compromising the categorization of glucose tolerance, 2) women with FK show preserved insulin secretion, 3) the INDET group presents early pulmonary-function abnormalities comparable to the de novo DAFK group, and 4) first-hour HGPO glucose and insulin are associated with the two key elements of clinical deterioration. It is crucial to elucidate the key pathophysiological mechanisms in order to better predict the onset of the clinical deterioration that precedes DAFK.
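The proposed 90-min cut-offs and the INDET definition quoted above translate directly into a small decision rule. The sketch below simply encodes those thresholds as stated in the abstract; the function names and the fallback wording are illustrative, not part of the thesis.

```python
def classify_from_90min(glucose_90min_mmol_l: float) -> str:
    """Glucose-tolerance category from the 90-min OGTT (HGPO) value alone,
    using the cut-offs proposed in the abstract (>9.3 impaired, >11.5 CFRD)."""
    if glucose_90min_mmol_l > 11.5:
        return "CFRD (DAFK)"
    if glucose_90min_mmol_l > 9.3:
        return "impaired glucose tolerance"
    return "not abnormal at 90 min"

def is_indeterminate(glucose_60min_mmol_l: float, glucose_120min_mmol_l: float) -> bool:
    """INDET group as defined in the abstract: 60-min > 11.0 but 2-h < 7.8 mmol/L."""
    return glucose_60min_mmol_l > 11.0 and glucose_120min_mmol_l < 7.8

print(classify_from_90min(12.1))       # -> CFRD (DAFK)
print(is_indeterminate(11.4, 7.2))     # -> True
```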
Abstract:
This study focuses on the onset of the southwest monsoon over Kerala. The India Meteorological Department (IMD) has been using a semi-objective method to define monsoon onset. The main objectives of the study are to understand the monsoon onset processes, to simulate monsoon onset in a GCM using as input the atmospheric conditions and sea surface temperature 10 days prior to the onset, to develop a method for medium-range prediction of the date of onset of the southwest monsoon over Kerala, and to examine the possibility of objectively defining the date of Monsoon Onset over Kerala (MOK). The thesis gives a broad description of regional monsoon systems and monsoon onsets over Asia and Australia. The Asian monsoon includes two separate subsystems, the Indian monsoon and the East Asian monsoon. It is seen from this study that the durations of the different phases of the onset process depend on the period of the ISO. Building on this analysis of the onset process, modeling studies can be carried out for a better understanding of ocean-atmosphere interaction, especially that associated with the warm pool in the Bay of Bengal and the Arabian Sea.
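The abstract does not spell out the semi-objective or objective onset definitions, so the sketch below is only a hypothetical illustration of the general form an objective MOK rule could take: the first day on which a sufficient fraction of rain-gauge stations stays wet for a few consecutive days. The thresholds, the persistence window, and the station-array layout are assumptions, not the criteria developed in the thesis.

```python
import numpy as np

def objective_onset_day(daily_rain_mm, wet_mm=2.5, station_fraction=0.6, persistence_days=2):
    """Hypothetical objective onset rule: first index (day) on which at least
    `station_fraction` of stations report >= `wet_mm` of rain for
    `persistence_days` consecutive days.  `daily_rain_mm` has shape (days, stations)."""
    daily_rain_mm = np.asarray(daily_rain_mm, dtype=float)
    wet_days = (daily_rain_mm >= wet_mm).mean(axis=1) >= station_fraction
    for day in range(len(wet_days) - persistence_days + 1):
        if wet_days[day:day + persistence_days].all():
            return day
    return None   # no onset found in the record

# e.g. onset = objective_onset_day(rainfall_matrix_for_kerala_stations)
```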
Abstract:
The main objective of the work undertaken here was to develop an appropriate microbial technology to protect the larvae of M. rosenbergii from vibriosis in the hatchery. This technology consists of a rapid detection system for vibrios and effective antagonistic probiotics for their management. The work was undertaken with the realization that, to stabilize the production process of commercial hatcheries, an appropriate, comprehensive and fool-proof technology is required, primarily for the rapid detection of Vibrio and subsequently for its management. Nine species of Vibrio were found to be associated with larvae of M. rosenbergii in the hatchery. A haemolytic assay of the Vibrio and Aeromonas isolates on prawn blood agar showed that all isolates of V. alginolyticus and Aeromonas sp. from moribund, necrotized larvae were haemolytic, whereas the isolates of V. cholerae, V. splendidus II, V. proteolyticus and V. fluvialis from larvae obtained from apparently healthy larval rearing systems were non-haemolytic. Hydrolytic enzymes such as lipase, chitinase and gelatinase were widespread among the Vibrio and Aeromonas isolates. The dominance of V. alginolyticus among the isolates from necrotic larvae, and the failure to isolate it from rearing water, strongly suggest that it infects larvae, multiplies in the larval body and causes mortality in the hatchery, indicating that the V. alginolyticus isolate is a pathogen of M. rosenbergii larvae. To sum up, through this work, nine species of Vibrio and the genus Aeromonas associated with M. rosenbergii larval rearing systems could be isolated and segregated based on haemolytic activity, and antibodies (PAbs) for use in diagnosis or epidemiological studies could be produced based on a virulent culture of V. alginolyticus; these could possibly replace the conventional biochemical tests for identification. As prophylaxis against vibriosis, four isolates of Micrococcus spp. and one isolate of Pseudomonas sp. were obtained, which could possibly be used as antagonistic probiotics in the larval rearing system of M. rosenbergii.
Abstract:
The TRMM Microwave Imager (TMI) is reported to be a useful sensor for measuring atmospheric and oceanic parameters even in cloudy conditions. Vertically integrated specific humidity, i.e. Total Precipitable Water (TPW), retrieved from the water vapour absorption channel (22 GHz), along with the 10 m wind speed and rain rate derived from TMI, is used to investigate moisture variation over the North Indian Ocean. Intraseasonal Oscillations (ISO) of TPW during the summer monsoon seasons of 1998, 1999 and 2000 over the North Indian Ocean are explored using wavelet analysis. The dominant waves in TPW during the monsoon periods and the differences in ISO over the Arabian Sea and the Bay of Bengal are investigated. The northward propagation of the TPW anomaly and its coherence with coastal rainfall are also studied. For the diagnostic study of heavy rainfall spells over the west coast, the intrusion of TPW over the North Arabian Sea is seen to be a useful tool.
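The wavelet analysis of TPW intraseasonal oscillations can be sketched with a direct NumPy implementation of a Morlet continuous wavelet transform. This is only an illustrative implementation, not the analysis code used in the study; the scale range suggested in the final comment is an assumption chosen to cover a typical 10-60 day ISO band in daily data.

```python
import numpy as np

def morlet_power(signal, scales, w0=6.0):
    """Continuous-wavelet-transform power of a 1-D series (e.g. daily TPW)
    using a Morlet mother wavelet, implemented directly with numpy."""
    signal = np.asarray(signal, dtype=float)
    signal = signal - signal.mean()
    power = np.empty((len(scales), signal.size))
    for k, s in enumerate(scales):
        half = min(4 * s, (signal.size - 1) // 2)     # keep wavelet shorter than series
        t = np.arange(-half, half + 1)
        wavelet = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2)
        wavelet = wavelet / np.sqrt(s)                # keep scales comparable
        coef = np.convolve(signal, np.conj(wavelet)[::-1], mode="same")
        power[k] = np.abs(coef) ** 2
    return power

# e.g. scales spanning roughly the 10-60 day intraseasonal band for daily data:
# power = morlet_power(tpw_daily, scales=np.arange(10, 61))
```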
Abstract:
Extensive use of the Internet, coupled with the remarkable growth in e-commerce and m-commerce, has created a huge demand for information security. The Secure Socket Layer (SSL) protocol is the most widely used security protocol on the Internet that meets this demand: it provides protection against eavesdropping, tampering and forgery. The cryptographic algorithms RC4 and HMAC have been used to provide security services such as confidentiality and authentication in SSL, but recent attacks against RC4 and HMAC have undermined confidence in these algorithms. Hence two novel cryptographic algorithms, MAJE4 and MACJER-320, have been proposed as substitutes for them. The focus of this work is to demonstrate the performance of these new algorithms and to propose them as dependable alternatives for providing security services in SSL. The performance evaluation has been carried out through a practical implementation.
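The authentication service mentioned here relies on HMAC, which is standard and can be demonstrated with Python's standard library. The sketch below shows generic HMAC tagging and constant-time verification; it is not the record-level MAC construction of SSL/TLS (which also covers sequence numbers and record headers), and the key and message are placeholders.

```python
import hmac
import hashlib

key = b"shared-session-key"              # illustrative key, not a real SSL session key
record = b"application data to protect"

# sender computes an authentication tag over the message
tag = hmac.new(key, record, hashlib.sha256).digest()

# receiver recomputes the tag and compares it in constant time
expected = hmac.new(key, record, hashlib.sha256).digest()
print("authentic:", hmac.compare_digest(tag, expected))
```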
Abstract:
The Internet today has become a vital part of day-to-day life, owing to the revolutionary changes it has brought about in various fields. Dependence on the Internet as an information highway and knowledge bank is increasing exponentially, to the point that going back is beyond imagination, and critical information is routinely transferred through it. This widespread use of the Internet, coupled with the tremendous growth in e-commerce and m-commerce, has created a vital need for information security. The Internet has also become an active field for crackers and intruders, and all of this development can become null and void if fool-proof security of the data is not ensured against any chance of adulteration. It is hence a challenge for the professional community to develop systems that ensure the security of data sent through the Internet. Stream ciphers, hash functions and message authentication codes play vital roles in providing security services such as confidentiality, integrity and authentication of data sent through the Internet. Several popular and dependable techniques have been in wide use for quite a long time, but this long-term exposure makes them vulnerable to successful or near-successful attacks, so there is a need to develop new algorithms with better security. Studies were therefore conducted on the various types of algorithms used in this area, with a focus on identifying the properties that impart security. Using the insight derived from these studies, new algorithms were designed. Their performance was then studied, followed by the necessary modifications, yielding an improved system consisting of a new stream cipher algorithm MAJE4, a new hash code JERIM-320 and a new message authentication code MACJER-320. Detailed analysis and comparison with existing popular schemes were also carried out to establish the security levels. The Secure Socket Layer (SSL) / Transport Layer Security (TLS) protocol is one of the most widely used security protocols on the Internet. The cryptographic algorithms RC4 and HMAC have been used to provide security services such as confidentiality and authentication in SSL / TLS, but recent attacks on RC4 and HMAC have raised questions about the reliability of these algorithms. Hence MAJE4 and MACJER-320 have been proposed as substitutes for them. Detailed studies of the performance of these new algorithms were carried out, and they were observed to be dependable alternatives.
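The practical, implementation-based performance evaluation described here can be illustrated with a simple throughput benchmark. Since MAJE4's specification is not given in the abstract, the sketch below benchmarks a textbook RC4 keystream generator as the baseline; a MAJE4 implementation would be timed the same way by swapping in its keystream function. This is a plain-Python illustration, not the thesis's benchmark code.

```python
import time

def rc4_keystream(key: bytes, n: int) -> bytes:
    """Textbook RC4: key-scheduling (KSA) followed by n bytes of keystream (PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # KSA
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for _ in range(n):                        # PRGA
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def throughput_mb_s(keystream_fn, key=b"benchmark-key", n_bytes=1_000_000):
    """Rough MB/s figure for a keystream generator; swap in another cipher here."""
    start = time.perf_counter()
    keystream_fn(key, n_bytes)
    return n_bytes / (time.perf_counter() - start) / 1e6

print(f"RC4 keystream: {throughput_mb_s(rc4_keystream):.1f} MB/s")
```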
Abstract:
To ensure the quality of machined products at minimum machining cost and maximum machining effectiveness, it is very important to select optimum parameters when metal cutting machine tools are employed. Traditionally, the experience of the operator plays a major role in the selection of optimum metal cutting conditions; however, attaining optimum values every time is difficult even for a skilled operator. The non-linear nature of the machining process has compelled engineers to search for more effective optimization methods. The design objective preceding most engineering design activities is simply to minimize the cost of production or to maximize production efficiency. The main aim of the research work reported here is to build robust optimization algorithms by exploiting ideas that nature has to offer and using them to solve real-world optimization problems in manufacturing processes. In this thesis, after an exhaustive literature review, several optimization techniques used in various manufacturing processes are identified. The selection of optimal cutting parameters, such as depth of cut, feed and speed, is a very important issue for every machining process. Experiments were designed using the Taguchi technique, and dry turning of SS420 was performed on a Kirloskar Turnmaster-35 lathe. S/N and ANOVA analyses were performed to find the optimum level and the percentage contribution of each parameter, and the optimum machining parameters were obtained from the experiments using the S/N analysis. Optimization algorithms begin with one or more design solutions supplied by the user and then iteratively examine new design solutions within the search space in order to reach the true optimum. A mathematical model for surface roughness was developed using response surface analysis and validated against published results from the literature. Optimization methodologies such as Simulated Annealing (SA), Particle Swarm Optimization (PSO), a Conventional Genetic Algorithm (CGA) and an Improved Genetic Algorithm (IGA) were applied to optimize the machining parameters for dry turning of SS420 material. All of these algorithms were tested for efficiency, robustness and accuracy, and it was observed how they often outperform conventional optimization methods on difficult real-world problems. The SA, PSO, CGA and IGA codes were developed in MATLAB, and for each evolutionary method optimum cutting conditions are provided to achieve a better surface finish. The computational results using SA clearly demonstrate that the proposed solution procedure is quite capable of solving such complicated problems effectively and efficiently. Particle Swarm Optimization is a relatively recent heuristic search method whose mechanics are inspired by the swarming or collaborative behaviour of biological populations; the results show that PSO provides better results and is also more computationally efficient. Based on the results obtained using CGA and IGA for the optimization of the machining process, the proposed IGA provides better results than the conventional GA. The improved genetic algorithm, incorporating a stochastic crossover technique and an artificial initial-population scheme, was developed to provide a faster search mechanism. Finally, a comparison among these algorithms was made for the specific example of dry turning of SS420 material, arriving at the optimum machining parameters of feed, cutting speed, depth of cut and tool nose radius with minimum surface roughness as the criterion. To summarize, the research work fills conspicuous gaps between research prototypes and industry requirements by simulating evolutionary procedures seen in nature that optimize its own systems.
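The PSO step described above can be illustrated with a compact sketch. The thesis's MATLAB codes and fitted response-surface model are not reproduced here, so the quadratic roughness surrogate and the parameter bounds below are placeholders; the PSO loop itself follows the standard inertia-weight formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def roughness(x):
    """Hypothetical quadratic response-surface surrogate Ra(feed, speed, depth);
    the coefficients are illustrative, not the model fitted in the thesis."""
    f, v, d = x[..., 0], x[..., 1], x[..., 2]
    return 2.0 + 8.0 * f - 0.004 * v + 0.6 * d + 15.0 * f**2 + 1e-5 * v**2 + 0.2 * d**2

lo = np.array([0.05, 100.0, 0.5])     # feed (mm/rev), cutting speed (m/min), depth of cut (mm)
hi = np.array([0.30, 300.0, 2.0])

def pso(n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Standard inertia-weight PSO minimising the roughness surrogate inside the bounds."""
    x = rng.uniform(lo, hi, size=(n_particles, 3))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), roughness(x)
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)            # keep particles inside the machining limits
        f = roughness(x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, float(roughness(gbest))

best_params, best_ra = pso()
print("feed, speed, depth:", best_params, "predicted Ra:", best_ra)
```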
Abstract:
Computational biology is the research area that contributes to the analysis of biological data through the development of algorithms addressing significant research problems. Data from molecular biology include DNA, RNA, protein and gene expression data. Gene expression data provide the expression levels of genes under different conditions. Gene expression is the process of transcribing the DNA sequence of a gene into mRNA sequences, which in turn are later translated into proteins; the number of copies of mRNA produced is called the expression level of a gene. Gene expression data are organized in the form of a matrix, in which rows represent genes, columns represent experimental conditions (different tissue types or time points), and the entries are real values. Through the analysis of gene expression data it is possible to determine behavioural patterns of genes such as the similarity of their behaviour, the nature of their interactions, and their respective contributions to the same pathways. Similar expression patterns are exhibited by genes participating in the same biological process. These patterns have immense relevance and application in bioinformatics and clinical research: they are used in the medical domain to aid more accurate diagnosis, prognosis, treatment planning, drug discovery and protein network analysis. Data mining techniques are essential to identify such patterns from gene expression data. Clustering is an important data mining technique for the analysis of gene expression data; to overcome the problems associated with clustering, biclustering was introduced. Biclustering refers to the simultaneous clustering of both rows and columns of a data matrix: clustering is a global model, whereas biclustering is a local one. Discovering local expression patterns is essential for identifying many genetic pathways that are not apparent otherwise, so it is necessary to move beyond the clustering paradigm towards approaches capable of discovering local patterns in gene expression data. A bicluster is a submatrix of the gene expression data matrix whose rows and columns need not be contiguous, and biclusters are not disjoint. Computing biclusters is costly because all combinations of rows and columns must be considered to find them all: the search space for the biclustering problem is 2^(m+n), where m and n are the numbers of genes and conditions respectively, and usually m+n exceeds 3000. The biclustering problem is NP-hard, yet biclustering is a powerful analytical tool for the biologist. The research reported in this thesis addresses the biclustering problem: ten algorithms are developed for the identification of coherent biclusters from gene expression data. All of these algorithms use a measure called the mean squared residue to search for biclusters, and the objective is to identify biclusters of maximum size with a mean squared residue lower than a given threshold. All of the algorithms begin the search from tightly co-regulated submatrices called seeds, which are generated by the K-Means clustering algorithm. The algorithms developed can be classified as constraint-based, greedy and metaheuristic. The constraint-based algorithms use one or more constraints, namely the MSR threshold and the MSR difference threshold. The greedy approach makes a locally optimal choice at each stage with the objective of finding the global optimum. In the metaheuristic approaches, Particle Swarm Optimization (PSO) and variants of the Greedy Randomized Adaptive Search Procedure (GRASP) are used for the identification of biclusters. These algorithms were applied to the Yeast and Lymphoma datasets, and all of them identify biologically relevant and statistically significant biclusters, as validated against the Gene Ontology database. All of the algorithms are compared with other biclustering algorithms, and the algorithms developed in this work overcome some of the problems associated with existing ones. With some of the algorithms developed here, biclusters with very high row variance, higher than the row variance obtained by any other algorithm using the mean squared residue, are identified from both the Yeast and Lymphoma datasets; such biclusters, which capture significant changes in expression level, are highly relevant biologically.
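The mean squared residue that all ten algorithms optimise, and the row variance used to judge biological relevance, are both simple matrix computations. The sketch below is a generic NumPy rendering of these two measures (following the usual Cheng-and-Church definition of MSR), not the thesis's own code; `expression_matrix`, `seed_rows` and `seed_cols` are assumed inputs.

```python
import numpy as np

def mean_squared_residue(data, rows, cols):
    """Mean squared residue of the bicluster given by row/column index arrays:
    average squared deviation from the additive row + column + overall means."""
    sub = data[np.ix_(rows, cols)]
    residue = (sub - sub.mean(axis=1, keepdims=True)
                   - sub.mean(axis=0, keepdims=True) + sub.mean())
    return float(np.mean(residue ** 2))

def row_variance(data, rows, cols):
    """Average variance of the bicluster's rows; high values indicate
    biclusters with large, biologically interesting expression changes."""
    sub = data[np.ix_(rows, cols)]
    return float(np.mean((sub - sub.mean(axis=1, keepdims=True)) ** 2))

# a candidate bicluster is kept while its MSR stays below the chosen threshold delta:
# mean_squared_residue(expression_matrix, seed_rows, seed_cols) <= delta
```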
Abstract:
The interfacing of different subjects generates new fields of study and research that help advance human knowledge. One of the latest of such fields is neurotechnology, an effective amalgamation of neuroscience, physics, biomedical engineering and computational methods. Neurotechnology provides a platform for physicists, neurologists and engineers to interact and to break down methodology- and terminology-related barriers. Advancements in computational capability and the wider scope of applications of nonlinear dynamics and chaos in complex systems have enhanced the study of neurodynamics; however, there is still a need for an effective dialogue among physicists, neurologists and engineers. The application of computer-based technology in medicine, through signal and image processing and the creation of clinical databases to help clinicians, is widely acknowledged, and such synergy between widely separated disciplines may enhance the effectiveness of existing diagnostic methods. One recent method in this direction is the analysis of the electroencephalogram with methods from nonlinear dynamics. This thesis is an effort to understand the functional aspects of the human brain by studying the electroencephalogram. The algorithms and related methods developed in the present work can be interfaced with a digital EEG machine to unfold the information hidden in the signal, and can ultimately be used as a diagnostic tool.
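The abstract does not name the specific nonlinear measures used, so the sketch below is only an illustrative example of the kind of nonlinear-dynamics quantity commonly computed from an EEG signal: a time-delay embedding followed by the Grassberger-Procaccia correlation sum, whose scaling with the radius estimates the correlation dimension. The embedding dimension and delay are assumptions.

```python
import numpy as np

def delay_embed(x, dim=5, tau=4):
    """Time-delay embedding of a 1-D EEG series into a dim-dimensional state space."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def correlation_sum(x, radius, dim=5, tau=4):
    """Grassberger-Procaccia correlation sum: fraction of embedded point pairs
    closer than `radius`; its log-log slope versus radius estimates the
    correlation dimension of the underlying attractor."""
    emb = delay_embed(x, dim, tau)
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    iu = np.triu_indices(len(emb), k=1)
    return float(np.mean(dists[iu] < radius))

# the slope of log(correlation_sum) versus log(radius) over a range of radii
# gives a correlation-dimension estimate for an EEG epoch.
```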
Abstract:
This work gives an overview of known spatial clustering algorithms. The space of interest can be the two-dimensional abstraction of the surface of the earth or a man-made space such as the layout of a VLSI design, a volume containing a model of the human brain, or another 3D space representing the arrangement of chains of protein molecules. The data consist of geometric information and can be either discrete or continuous. The explicit location and extension of spatial objects define implicit relations of spatial neighbourhood (such as topological, distance and direction relations), which are used by spatial data mining algorithms; such algorithms are therefore required for spatial characterization and spatial trend analysis. Spatial data mining, or knowledge discovery in spatial databases, differs from regular data mining in ways analogous to the differences between non-spatial and spatial data. The attributes of a spatial object stored in a database may be affected by the attributes of the spatial neighbours of that object. In addition, spatial location, and implicit information about the location of an object, may be exactly the information that can be extracted through spatial data mining.
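As a concrete instance of the distance-based neighbourhood relations the overview discusses, the sketch below runs DBSCAN, a classic density-based spatial clustering algorithm, on synthetic 2-D points. It assumes scikit-learn is available and uses made-up point data; the eps and min_samples values are illustrative only.

```python
import numpy as np
from sklearn.cluster import DBSCAN    # assumes scikit-learn is installed

rng = np.random.default_rng(0)
# synthetic 2-D coordinates standing in for spatial objects
points = np.vstack([rng.normal(center, 0.3, size=(100, 2))
                    for center in ((0.0, 0.0), (5.0, 5.0), (0.0, 5.0))])

# eps defines the distance-based neighbourhood; dense neighbourhoods grow into clusters
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(points)
print("clusters:", len(set(labels) - {-1}), " noise points:", int((labels == -1).sum()))
```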
Abstract:
We develop several algorithms for computations in Galois extensions of p-adic fields. Our algorithms are based on existing algorithms for number fields and are exact in the sense that we do not need to consider approximations to p-adic numbers. As an application we describe an algorithmic approach to prove or disprove various conjectures for local and global epsilon constants.