983 results for Modified Berlekamp-Massey algorithm
Abstract:
The aim of this master's thesis is to develop a suitable analytical method for the quantitative determination of the degree of substitution (DS) of modified kraft pulp fibre. Modification here refers to the attachment of a molecule to the pulp fibre surface, either covalently or by adsorption. The literature part of the thesis briefly reviews the different modification routes and the compounds that can be used to give end products made from pulp the desired properties. It also surveys the direct and indirect analytical methods best suited to this purpose; the most promising of these methods were tested in the experimental part of the work. The experimental part focused on developing a quantitative method for determining the DS of modified pulp with a Fourier transform infrared attenuated total reflectance (FTIR-ATR) spectrometer. The literature review found no documented study in which the FTIR-ATR method had been used for the quantitative analysis of modified pulp fibre. Other analytical methods, such as elemental analysis, thermogravimetric analysis (TGA) and light microscopy, were used to provide additional information on the modification. The modified pulp fibres used to develop the quantitative FTIR-ATR method were cellulose acetate and cellulose betainate. Based on the results obtained, quantitative determination of the DS of modified sulphite and kraft pulp fibres is possible with the FTIR-ATR method. The small number of calibration points made it difficult to build an accurate analytical method, and the main problems of the developed method were the heterogeneity of the solid samples and the identification of possible impurities. With further development, however, the method could be used for the routine analysis of modified pulp fibres in the pulp industry.
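The abstract does not give the calibration details, so the following is only a minimal sketch of the kind of quantitative FTIR-ATR workflow it describes: fitting a linear calibration of DS against an absorbance band ratio and inverting it for an unknown sample. The band positions, the sample values and the assumption of a linear response are illustrative, not taken from the thesis.

# Illustrative FTIR-ATR calibration sketch (assumed linear response; example
# band ratio A(~1740 cm-1)/A(~1030 cm-1) and DS values, not thesis data).
import numpy as np

ds_known   = np.array([0.0, 0.3, 0.6, 1.0, 1.5])       # reference DS values
band_ratio = np.array([0.02, 0.18, 0.35, 0.61, 0.90])  # measured absorbance ratios

# Least-squares calibration line: ratio = a * DS + b
a, b = np.polyfit(ds_known, band_ratio, deg=1)

def predict_ds(ratio):
    """Invert the calibration line to estimate DS for a new sample."""
    return (ratio - b) / a

print(f"calibration: ratio = {a:.3f} * DS + {b:.3f}")
print(f"estimated DS at ratio 0.45: {predict_ds(0.45):.2f}")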
Abstract:
Thermal cutting methods are commonly used in the manufacture of metal parts. Thermal cutting processes separate material by using heat, with or without a stream of cutting oxygen; common processes are oxygen, plasma and laser cutting. Which cutting method is used depends on the application and the material. Numerically controlled thermal cutting is a cost-effective way of prefabricating components, and one design aim is to minimize the number of work steps in order to increase competitiveness. As a result, the holes and openings in plate parts manufactured today are made using thermal cutting methods. This is a problem from the fatigue life perspective, because such a local detail in the as-welded state raises the stress in a local area of the plate. Where the static utilization of the net section is fully used, the calculated linear local stresses and stress ranges are often more than twice the material yield strength, so the shakedown criteria are exceeded. Fatigue life assessment of flame-cut details is commonly based on the nominal stress method. For welded details, design standards and instructions provide more accurate and flexible methods, e.g. the hot-spot method, but these methods are not universally applied to flame-cut edges. Laboratory fatigue tests of flame-cut edges indicated that fatigue life estimates based on the standard nominal stress method can be quite conservative when a high notch factor is present. This is undesirable, since it limits the potential for minimizing structure size and total costs. A new calculation method is introduced to improve the accuracy of theoretical fatigue life prediction for a flame-cut edge with a high stress concentration factor. Simple equations were derived from the laboratory fatigue test results published in this work. The proposed method is called the modified FAT method (FATmod). The method takes into account the residual stress state, surface quality, material strength class and true stress ratio at the critical location.
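The FATmod equations themselves are not given in the abstract; for context, the sketch below shows the conventional nominal stress (FAT class) life estimate that the proposed method refines. The FAT value, the slope m = 3 and the two-million-cycle reference point follow common IIW-style S-N conventions and are assumptions here, not the thesis' own equations.

# Conventional nominal stress (S-N / FAT class) fatigue life estimate -- the
# baseline that FATmod is said to improve on. FAT is the characteristic stress
# range at 2e6 cycles; the S-N slope m = 3 is a common assumption.
def fatigue_life_cycles(stress_range_mpa, fat_class_mpa=125.0, slope_m=3.0):
    """Estimated cycles to failure for a constant-amplitude stress range."""
    return 2.0e6 * (fat_class_mpa / stress_range_mpa) ** slope_m

# Example: a 200 MPa nominal stress range on an assumed FAT 125 detail
print(f"{fatigue_life_cycles(200.0):.3g} cycles")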
Abstract:
The difficulty of identification, the lack of segregation systems and the absence of suitable standards for the coexistence of non-transgenic and transgenic soybean contribute to the contaminations that occur in the production system. The objective of this study was to evaluate the efficiency of two methods for detecting mixtures of genetically modified (GM) seeds in samples of non-GM soybean, so that seed lots can be assessed against the standards established by seed legislation. Two soybean sample sizes (200 and 400 seeds), cv. BRSMG 810C (non-GM) and BRSMG 850GRR (GM), were assessed with four contamination levels (addition of GM seeds to obtain 0.0%, 0.5%, 1.0% and 1.5% contamination) and two detection methods: lateral flow immunoassay (ILF) and bioassay (pre-imbibition in 0.6% herbicide solution; 25 ºC; 16 h). The bioassay is efficient in detecting the presence of GM seeds in seed samples of non-GM soybean, even at contamination levels below 1.0%, provided that the seeds have high physiological quality. The ILF was positive, detecting the presence of the target protein in contaminated samples and indicating the effectiveness of the test. There was a significant correlation between the two detection methods (r = 0.82; p < 0.0001). Sample size did not influence the efficiency of the two methods in detecting the presence of GM seeds.
Abstract:
This work presents a synopsis of efficient strategies used in power management for achieving the most economical power and energy consumption in multicore systems, FPGA and NoC platforms. A practical approach was taken in an effort to validate the significance of the Adaptive Power Management Algorithm (APMA) proposed for the system developed for this thesis project. The system comprises an arithmetic and logic unit, up and down counters, an adder, a state machine and a multiplexer. The purpose of the project was, firstly, to develop the system used for this power management work; secondly, to perform an area and power synopsis of the system on several scalable technology platforms, UMC 90 nm nanotechnology at 1.2 V, UMC 90 nm nanotechnology at 1.32 V and UMC 0.18 μm nanotechnology at 1.80 V, in order to examine the differences in area and power consumption of the system across the platforms; and thirdly, to explore various strategies for reducing the system's power consumption and to propose an adaptive power management algorithm that can reduce it. The strategies introduced in this work comprise Dynamic Voltage and Frequency Scaling (DVFS) and task parallelism. After development, the system was run on an FPGA board, essentially a NoC platform, and on the technology platforms listed above. The system synthesis was successfully accomplished; the simulated result analysis shows that the system meets all functional requirements, and the power consumption and area utilization were recorded and analyzed in chapter 7 of this work. This work also extensively reviews strategies for managing power consumption drawn from quantitative research by many researchers and companies; it is a mixture of study analysis and experimental lab work, and it condenses and presents the basic concepts of power management strategy from quality technical papers.
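The APMA itself is not detailed in the abstract; as a minimal illustration of why DVFS saves power, the sketch below applies the standard dynamic power relation P ≈ α·C·V²·f. The activity factor, capacitance and operating points are made-up example values, not figures from the thesis.

# Minimal DVFS illustration (example values, not from the thesis):
# dynamic power of CMOS logic scales roughly as P = alpha * C * V^2 * f.
def dynamic_power_w(alpha, c_farads, v_volts, f_hz):
    """Approximate dynamic (switching) power of a CMOS block."""
    return alpha * c_farads * v_volts ** 2 * f_hz

nominal = dynamic_power_w(alpha=0.2, c_farads=1e-9, v_volts=1.32, f_hz=200e6)
scaled  = dynamic_power_w(alpha=0.2, c_farads=1e-9, v_volts=1.20, f_hz=100e6)

print(f"nominal: {nominal*1e3:.2f} mW, scaled: {scaled*1e3:.2f} mW "
      f"({100*(1 - scaled/nominal):.0f}% lower)")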
Abstract:
New density functionals representing the exchange and correlation energies (per electron), based on the electron gas model, are employed to calculate interaction potentials of noble gas systems X2 and XY, where X (and Y) are He, Ne, Ar and Kr, and of hydrogen atom-rare gas systems H-X. The exchange energy density functional is that recommended by Handler, and the correlation energy density functional is a rational function involving two parameters which were optimized to reproduce the correlation energy of the He atom. Application of the two-parameter function to the other rare gas atoms shows that it is "universal", i.e., accurate for the systems considered. The potentials obtained in this work compare well with recent experimental results and are a significant improvement over those from competing statistical models.
Abstract:
This thesis introduces the Salmon Algorithm, a search metaheuristic which can be used for a variety of combinatorial optimization problems. The algorithm is loosely based on the path-finding behaviour of salmon swimming upstream to spawn. There are a number of tunable parameters in the algorithm, so experiments were conducted to find the optimum parameter settings for different search spaces. The algorithm was tested on one instance of the Traveling Salesman Problem and found to have superior performance to an Ant Colony Algorithm and a Genetic Algorithm. It was then tested on three coding theory problems: optimal edit codes, optimal Hamming distance codes, and optimal covering codes. The algorithm improved on the best known values for five of the six edit code test cases, matched the best known results on four of the seven Hamming codes, and matched all three of the covering codes. The results suggest the Salmon Algorithm is competitive with established guided random search techniques, and may be superior in some search spaces.
Abstract:
Development of guanidine catalysts is explored through direct iminium chloride and amine coupling, alongside a 2-chloro-1,3-dimethyl-1H-imidazol-3-ium chloride (DMC) induced thiourea cyclization. The synthesized achiral catalyst N-(5H-dibenzo[d,f][1,3]diazepin-6(7H)-ylidene)-3,5-bis(trifluoromethyl)aniline proved unsuccessful for O-acyl migrations, but successfully catalyzed the vinylogous aldol reaction between dichlorofuranone and benzaldehyde. Incorporating chirality into the guanidine catalyst using a (R)-phenylalaninol auxiliary, generating (R)-2-((5H-dibenzo[d,f][1,3]diazepin-6(7H)-ylidene)amino)-3-phenylpropan-1-ol, demonstrated enantioselectivity for a variety of adducts. The highest enantiomeric excess (ee) was obtained between dibromofuranone and p-chlorobenzaldehyde, affording the syn conformation in 96% ee and the anti in 54% ee, with an overall yield of 30%. Attempts to increase asymmetric induction focused on incorporating axial chirality into the (R)-phenylalaninol catalyst using binaphthyl diamine. Incorporation of (S)-binaphthyl exhibited destructive selectivity, whereas incorporation of (R)-binaphthyl had no effect on enantioselectivity. Current studies are directed towards identifying the catalytic properties responsible for asymmetric induction, with further studies aimed at increasing enantioselectivity by increasing backbone steric bulk.
Abstract:
Understanding the machinery of gene regulation in order to control gene expression has been one of the main focuses of bioinformaticians for years. We use a multi-objective genetic algorithm to evolve a specialized version of side effect machines for degenerate motif discovery. We compare several suggested objectives for the motifs they find, test different multi-objective scoring schemes and probabilistic models for the background sequences, and report our results on a synthetic dataset and some biological benchmarking suites. We conclude with a comparison of our algorithm with some widely used motif discovery algorithms in the literature and suggest future directions for research in this area.
Abstract:
DNA assembly is among the most fundamental and difficult problems in bioinformatics. Near-optimal assembly solutions are available for bacterial and other small genomes; however, assembling large and complex genomes, especially the human genome, with Next-Generation Sequencing (NGS) technologies has proven very difficult because of the highly repetitive and complex nature of the human genome, short read lengths, uneven data coverage and tools that are not specifically built for human genomes. Moreover, many algorithms are not even scalable to human genome datasets containing hundreds of millions of short reads. The DNA assembly problem is usually divided into several subproblems, including DNA data error detection and correction, contig creation, scaffolding and contig orientation, each of which can be seen as a distinct research area. This thesis focuses specifically on creating contigs from the short reads and combining them with the outputs from other tools in order to obtain better results. Three different assemblers, SOAPdenovo [Li09], Velvet [ZB08] and Meraculous [CHS+11], were selected for comparative purposes in this thesis. The results obtained show that this thesis' work produces results comparable to the other assemblers, and that combining our contigs with the outputs from other tools produces the best results, outperforming all other investigated assemblers.
Abstract:
Ordered gene problems are a very common class of optimization problems. Because of their popularity, countless algorithms have been developed in an attempt to find high-quality solutions to them, and many other types of problems are reduced to ordered-gene-style problems so that these popular heuristics and metaheuristics can be applied. Multiple ordered gene problems are studied, namely the travelling salesman problem, the bin packing problem and the graph colouring problem. In addition, two bioinformatics problems not traditionally seen as ordered gene problems are studied: DNA error correction and DNA fragment assembly. These problems are studied with multiple variations and combinations of heuristics and metaheuristics, with two distinct types of representations. The majority of the algorithms are built around the Recentering-Restarting Genetic Algorithm. The algorithm variations were successful on all the problems studied, particularly the two bioinformatics problems. For DNA error correction, multiple cases were found in which 100% of the codes were corrected. The algorithm variations were also able to beat all other state-of-the-art DNA fragment assemblers on 13 out of 16 benchmark problem instances.
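The Recentering-Restarting Genetic Algorithm itself is not specified in the abstract; the sketch below is only a generic genetic algorithm over the ordered-gene (permutation) representation it builds on, using tournament selection, order crossover and swap mutation on a toy travelling salesman instance. The instance, the parameter values and the operators are illustrative assumptions, not the thesis' algorithm.

# Generic ordered-gene (permutation) GA sketch -- not the Recentering-Restarting
# GA from the thesis, just the representation style it is built around.
import random

CITIES = [(0, 0), (1, 5), (4, 3), (6, 1), (3, 7), (8, 4)]  # toy TSP instance

def tour_length(tour):
    """Total closed-tour Euclidean length (lower is better)."""
    return sum(((CITIES[a][0] - CITIES[b][0]) ** 2 +
                (CITIES[a][1] - CITIES[b][1]) ** 2) ** 0.5
               for a, b in zip(tour, tour[1:] + tour[:1]))

def order_crossover(p1, p2):
    """Copy a random slice from p1, fill the remaining genes in p2's order."""
    i, j = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[i:j] = p1[i:j]
    fill = [g for g in p2 if g not in child]
    for k in range(len(child)):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def evolve(pop_size=30, generations=200, mutation_rate=0.2):
    pop = [random.sample(range(len(CITIES)), len(CITIES)) for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            p1 = min(random.sample(pop, 3), key=tour_length)  # tournament selection
            p2 = min(random.sample(pop, 3), key=tour_length)
            child = order_crossover(p1, p2)
            if random.random() < mutation_rate:               # swap mutation
                a, b = random.sample(range(len(child)), 2)
                child[a], child[b] = child[b], child[a]
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=tour_length)

best = evolve()
print(best, round(tour_length(best), 2))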
Abstract:
Understanding the relationship between genetic diseases and the genes associated with them is an important problem for human health. The vast amount of data created by the large number of high-throughput experiments performed in recent years has resulted in an unprecedented growth in computational methods to tackle the disease gene association problem. Nowadays, it is clear that a genetic disease is not a consequence of a defect in a single gene. Instead, the disease phenotype is a reflection of various genetic components interacting in a complex network. In fact, genetic diseases, like any other phenotype, occur as a result of various genes working in sync with each other in one or several biological modules. Using a genetic algorithm, our method tries to evolve communities containing the set of potential disease genes likely to be involved in a given genetic disease. Starting from a set of known disease genes, we first obtain a protein-protein interaction (PPI) network containing all the known disease genes. All the other genes inside the procured PPI network are then considered candidate disease genes, as they lie in the vicinity of the known disease genes in the network. Our method attempts to find communities of potential disease genes working strongly with one another and with the set of known disease genes. As a proof of concept, we tested our approach on 16 breast cancer genes and 15 Parkinson's disease genes. We obtained comparable or better results than CIPHER, ENDEAVOUR and GPEC, three of the most reliable and frequently used disease-gene ranking frameworks.
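The abstract does not give the fitness function used; the sketch below only illustrates the general idea of scoring a candidate community by how strongly it connects to known disease genes in a PPI network, which a genetic algorithm could then maximise. The toy network, the seed genes and the scoring rule are assumptions for illustration, not the thesis' objective.

# Toy illustration of community scoring in a PPI network (assumed fitness,
# not the thesis' actual objective): count edges linking candidate genes to
# each other and to the known (seed) disease genes.
import networkx as nx

ppi = nx.Graph()
ppi.add_edges_from([("BRCA1", "G1"), ("BRCA1", "G2"), ("BRCA2", "G2"),
                    ("G1", "G2"), ("G2", "G3"), ("G3", "G4")])
seeds = {"BRCA1", "BRCA2"}          # known disease genes (example seeds)

def community_score(candidates):
    """Edges inside the candidate set plus edges from candidates to seeds."""
    return ppi.subgraph(candidates | seeds).number_of_edges()

# A genetic algorithm would evolve candidate sets to maximise such a score.
print(community_score({"G1", "G2"}))   # well connected to the seeds
print(community_score({"G3", "G4"}))   # weakly connected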
Abstract:
In this thesis we analyze dictionary graphs and some other kinds of graphs using the PageRank algorithm. We calculated the correlation between the degree and the PageRank of all nodes for graphs obtained from the Merriam-Webster dictionary, a French dictionary and the WordNet hypernym and synonym dictionaries. Our conclusion was that PageRank can be a good tool for comparing the quality of dictionaries. We also studied some artificial social and random graphs. We found that when we omitted some random nodes from each of the graphs, there were no significant changes in the ranking of the nodes according to their PageRank. We also discovered that some of the social graphs selected for our study were less resistant to changes in PageRank.
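The degree-versus-PageRank correlation described above can be sketched as follows; a small random graph stands in for a dictionary graph, since the Merriam-Webster, French and WordNet graphs used in the thesis are not reproduced here, and the graph size and damping factor are example values.

# Degree vs. PageRank correlation on a small random stand-in graph.
import networkx as nx
from scipy.stats import pearsonr

g = nx.gnp_random_graph(500, 0.02, seed=1, directed=True)

pr = nx.pagerank(g, alpha=0.85)      # PageRank score per node
deg = dict(g.in_degree())            # in-degree per node

nodes = sorted(g.nodes())
r, p = pearsonr([deg[n] for n in nodes], [pr[n] for n in nodes])
print(f"degree/PageRank correlation: r = {r:.3f} (p = {p:.2g})")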
Abstract:
We consider the problem of assigning students to schools on the basis of priorities, where students are allowed to have equal priority at a school. We characterize the efficient rules which weakly/strongly respect students' priorities. When priority orderings are not strict, it is not possible to simply break ties in a fixed manner; all possibilities of resolving the indifferences need to be considered. Neither the deferred acceptance algorithm nor the top trading cycle algorithm successfully solves the problem of efficiently assigning the students to schools, whereas a modified version of the deferred acceptance algorithm might. In this version, tie-breaking depends on students' preferences.
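The modified tie-breaking rule is not specified in the abstract; for context, the sketch below implements the standard student-proposing deferred acceptance algorithm with strict priorities and one seat per school. The preference and priority data are made-up examples, and the thesis' preference-dependent tie-breaking modification is not shown.

# Standard student-proposing deferred acceptance (strict priorities, one seat
# per school). Example data only; the thesis' modification is not shown here.
def deferred_acceptance(prefs, priorities):
    """prefs: student -> ordered schools; priorities: school -> ordered students."""
    next_choice = {s: 0 for s in prefs}      # next school each student proposes to
    held = {c: None for c in priorities}     # tentative assignment per school
    free = list(prefs)                       # students without a tentative seat
    while free:
        student = free.pop()
        if next_choice[student] >= len(prefs[student]):
            continue                         # student has exhausted their list
        school = prefs[student][next_choice[student]]
        next_choice[student] += 1
        rank, current = priorities[school], held[school]
        if current is None:
            held[school] = student
        elif rank.index(student) < rank.index(current):
            held[school] = student           # displace lower-priority student
            free.append(current)
        else:
            free.append(student)             # rejected, will propose again
    return {s: c for c, s in held.items() if s is not None}

prefs = {"s1": ["A", "B"], "s2": ["A", "B"], "s3": ["B", "A"]}
priorities = {"A": ["s2", "s1", "s3"], "B": ["s1", "s3", "s2"]}
print(deferred_acceptance(prefs, priorities))   # {'s2': 'A', 's1': 'B'}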
Abstract:
Zero-inflated models, both discrete and continuous, have a wide range of applications and their properties are well known. Although there is work on zero-deflated and zero-modified discrete models, the usual formulation of continuous zero-inflated models -- a mixture of a continuous density and a Dirac mass -- prevents them from being generalized to cover zero deflation. An alternative formulation of continuous zero-inflated models, which can easily be generalized to the zero-deflated case, is presented here. Estimation is first addressed under the classical paradigm, and several methods for obtaining maximum likelihood estimators are proposed. The point estimation problem is also considered from a Bayesian point of view. Classical and Bayesian hypothesis tests for determining whether data are zero-inflated or zero-deflated are presented. The estimation and testing methods are evaluated by means of simulation studies and applied to aggregated precipitation data. The various methods agree that the data are zero-deflated, demonstrating the relevance of the proposed model. We then consider the clustering of samples of zero-deflated data. Since such data are strongly non-normal, the usual methods for determining the number of clusters can be expected to perform poorly. We argue that Bayesian clustering, based on the marginal distribution of the observations, takes the particularities of the model into account and should therefore perform better. Several clustering methods are compared by means of a simulation study, and the proposed method is applied to aggregated precipitation data from 28 measuring stations in British Columbia.
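The alternative formulation itself is not given in the abstract; for reference, the usual zero-inflated continuous formulation it replaces can be written, in generic notation, as the mixture below. Because the mixing weight is a probability, it can only add mass at zero relative to the continuous component, which is why this formulation cannot represent zero deflation.

% Usual zero-inflated continuous formulation (generic notation, not the thesis' own):
% a Dirac mass at zero mixed with a continuous density g on the positive reals.
f(y) = \pi \, \delta_0(y) + (1 - \pi) \, g(y; \theta), \qquad 0 \le \pi \le 1,
% where \delta_0 denotes the point mass at zero and g(\,\cdot\,; \theta) the
% continuous component; since g places no mass at zero, \pi \ge 0 can only
% inflate the probability of zeros, never deflate it.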