871 results for Anisotropic Analytical Algorithm


Relevance: 20.00%

Abstract:

This work presents a synopsis of efficient power-management strategies for achieving the most economical power and energy consumption in multicore systems, FPGAs and NoC platforms. A practical approach was taken in order to validate the significance of the Adaptive Power Management Algorithm (APMA) proposed for the system developed for this thesis project. The system comprises an arithmetic and logic unit, up and down counters, an adder, a state machine and a multiplexer. The aims of the project were, first, to develop the system used for this power-management study; second, to perform area and power synopses of the system on several scalable technology platforms (UMC 90 nm technology at 1.2 V, UMC 90 nm technology at 1.32 V and UMC 0.18 µm technology at 1.80 V) in order to examine the differences in area and power consumption of the system across the platforms; and third, to explore strategies for reducing the system's power consumption and to propose an adaptive power management algorithm for the system. The strategies introduced in this work comprise Dynamic Voltage and Frequency Scaling (DVFS) and task parallelism. After development, the system was run on an FPGA board, essentially a NoC platform, and on the technology platforms listed above. System synthesis was accomplished successfully, the simulated results show that the system meets all functional requirements, and the power consumption and area utilization were recorded and analyzed in Chapter 7 of this work. This work also extensively reviews strategies for managing power consumption drawn from quantitative research by many researchers and companies; it is a mixture of literature analysis and experimental lab work, and it condenses and presents the basic concepts of power-management strategy from quality technical papers.
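Since the abstract names DVFS as one of the two strategies explored, the sketch below illustrates the general shape of a utilization-driven DVFS policy. It is a generic illustration, not the APMA described in the thesis; the operating points, the 80% utilization threshold and the switched-capacitance constant are assumed values.

```python
# Illustrative sketch of a utilization-driven DVFS policy (not the thesis's APMA).
# Operating points, the 80% threshold and the capacitance constant are assumed.

OPERATING_POINTS = [  # (frequency in MHz, supply voltage in V), assumed values
    (100, 0.90),
    (200, 1.00),
    (400, 1.20),
]

SWITCHED_CAPACITANCE = 1e-9  # farads, assumed effective switched capacitance


def dynamic_power(freq_mhz: float, vdd: float) -> float:
    """Approximate dynamic power P = C * V^2 * f, in watts."""
    return SWITCHED_CAPACITANCE * vdd ** 2 * freq_mhz * 1e6


def select_operating_point(utilization: float) -> tuple[float, float]:
    """Pick the slowest operating point that keeps projected utilization below 80%.

    `utilization` is assumed to be measured at the fastest operating point;
    scaling the clock down raises utilization proportionally.
    """
    fastest_freq, _ = OPERATING_POINTS[-1]
    for freq, vdd in OPERATING_POINTS:
        projected = utilization * fastest_freq / freq
        if projected <= 0.8:
            return freq, vdd
    return OPERATING_POINTS[-1]  # workload too heavy: stay at the fastest point


if __name__ == "__main__":
    for busy in (0.15, 0.45, 0.95):
        f, v = select_operating_point(busy)
        print(f"utilization {busy:.0%} -> {f} MHz @ {v} V, "
              f"~{dynamic_power(f, v) * 1e3:.1f} mW dynamic power")
```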

Relevance: 20.00%

Abstract:

Translation by Wylie, drafted by Li Shan lan; Chinese prefaces by the two translators (1859); English preface written at Shang hai by A. Wylie (July 1859). List of technical terms in English and Chinese. Engraved at the Mo hai house (1859). 18 volumes.

Relevance: 20.00%

Abstract:

The purpose of this meta-analytic investigation was to review the empirical evidence specific to the effect of physical activity context on social physique anxiety (SPA). English-language studies were located from computer and manual literature searches. A total of 146 initial studies were coded. Studies included in the meta-analysis presented at least one empirical effect for SPA between physical activity participants (i.e., athletes or exercisers) and non-physical activity participants. The final sample included thirteen studies, yielding 14 effect sizes, with a total sample size of 2846. Studies were coded for mean SPA between physical activity participants and non-physical activity participants. Moderator variables related to demographic and study characteristics were also coded. Using Hunter and Schmidt's (2004) protocol, statistical artifacts were corrected. Results indicate that, practically speaking, those who were physically active reported lower levels of SPA than the comparison group (dcorr = -.12, SDcorr = .22). Consideration of the magnitude of the effect size, the SDcorr, and the confidence interval suggests that this effect is not statistically significant. While most moderator analyses reiterated this trend, some differences were worth noting. Previous research has identified SPA to be especially salient for females compared to males; however, in the current investigation, the magnitude of the effect sizes comparing physical activity participants to the comparison group was similar (dcorr = -.24 for females and dcorr = -.23 for males). The type of physical activity was also investigated, and results showed that athletes reported lower levels of SPA than the comparison group (dcorr = -.19, SDcorr = .08), whereas exercisers reported higher levels of SPA than the comparison group (dcorr = .13, SDcorr = .22). Results demonstrate support for the dispositional nature of SPA. Consideration of practical significance suggests that those who are involved in physical activity may experience slightly lower levels of SPA than those not reporting physical activity participation. Results potentially offer support for the bi-directionality of the relationship between physical activity and SPA; however, causality may not be inferred. More information about the type of physical activity (i.e., frequency and nature of exercise behaviour, sport classification and level of athletes) may help clarify the role of physical activity contexts on SPA.

Relevance: 20.00%

Abstract:

An analytical model for bacterial accumulation in a discrete fracture has been developed. The transport and accumulation processes incorporated into the model include advection, dispersion, rate-limited adsorption, rate-limited desorption, irreversible adsorption, attachment, detachment, growth and first-order decay in both the sorbed and aqueous phases. An analytical solution in Laplace space is derived and numerically inverted. The model is implemented in the code BIOFRAC, which is written in Fortran 90. The model is derived for two phases: Phase I, where adsorption-desorption is dominant, and Phase II, where attachment-detachment is dominant. Phase I ends when enough bacteria have accumulated to fully cover the substratum. The model for Phase I was verified by comparison with the Ogata-Banks solution, and the model for Phase II was verified by comparison with a non-homogeneous version of the Ogata-Banks solution. After verification, a sensitivity analysis on the input parameters was performed. The sensitivity analysis was conducted by varying one input parameter while all others were fixed and observing the impact on the shape of the curve describing bacterial concentration versus time. Increasing the fracture aperture allows more transport and thus more accumulation, which diminishes the duration of Phase I. The larger the bacteria, the faster the substratum is covered. Increasing the adsorption rate was observed to increase the duration of Phase I. Contrary to the assumption of uniform biofilm thickness, the accumulation starts from the inlet, and the bacterial concentration in the aqueous phase moving towards the outlet declines, slowing the accumulation at the outlet. Increasing the desorption rate reduces the duration of Phase I, speeding up the accumulation. It was also observed that Phase II is of longer duration than Phase I. Increasing the attachment rate lengthens the accumulation period. High rates of detachment speed up the transport. The growth and decay rates have no significant effect on transport, although increases in the concentrations in both the aqueous and sorbed phases are observed. Irreversible adsorption can stop accumulation completely if its value is high.
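The solution is derived in Laplace space and inverted numerically. The abstract does not name the inversion scheme, so the following is a minimal, hypothetical sketch of one standard option, the Gaver-Stehfest algorithm, checked here against a transform whose inverse is known; it is not the BIOFRAC implementation.

```python
# Hypothetical sketch: Gaver-Stehfest numerical inversion of a Laplace-space
# solution F(s).  Generic illustration only, not the BIOFRAC code.
import math


def stehfest_weights(n: int) -> list[float]:
    """Gaver-Stehfest weights V_k for an even number of terms n."""
    half = n // 2
    weights = []
    for k in range(1, n + 1):
        total = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            total += (j ** half * math.factorial(2 * j)
                      / (math.factorial(half - j) * math.factorial(j)
                         * math.factorial(j - 1) * math.factorial(k - j)
                         * math.factorial(2 * j - k)))
        weights.append((-1) ** (half + k) * total)
    return weights


def invert_laplace(F, t: float, n: int = 12) -> float:
    """Approximate f(t) from its Laplace transform F(s) at a single time t."""
    ln2_over_t = math.log(2.0) / t
    weights = stehfest_weights(n)
    return ln2_over_t * sum(v * F((k + 1) * ln2_over_t)
                            for k, v in enumerate(weights))


if __name__ == "__main__":
    decay = 0.5                      # assumed first-order decay rate
    F = lambda s: 1.0 / (s + decay)  # transform of exp(-decay * t)
    for t in (1.0, 2.0, 4.0):
        print(t, invert_laplace(F, t), math.exp(-decay * t))
```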

Relevance: 20.00%

Abstract:

A high-performance liquid chromatographic method employing two columns connected in series and separated by a switching valve has been developed for the analysis of the insecticide/nematicide oxamyl (methyl N',N'-dimethyl-N-[(methylcarbamoyl)oxy]-1-thiooxamimidate) and two of its metabolites. A variation of this method involving two reverse-phase columns was employed to monitor the persistence and translocation of oxamyl in treated peach seedlings. It was possible to simultaneously analyse for oxamyl and its corresponding oxime (methyl N',N'-dimethyl-N-hydroxy-1-thiooxamimidate), a major metabolite of oxamyl in plants, without prior cleanup of the samples. The method allowed detection of 0.058 µg oxamyl and 0.035 µg oxime. On treated peach leaves, oxamyl was found to dissipate rapidly during the first two-week period, followed by a period of slow decomposition. Movement of oxamyl or its oxime did not occur in detectable quantities to untreated leaves or to the roots or soil. A second variation of the method, which employed a size-exclusion column as the first column and a reverse-phase column as the second, was used to monitor the degradation of oxamyl in treated, planted corn seeds and was suitable for simultaneous analysis of oxamyl, its oxime and dimethylcyanoformamide (DMCF), a metabolite of oxamyl. The method allowed detection of 0.02 µg oxamyl, 0.02 µg oxime and 0.005 µg DMCF. Oxamyl was found to persist for a period of 5-6 weeks, which is long enough for oxamyl seed treatment to be considered as a potential means of protecting young corn plants from nematode attack. Decomposition was found to be more rapid in unsterilized soil than in sterilized soil. DMCF was found to have a nematostatic effect at high concentrations (2,000 ppm), but at lower concentrations no effect on nematode mobility was observed. Oxamyl, on the other hand, was found to reduce the mobility of nematodes at concentrations down to 4 ppm.

Relevance: 20.00%

Abstract:

This thesis introduces the Salmon Algorithm, a search metaheuristic which can be used for a variety of combinatorial optimization problems. The algorithm is loosely based on the path-finding behaviour of salmon swimming upstream to spawn. There are a number of tunable parameters in the algorithm, so experiments were conducted to find the optimum parameter settings for different search spaces. The algorithm was tested on one instance of the Traveling Salesman Problem and found to have superior performance to an Ant Colony Algorithm and a Genetic Algorithm. It was then tested on three coding-theory problems: optimal edit codes, optimal Hamming distance codes, and optimal covering codes. The algorithm produced improvements on the best known values for five of the six test cases using edit codes. It matched the best known results on four of the seven Hamming codes as well as on all three covering codes. The results suggest the Salmon Algorithm is competitive with established guided random search techniques, and may be superior in some search spaces.

Relevance: 20.00%

Abstract:

Understanding the machinery of gene regulation in order to control gene expression has been one of the main focuses of bioinformaticians for years. We use a multi-objective genetic algorithm to evolve a specialized version of side effect machines for degenerate motif discovery. We compare several suggested objectives for the motifs they find, test different multi-objective scoring schemes and probabilistic models for the background sequences, and report our results on a synthetic dataset and some biological benchmarking suites. We conclude with a comparison of our algorithm against some widely used motif discovery algorithms in the literature and suggest future directions for research in this area.

Relevance: 20.00%

Abstract:

DNA assembly is among the most fundamental and difficult problems in bioinformatics. Near-optimal assembly solutions are available for bacterial and small genomes; however, assembling large and complex genomes, especially the human genome, using Next-Generation Sequencing (NGS) technologies has proven very difficult because of the highly repetitive and complex nature of the human genome, short read lengths, uneven data coverage and tools that are not specifically built for human genomes. Moreover, many algorithms are not even scalable to human-genome datasets containing hundreds of millions of short reads. The DNA assembly problem is usually divided into several subproblems, including DNA data error detection and correction, contig creation, scaffolding and contig orientation; each can be seen as a distinct research area. This thesis specifically focuses on creating contigs from the short reads and combining them with outputs from other tools in order to obtain better results. Three different assemblers, SOAPdenovo [Li09], Velvet [ZB08] and Meraculous [CHS+11], are selected for comparative purposes in this thesis. The results obtained show that this thesis's work produces results comparable to the other assemblers, and that combining our contigs with the outputs from other tools produces the best results, outperforming all other investigated assemblers.

Relevance: 20.00%

Abstract:

Ordered gene problems are a very common class of optimization problems. Because of their popularity, countless algorithms have been developed in an attempt to find high-quality solutions to them. It is also common to see many other types of problems reduced to ordered-gene style problems, since many popular heuristics and metaheuristics exist for them. Multiple ordered gene problems are studied here, namely the travelling salesman problem, the bin packing problem, and the graph colouring problem. In addition, two bioinformatics problems not traditionally seen as ordered gene problems are studied: DNA error correction and DNA fragment assembly. These problems are studied with multiple variations and combinations of heuristics and metaheuristics with two distinct types of representations. The majority of the algorithms are built around the Recentering-Restarting Genetic Algorithm. The algorithm variations were successful on all problems studied, particularly the two bioinformatics problems. For DNA error correction, multiple cases were found with 100% of the codes being corrected. The algorithm variations were also able to beat all other state-of-the-art DNA fragment assemblers on 13 out of 16 benchmark problem instances.
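By way of illustration of the ordered-gene (permutation) representation these problems share, the sketch below implements a common variant of order crossover (OX) on permutation chromosomes. It is a generic textbook operator applied to made-up toy data, not the Recentering-Restarting Genetic Algorithm itself.

```python
# Generic order crossover (OX) on permutation ("ordered gene") chromosomes.
# Illustrative only; not the Recentering-Restarting Genetic Algorithm.
import random


def order_crossover(parent_a: list[int], parent_b: list[int]) -> list[int]:
    """Copy a random slice from parent_a, then fill the gaps in parent_b's order."""
    size = len(parent_a)
    lo, hi = sorted(random.sample(range(size), 2))
    child = [None] * size
    child[lo:hi + 1] = parent_a[lo:hi + 1]
    taken = set(child[lo:hi + 1])
    fill = [gene for gene in parent_b if gene not in taken]
    positions = [i for i in range(size) if child[i] is None]
    for pos, gene in zip(positions, fill):
        child[pos] = gene
    return child


if __name__ == "__main__":
    random.seed(1)
    tour_a = [0, 1, 2, 3, 4, 5, 6, 7]   # e.g. a toy TSP tour over 8 cities
    tour_b = [7, 6, 5, 4, 3, 2, 1, 0]
    print(order_crossover(tour_a, tour_b))  # a valid permutation of 0..7
```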

Relevance: 20.00%

Abstract:

Understanding the relationship between genetic diseases and the genes associated with them is an important problem for human health. The vast amount of data created by the large number of high-throughput experiments performed in the last few years has resulted in unprecedented growth in computational methods to tackle the disease gene association problem. Nowadays, it is clear that a genetic disease is not a consequence of a defect in a single gene. Instead, the disease phenotype is a reflection of various genetic components interacting in a complex network. In fact, genetic diseases, like any other phenotype, occur as a result of various genes working in sync with one another in one or several biological modules. Using a genetic algorithm, our method tries to evolve communities containing the set of potential disease genes most likely to be involved in a given genetic disease. Given a set of known disease genes, we first obtain a protein-protein interaction (PPI) network containing all the known disease genes. All the other genes inside the obtained PPI network are then considered candidate disease genes, as they lie in the vicinity of the known disease genes in the network. Our method attempts to find communities of potential disease genes that work strongly with one another and with the set of known disease genes. As a proof of concept, we tested our approach on 16 breast cancer genes and 15 Parkinson's disease genes. We obtained comparable or better results than CIPHER, ENDEAVOUR and GPEC, three of the most reliable and frequently used disease-gene ranking frameworks.
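A minimal sketch of the candidate-selection step described above, written with networkx; the gene names, the edge list and the fitness function are made-up placeholders for illustration, not the thesis's actual data or scoring.

```python
# Minimal sketch of candidate disease-gene selection on a PPI network.
# Gene identifiers, edges and the fitness measure are illustrative placeholders.
import networkx as nx

# Toy PPI network; real input would come from a PPI database.
ppi = nx.Graph()
ppi.add_edges_from([
    ("BRCA1", "BARD1"), ("BRCA1", "GENE_X"), ("BARD1", "GENE_X"),
    ("GENE_X", "GENE_Y"), ("GENE_Y", "GENE_Z"), ("BRCA2", "GENE_Y"),
])

known = {"BRCA1", "BRCA2"}  # known disease genes (toy example)

# Candidates: every gene in the neighbourhood of the known genes.
candidates = set()
for gene in known:
    candidates.update(ppi.neighbors(gene))
candidates -= known


def community_fitness(community: set[str]) -> int:
    """Toy fitness: edges internal to the community plus edges to known genes."""
    members = community | known
    return sum(1 for u, v in ppi.edges() if u in members and v in members)


print("candidates:", candidates)
print("fitness of {'GENE_X'}:", community_fitness({"GENE_X"}))
```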

Relevance: 20.00%

Abstract:

In this thesis we analyze dictionary graphs and some other kinds of graphs using the PageRank algorithm. We calculated the correlation between the degree and the PageRank of all nodes for graphs obtained from the Merriam-Webster dictionary, a French dictionary and the WordNet hypernym and synonym dictionaries. Our conclusion was that PageRank can be a good tool for comparing the quality of dictionaries. We also studied some artificial social and random graphs. We found that when we omitted some random nodes from each of the graphs, there were no significant changes in the ranking of the nodes according to their PageRank. We also discovered that some of the social graphs selected for our study were less resistant to such changes in PageRank.
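The degree-PageRank comparison could be carried out along the lines of the following sketch, which uses networkx on a synthetic scale-free graph; the graph model and the use of a Pearson correlation are assumptions, since the dictionary data themselves are not reproduced here.

```python
# Sketch: correlation between node degree and PageRank on a synthetic graph.
# A Barabási–Albert graph stands in for the dictionary graphs (assumption).
import networkx as nx
from scipy.stats import pearsonr

graph = nx.barabasi_albert_graph(n=2000, m=3, seed=42)

pagerank = nx.pagerank(graph, alpha=0.85)
degree = dict(graph.degree())

nodes = sorted(graph.nodes())
corr, p_value = pearsonr([degree[v] for v in nodes],
                         [pagerank[v] for v in nodes])
print(f"degree-PageRank Pearson correlation: {corr:.3f} (p = {p_value:.2g})")
```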

Relevance: 20.00%

Abstract:

Hardware/Software systems are becoming indispensable in every aspect of daily life. The growing presence of these systems in various products and services creates a need for methods to develop them efficiently. However, efficient design of these systems is limited by several factors, among them the growing complexity of applications, increasing integration density, the heterogeneous nature of products and services, and shrinking time-to-market. Transaction-level modelling (TLM) is considered a promising paradigm for managing design complexity and for providing means of exploring and validating design alternatives at high levels of abstraction. This research proposes a methodology for expressing time in TLM based on timing-constraint analysis. We propose to use a combination of two development paradigms to speed up design: TLM on the one hand, and a methodology for expressing time between different transactions on the other. This synergy allows us to combine, in a single environment, efficient simulation methods and formal analytical methods. We have proposed a new timing-verification algorithm based on a linearization procedure for min/max constraints, together with an optimization technique to improve the algorithm's efficiency. We have completed the mathematical description of all the constraint types presented in the literature. We have developed methods for exploring and refining the communication system, which allowed us to use the timing-verification algorithms at different TLM levels. Since several definitions of TLM exist, within the scope of our research we defined a specification and simulation methodology for Hardware/Software systems based on the TLM paradigm. In this methodology, several modelling concepts can be considered separately. Based on the use of modern software-engineering technologies such as XML, XSLT, XSD, object-oriented programming and several others provided by the .Net environment, the proposed methodology presents an approach that makes it possible to reuse intermediate models in order to cope with the time-to-market constraint. It provides a general approach to system modelling that separates design aspects such as the models of computation used to describe the system at multiple levels of abstraction. As a result, the functionality of the system can be clearly identified in the system model without details tied to the development platforms, which improves the portability of the application model.

Relevance: 20.00%

Abstract:

The projection method and Sasaki's variational approach are two techniques for obtaining a divergence-free vector field from an arbitrary initial field. For a high-altitude wind velocity, a velocity field on a staggered grid is generated above a topography described by an analytical function. The Cartesian approach known as the Embedded Boundary Method is used to solve a Poisson equation arising from the projection on an irregular domain with mixed boundary conditions. The resulting solution is used to correct the initial field so as to obtain a field that satisfies mass conservation and also accounts for the effects of the terrain geometry. The velocity field generated in this way can then be used to propagate a forest fire over the topography with the level-set method. The algorithm is described for the two- and three-dimensional cases, and convergence tests are carried out.
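As a rough illustration of the projection step (not the thesis's embedded-boundary solver on an irregular domain), the sketch below removes the divergence from a 2D velocity field on a periodic grid by solving the Poisson equation spectrally; the periodic domain, the FFT solver and the chosen initial field are simplifying assumptions.

```python
# Sketch of a projection step: make a 2D velocity field divergence-free by
# solving  laplacian(phi) = div(u)  and subtracting grad(phi).
# Periodic domain and FFT Poisson solver are simplifying assumptions; the
# thesis instead uses an embedded-boundary solver with mixed boundary conditions.
import numpy as np

n = 64
L = 2 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

# Arbitrary initial wind field (not divergence-free).
u = np.cos(X) * np.sin(Y) + 0.3 * np.sin(2 * X)
v = np.sin(X) * np.cos(Y)

kx = np.fft.fftfreq(n, d=L / n) * 2 * np.pi
KX, KY = np.meshgrid(kx, kx, indexing="ij")
k2 = KX**2 + KY**2
k2[0, 0] = 1.0  # avoid division by zero; the mean mode carries no divergence

# Divergence in spectral space, then phi from the Poisson equation.
div_hat = 1j * KX * np.fft.fft2(u) + 1j * KY * np.fft.fft2(v)
phi_hat = -div_hat / k2
phi_hat[0, 0] = 0.0

# Corrected field: u_new = u - grad(phi).
u_new = u - np.real(np.fft.ifft2(1j * KX * phi_hat))
v_new = v - np.real(np.fft.ifft2(1j * KY * phi_hat))

div_new = np.real(np.fft.ifft2(1j * KX * np.fft.fft2(u_new)
                               + 1j * KY * np.fft.fft2(v_new)))
print("max |divergence| after projection:", np.abs(div_new).max())
```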