946 results for Emulators (Computer programs)


Relevance:

80.00%

Publisher:

Abstract:

A common point of reference is needed to describe the three-dimensional arrangements of bases and base-pairs in nucleic acid structures. The different standards used in computer programs created for this purpose give rise to conflicting interpretations of the same structure. For example, parts of a structure that appear "normal" according to one computational scheme may be highly unusual according to another, and vice versa. It is thus difficult to carry out comprehensive comparisons of nucleic acid structures and to pinpoint unique conformational features in individual structures.

Relevance:

80.00%

Publisher:

Abstract:

In the design of small organic paramagnets, the target structures may conceptually be divided into spin-containing units (SCs) and ferromagnetic coupling units (FCs). The synthesis and direct observation of a series of hydrocarbon tetraradicals designed to test the ferromagnetic coupling ability of m-phenylene, 1,3-cyclobutane, 1,3-cyclopentane, and 2,4-adamantane (a chair 1,3-cyclohexane) using Berson TMMs and cyclobutanediyls as SCs are described. While 1,3-cyclobutane and m-phenylene are good ferromagnetic coupling units under these conditions, the ferromagnetic coupling ability of 1,3-cyclopentane is poor, and 1,3-cyclohexane is apparently an antiferromagnetic coupling unit. In addition, this is the first report of ferromagnetic coupling between the spins of localized biradical SCs.

The poor coupling of 1,3-cyclopentane has enabled a study of the variable-temperature behavior of a 1,3-cyclopentane FC-based tetraradical in its triplet state. By fitting the observed data to the usual Boltzmann statistics, we have been able to determine the separation of the ground quintet and excited triplet states. From these data, we have inferred the singlet-triplet gap in 1,3-cyclopentanediyl to be 900 cal/mol, in remarkable agreement with theoretical predictions of this number.
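
As an illustration of such a fit, the following is a minimal sketch with hypothetical variable-temperature data, in which the quintet and triplet degeneracies (5 and 3) supply the Boltzmann weights; it is not the authors' original analysis code.

```python
# A minimal sketch of the Boltzmann-statistics fit described above, using
# hypothetical (temperature, triplet-signal) data.
import numpy as np
from scipy.optimize import curve_fit

KB = 1.987e-3  # gas constant in kcal/(mol*K)

def triplet_fraction(T, gap_kcal, scale):
    """Fractional population of an excited triplet lying gap_kcal above a
    ground quintet: degeneracies 3 (triplet) and 5 (quintet)."""
    boltz = 3.0 * np.exp(-gap_kcal / (KB * T))
    return scale * boltz / (5.0 + boltz)

# Hypothetical variable-temperature EPR intensities (signal * T removes the
# Curie-law factor, leaving the population term).
T = np.array([4.0, 8.0, 15.0, 25.0, 40.0, 60.0])
signal_T = np.array([0.02, 0.10, 0.21, 0.30, 0.35, 0.37])

popt, pcov = curve_fit(triplet_fraction, T, signal_T, p0=[0.05, 1.0])
print(f"quintet-triplet gap ~ {popt[0] * 1000:.0f} cal/mol")
```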

The ability to simulate EPR spectra has been crucial to the assignments made here. A powder EPR simulation package is described that uses the Zeeman and dipolar terms to calculate powder EPR spectra for triplet and quintet states.
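
A minimal sketch of the powder-averaging idea follows, restricted to a triplet with an axial zero-field splitting treated to first order (the package described above also handles quintets); the field values are hypothetical.

```python
# Powder averaging for a triplet (S = 1) EPR spectrum using only the
# first-order Zeeman + axial zero-field-splitting (dipolar) resonance fields.
import numpy as np

B0 = 3350.0   # center field (gauss), hypothetical X-band value
D = 150.0     # zero-field splitting in field units (gauss), hypothetical

# Uniform powder average: cos(theta) uniform on [0, 1].
cos_t = np.random.default_rng(0).uniform(0.0, 1.0, 200_000)
ang = 0.5 * D * (3.0 * cos_t**2 - 1.0)

# The two Delta m = +/-1 transitions of a triplet, shifted symmetrically.
fields = np.concatenate([B0 + ang, B0 - ang])

# Histogram of resonance fields approximates the powder absorption pattern,
# with turning points at B0 +/- D/2 and B0 +/- D.
spectrum, edges = np.histogram(fields, bins=400, range=(B0 - 2 * D, B0 + 2 * D))
```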

Methods for characterizing paramagnetic samples by SQUID magnetometry have been developed, including robust routines for data fitting and analysis. A precursor to a potentially magnetic polymer was prepared by ring-opening metathesis polymerization (ROMP), and doped samples of this polymer were studied by magnetometry. While the present results are not positive, calculations have suggested modifications in this structure which should lead to the desired behavior.

Source listings for all computer programs are given in the appendix.

Relevance:

80.00%

Publisher:

Abstract:

The interactions of N2, formic acid and acetone on the Ru(001) surface are studied using thermal desorption mass spectrometry (TDMS), electron energy loss spectroscopy (EELS), and computer modeling.

Low energy electron diffraction (LEED), EELS and TDMS were used to study chemisorption of N2 on Ru(001). Adsorption at 75 K produces two desorption states. Adsorption at 95 K fills only the higher energy desorption state and produces a (√3 x √3)R30° LEED pattern. EEL spectra indicate both desorption states are populated by N2 molecules bonded "on-top" of Ru atoms.

Monte Carlo simulation results are presented for N2 on Ru(001) using a kinetic lattice gas model with precursor-mediated adsorption, desorption and migration. The model gives good agreement with experimental data. The island growth rate was computed using the same model and is well fit by R(t)^m - R(t0)^m = At, with m approximately 8. The island size was determined from the width of the superlattice diffraction feature.
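
A minimal sketch of recovering the growth exponent from such data by a grid search over m; the island sizes below are hypothetical illustrations, not the thesis data.

```python
# Fit the island-growth law R(t)^m - R(t0)^m = A*t by scanning m and doing a
# least-squares slope fit for A at each candidate exponent.
import numpy as np

t = np.array([0., 100., 300., 1000., 3000.])   # time (s), hypothetical
R = np.array([10.0, 13.5, 15.3, 17.8, 20.4])   # island size, hypothetical
R0 = R[0]

def relative_misfit(m):
    y = R[1:]**m - R0**m                        # growth law predicts y = A * t
    A = np.dot(y, t[1:]) / np.dot(t[1:], t[1:])  # least-squares slope through origin
    return np.sum((y - A * t[1:])**2) / np.sum(y**2)

ms = np.arange(2.0, 12.01, 0.25)
m_best = ms[np.argmin([relative_misfit(m) for m in ms])]
print(f"best-fit growth exponent m ~ {m_best:.2f}")   # close to 8 for these data
```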

The techniques, algorithms and computer programs used for simulations are documented. Coordinate schemes for indexing sites on a 2-D hexagonal lattice, programs for simulation of adsorption and desorption, techniques for analysis of ordering, and computer graphics routines are discussed.
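
As an illustration of one such coordinate scheme, the sketch below uses axial indices (q, r) for a periodic two-dimensional hexagonal lattice; the details are illustrative rather than the thesis's actual conventions.

```python
# Axial coordinates for a 2-D hexagonal (triangular) lattice: each site (q, r)
# has six nearest neighbors given by fixed index offsets.
AXIAL_NEIGHBORS = [(+1, 0), (-1, 0), (0, +1), (0, -1), (+1, -1), (-1, +1)]

def neighbors(q, r, size):
    """Six nearest neighbors of site (q, r) on a periodic size x size lattice."""
    return [((q + dq) % size, (r + dr) % size) for dq, dr in AXIAL_NEIGHBORS]

def to_cartesian(q, r, a=1.0):
    """Map axial indices to Cartesian positions (lattice constant a)."""
    return (a * (q + 0.5 * r), a * (3 ** 0.5 / 2) * r)
```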

The adsorption of formic acid on Ru(001) has been studied by EELS and TDMS. Large exposures produce a molecular multilayer species. A monodentate formate, bidentate formate, and a hydroxyl species are stable intermediates in formic acid decomposition. The monodentate formate species is converted to the bidentate species by heating. Formic acid decomposition products are CO2, CO, H2, H2O and oxygen adatoms. The ratio of desorbed CO to CO2 increases both with slower heating rates and with lower coverages.

The existence of two different forms of adsorbed acetone, a side-on form bonded through the oxygen and acyl carbon and an end-on form bonded through the oxygen, has been verified by EELS. On Pt(111), only the end-on species is observed. On clean Ru(001) and p(2 x 2)O-precovered Ru(001), both forms coexist. The side-on species is dominant on clean Ru(001), while O stabilizes the end-on form. The end-on form desorbs molecularly. The stability of the bonding geometries is explained by surface Lewis acidity and by comparison to organometallic coordination complexes.

Relevance:

80.00%

Publisher:

Abstract:

The 0.2% experimental accuracy of the 1968 Beers and Hughes measurement of the annihilation lifetime of ortho-positronium motivates the attempt to compute the first-order quantum electrodynamic corrections to this lifetime. The theoretical problems arising in this computation are here studied in detail up to the point of preparing the necessary computer programs and using them to carry out some of the less demanding steps; the computation itself has not yet been completed. Analytic evaluation of the contributing Feynman diagrams is superior to numerical evaluation and, for this process, can be carried out with the aid of the Reduce algebra manipulation computer program.

The relation of the positronium decay rate to the electron-positron annihilation-in-flight amplitude is derived in detail, and it is shown that at threshold annihilation-in-flight, Coulomb divergences appear while infrared divergences vanish. The threshold Coulomb divergences in the amplitude cancel against like divergences in the modulating continuum wave function.

Using the lowest order diagrams of electron-positron annihilation into three photons as a test case, various pitfalls of computer algebraic manipulation are discussed along with ways of avoiding them. The computer manipulation of artificial polynomial expressions is preferable to the direct treatment of rational expressions, even though redundant variables may have to be introduced.
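
The tactic can be illustrated in a modern computer algebra system (the thesis used Reduce): a rational subexpression is replaced by a redundant variable so that intermediate algebra stays polynomial, and the substitution is undone only at the end. The symbols below are hypothetical.

```python
# Replace the rational 1/(s - m**2) by a redundant variable z with the side
# relation z*(s - m**2) = 1, so that intermediate algebra stays polynomial.
import sympy as sp

s, m, z = sp.symbols('s m z')

# Work with the polynomial proxy z instead of the rational expression:
expr = sp.expand((s + m**2) * z + (s - m**2) * z**2)  # polynomial in s, m, z

# Substitute back only at the end, then simplify once.
result = sp.simplify(expr.subs(z, 1 / (s - m**2)))
print(result)   # (s + m**2 + 1)/(s - m**2)
```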

Special properties of the contributing Feynman diagrams are discussed, including the need to restore gauge invariance to the sum of the virtual photon-photon scattering box diagrams by means of a finite subtraction.

A systematic approach to the Feynman-Brown method of decomposition of single-loop diagram integrals with spin-related tensor numerators is developed in detail. This approach allows the Feynman-Brown method to be straightforwardly programmed in the Reduce algebra manipulation language.

The fundamental integrals needed in the wake of the application of the Feynman-Brown decomposition are exhibited, and the methods used to evaluate them, primarily dispersion techniques, are briefly discussed.

Finally, it is pointed out that while the techniques discussed have permitted the computation of a fair number of the simpler integrals and diagrams contributing to the first order correction of the ortho-positronium annihilation rate, further progress with the more complicated diagrams and with the evaluation of traces is heavily contingent on obtaining access to adequate computer time and core capacity.

Relevance:

80.00%

Publisher:

Abstract:

Several types of seismological data, including surface wave group and phase velocities, travel times from large explosions, and teleseismic travel time anomalies, have indicated that there are significant regional variations in the upper few hundred kilometers of the mantle beneath continental areas. Body wave travel times and amplitudes from large chemical and nuclear explosions are used in this study to delineate the details of these variations beneath North America.

As a preliminary step in this study, theoretical P wave travel times, apparent velocities, and amplitudes have been calculated for a number of proposed upper mantle models: those of Gutenberg, Jeffreys, Lehmann, and Lukk and Nersesov. These quantities have been calculated for both P and S waves for model CIT11GB, which is derived from surface wave dispersion data. First arrival times for all the models except that of Lukk and Nersesov are in close agreement, but the travel time curves for later arrivals are both qualitatively and quantitatively very different. For model CIT11GB, there are two large, overlapping regions of triplication of the travel time curve, produced by regions of rapid velocity increase near depths of 400 and 600 km. Throughout the distance range from 10 to 40 degrees, the later arrivals produced by these discontinuities have larger amplitudes than the first arrivals. The amplitudes of body waves, in fact, are extremely sensitive to small variations in the velocity structure, and provide a powerful tool for studying structural details.
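
For a flat, layered medium the corresponding travel-time computation is elementary, and the sketch below shows it for hypothetical layer velocities; the study itself used spherically symmetric (and laterally varying) models, for which the analogous integrals are taken over radius.

```python
# Distance X(p) and travel time T(p) versus ray parameter p for a stack of
# homogeneous flat layers; velocities and thicknesses are hypothetical.
import numpy as np

v = np.array([6.0, 8.1, 7.9, 8.6])       # layer velocities (km/s), hypothetical
h = np.array([35.0, 60.0, 50.0, 80.0])   # layer thicknesses (km), hypothetical

def turning_ray(p):
    """X(p) and T(p) for a ray with parameter p (s/km) that turns below the
    layers it can traverse (down-and-back path, hence the factor 2)."""
    ok = p * v < 1.0                      # layers the ray actually penetrates
    eta = np.sqrt(1.0 - (p * v[ok]) ** 2)
    X = 2.0 * np.sum(h[ok] * p * v[ok] / eta)
    T = 2.0 * np.sum(h[ok] / (v[ok] * eta))
    return X, T

for p in (0.120, 0.118, 0.115):          # decreasing p samples deeper rays
    X, T = turning_ray(p)
    print(f"p={p:.3f} s/km  X={X:7.1f} km  T={T:6.1f} s")
```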

Most of eastern North America, including the Canadian Shield, has a Pn velocity of about 8.1 km/sec, with a nearly abrupt increase in compressional velocity of about 0.3 km/sec at a depth varying regionally between 60 and 90 km. Variations in the structure of this part of the mantle are significant even within the Canadian Shield. The low-velocity zone is a minor feature in eastern North America and is subject to pronounced regional variations. It is 30 to 50 km thick and occurs somewhere in the depth range from 80 to 160 km. The velocity decrease is less than 0.2 km/sec.

Consideration of the absolute amplitudes indicates that the attenuation due to anelasticity is negligible for 2 Hz waves in the upper 200 km along the southeastern and southwestern margins of the Canadian Shield. For compressional waves the average Q for this region is > 3000. The amplitudes also indicate that the velocity gradient is at least 2 x 10^-3 sec^-1 both above and below the low-velocity zone, implying that the temperature gradient is < 4.8°C/km if the regions are chemically homogeneous.

In western North America, the low-velocity zone is a pronounced feature, extending to the base of the crust and having minimum velocities of 7.7 to 7.8 km/sec. Beneath the Colorado Plateau and Southern Rocky Mountains provinces, there is a rapid velocity increase of about 0.3 km/sec, similar to that observed in eastern North America, but near a depth of 100 km.

Complicated travel time curves observed on profiles with stations in both eastern and western North America can be explained in detail by a model taking into account the lateral variations in the structure of the low-velocity zone. These variations involve primarily the velocity within the zone and the depth to the top of the zone; the depth to the bottom is, for both regions, between 140 and 160 km.

The depth to the transition zone near 400 km also varies regionally, by about 30-40 km. These differences imply variations of 250 °C in the temperature or 6 % in the iron content of the mantle, if the phase transformation of olivine to the spinel structure is assumed responsible. The structural variations at this depth are not correlated with those at shallower depths, and follow no obvious simple pattern.

The computer programs used in this study are described in the Appendices. The program TTINV (Appendix IV) fits spherically symmetric earth models to observed travel time data. The method, described in Appendix III, resembles conventional least-squares fitting, using partial derivatives of the travel time with respect to the model parameters to perturb an initial model. The ill-conditioning usual in least-squares techniques is avoided by simultaneously minimizing both the travel time residuals and the model perturbations.
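
A minimal sketch of such a damped least-squares step follows, with a hypothetical partial-derivative matrix G and residual vector; the damping parameter lam plays the role of the perturbation-minimizing constraint.

```python
# Damped least squares: minimize |G*dm - dres|^2 + lam*|dm|^2, keeping both
# the travel-time residuals and the model perturbation small.
import numpy as np

def damped_step(G, dres, lam):
    """Solve (G^T G + lam*I) dm = G^T dres for the model perturbation dm."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ dres)

# Hypothetical partials (travel times w.r.t. model parameters) and residuals
# for a 3-parameter model observed at 5 distances:
rng = np.random.default_rng(1)
G = rng.normal(size=(5, 3))
dres = rng.normal(size=5)
dm = damped_step(G, dres, lam=0.1)   # perturbation applied to the initial model
```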

Spherically symmetric earth models, however, have been found inadequate to explain most of the observed travel times in this study. TVT4, a computer program that performs ray theory calculations for a laterally inhomogeneous earth model, is described in Appendix II. Appendix I gives a derivation of seismic ray theory for an arbitrarily inhomogeneous earth model.

Relevance:

80.00%

Publisher:

Abstract:

The design of a two-stream wind tunnel was undertaken to allow the simulation and study of certain features of the flow field around the blades of high-speed axial-flow turbomachinery. The mixing of the two parallel streams, with design Mach numbers of 1.4 and 0.7 respectively, will simulate the transonic Mach number distribution generally obtained along the tips of the first-stage blades in large bypass-fan engines.

The GALCIT hypersonic compressor plant will be used as an air supply for the wind tunnel, and consequently the calculations contained in the first chapter are derived from the characteristics and the performance of this plant.

The transonic part of the nozzle is computed by using a method developed by K. O. Friedrichs. This method consists essentially of expanding the coordinates and the characteristics of the flow in power series. The development begins with prescribing, more or less arbitrarily, a Mach number distribution along the centerline of the nozzle. This method has been programmed for an IBM 360 computer to define the wall contour of the nozzle.
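
The sketch below is not the Friedrichs power-series method; it is a much cruder quasi-one-dimensional stand-in showing how a prescribed centerline Mach number distribution maps to a wall contour through the isentropic area-Mach relation. All dimensions are hypothetical.

```python
# Quasi-1-D nozzle sizing: convert a prescribed centerline M(x) into a wall
# contour via the isentropic area-Mach relation A/A*.
import numpy as np

GAMMA = 1.4

def area_ratio(M):
    """A/A* from the isentropic area-Mach relation."""
    e = (GAMMA + 1.0) / (2.0 * (GAMMA - 1.0))
    return (1.0 / M) * ((2.0 / (GAMMA + 1.0))
                        * (1.0 + 0.5 * (GAMMA - 1.0) * M**2)) ** e

x = np.linspace(0.0, 10.0, 101)       # axial stations (cm), hypothetical
M = 1.0 + 0.4 * x / x[-1]             # prescribed M(x): 1.0 -> 1.4, hypothetical
r_throat = 2.0                        # throat half-height (cm), hypothetical
r_wall = r_throat * area_ratio(M)     # 2-D channel: half-height scales as A/A*
```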

A further computation is carried out to correct the contour for boundary layer buildup. This boundary layer analysis included geometry, pressure gradient, and Mach number effects. The subsonic nozzle is calculated (including boundary layer buildup) by using the same computer programs. Finally, the mixing zone downstream of the splitter plate was investigated to prescribe the wall contour correction necessary to ensure a constant-pressure test section.

Relevance:

80.00%

Publisher:

Abstract:

Steady-state procedures, by their very nature, cannot deal with dynamic situations. Statistical models require extensive calibration, and predictions often have to be made for environmental conditions outside the original calibration range. In addition, the calibration requirement makes them difficult to transfer to other lakes. To date, no computer programs have been developed which will successfully predict changes in species of algae. The obvious solution to these limitations is to apply our limnological knowledge to the problem and develop functional models, so reducing the requirement for such rigorous calibration. Reynolds has proposed a model, based on fundamental principles of algal response to environmental events, which has successfully recreated the maximum observed biomass, the timing of events, and a fair simulation of the species succession in several lakes. A forerunner of this model was developed jointly with Welsh Water under contract to Messrs. Wallace Evans and Partners, for use in the Cardiff Bay Barrage study. In this paper the authors test a much-developed form of this original model against a more complex data set and, using a simple example, show how it can be applied as an aid in the choice of management strategy for the reduction of problems caused by eutrophication. Some further developments of the model are indicated.

Relevance:

80.00%

Publisher:

Abstract:

Computer programs were developed to calculate the parameters commonly used in fisheries statistics: catch per unit effort, catch by species, size distribution, etc. These parameters were computed for collective fishing, purse seine, and beach seine, which are important components of the artisanal fisheries in the Ebrié Lagoon.
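
A minimal sketch of the catch-per-unit-effort computation, with hypothetical landing records; the original programs also produced catch by species and size distributions.

```python
# Aggregate catch and effort by gear, then report CPUE = catch / effort.
from collections import defaultdict

# One record per fishing trip: (gear, effort in hauls, catch by species in kg).
# All values are hypothetical example data.
trips = [
    ("purse seine", 4, {"Ethmalosa": 120.0, "Tilapia": 35.0}),
    ("purse seine", 3, {"Ethmalosa": 90.0}),
    ("beach seine", 6, {"Ethmalosa": 60.0, "Tilapia": 12.0}),
]

catch = defaultdict(float)
effort = defaultdict(float)
for gear, hauls, by_species in trips:
    effort[gear] += hauls
    catch[gear] += sum(by_species.values())

for gear in sorted(catch):
    print(f"{gear}: CPUE = {catch[gear] / effort[gear]:.1f} kg/haul")
```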

Relevance:

80.00%

Publisher:

Abstract:

Mathematical models for heated water outfalls were developed for three flow regions. Near the source, the subsurface discharge from a row of buoyant jets into a stratified ambient water was solved with the jet interference included in the analysis. The analysis of the flow zone close to and at intermediate distances from a surface buoyant jet was developed for the two-dimensional and axisymmetric cases. Far from the source, a passive dispersion model was solved for a two-dimensional situation, taking into consideration the effects of shear current and vertical changes in diffusivity. A significant result from the surface buoyant jet analysis is the ability to predict the onset and location of an internal hydraulic jump; the prediction can be made simply from the source Froude number and a dimensionless surface exchange coefficient. Parametric computer programs for the above models were also developed as a part of this study. This report was submitted in fulfillment of Contract No. 14-12-570 under the sponsorship of the Federal Water Quality Administration.
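
A minimal sketch of the densimetric Froude number that enters such a prediction; the report's actual onset criterion also involves the dimensionless surface exchange coefficient, and the numbers below are hypothetical.

```python
# Densimetric Froude number of a heated surface discharge.
import math

def densimetric_froude(U, h, rho_ambient, rho_jet, g=9.81):
    """F = U / sqrt(g' h), with reduced gravity g' = g * drho / rho_ambient."""
    g_prime = g * (rho_ambient - rho_jet) / rho_ambient
    return U / math.sqrt(g_prime * h)

# Hypothetical outfall: 0.8 m/s discharge, 1.5 m deep, slightly buoyant water.
F = densimetric_froude(U=0.8, h=1.5, rho_ambient=998.2, rho_jet=996.5)
print(f"source densimetric Froude number ~ {F:.2f}")
```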

Relevance:

80.00%

Publisher:

Abstract:

This dissertation investigates the application of quantum-inspired evolutionary algorithms to the synthesis of sequential circuits. Sequential digital systems are a class of circuits capable of executing operations in a given sequence; in sequential circuits, the values of the output signals depend not only on the input signals but also on the current state of the system. Increasingly demanding requirements on the functionality and performance of digital systems call for ever more efficient designs. Designing these circuits by hand has become time-consuming, so the importance of tools for automatic circuit synthesis has grown rapidly. These tools, known as ECAD (Electronic Computer-Aided Design) tools, are computer programs usually based on heuristics. Recently, evolutionary algorithms have also begun to be used as a basis for ECAD tools; such applications are referred to in the literature as evolutionary electronics. The algorithms most commonly used in evolutionary electronics are genetic algorithms and genetic programming. This work presents a study of quantum-inspired evolutionary algorithms as a tool for the automatic synthesis of sequential circuits. This class of algorithms uses the principles of quantum computing to improve the performance of evolutionary algorithms. Traditionally, the design of sequential circuits is divided into five main steps: (i) state machine specification; (ii) state reduction; (iii) state assignment; (iv) synthesis of the control logic; and (v) implementation of the state machine. The quantum-inspired evolutionary algorithm proposed in this work (AEICQ) is applied to the state assignment step. The choice of an optimal state assignment is treated in the literature as a still-open problem, and the assignment chosen for a given state machine has a direct impact on the complexity of its control logic. The results show that the state assignments obtained by the AEICQ indeed lead to circuits of lower complexity than those generated from assignments obtained by other methods. The AEICQ is also used in the control-logic synthesis step; the circuits evolved by the AEICQ are optimized for area and propagation delay. These circuits are competitive with those obtained by other methods, and in some cases are even superior in area and performance, suggesting that this class of algorithms has real potential in electronic circuit design.
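
A minimal sketch of the quantum-inspired evolutionary scheme on which such algorithms are based: each bit is represented by a qubit angle, observed probabilistically, and rotated toward the best solution found so far. The objective function here is a hypothetical stand-in for a state-assignment cost, not the dissertation's encoding.

```python
# Quantum-inspired EA sketch: qubit angles theta with P(bit=1) = sin^2(theta),
# updated by a fixed-step rotation gate toward the best observed solution.
import math
import random

N_BITS, POP, STEP, GENERATIONS = 16, 10, 0.05, 60
rng = random.Random(7)

def observe(thetas):
    """Collapse each qubit: P(bit = 1) = sin^2(theta)."""
    return [1 if rng.random() < math.sin(t) ** 2 else 0 for t in thetas]

def fitness(bits):
    return sum(bits)   # hypothetical objective: maximize the number of 1-bits

population = [[math.pi / 4] * N_BITS for _ in range(POP)]  # unbiased start
best, best_fit = None, -1

for _ in range(GENERATIONS):
    for thetas in population:
        bits = observe(thetas)
        f = fitness(bits)
        if f > best_fit:
            best, best_fit = bits, f
        # Rotation gate: nudge each qubit toward the best solution's bit.
        for i, t in enumerate(thetas):
            target = math.pi / 2 if best[i] == 1 else 0.0
            thetas[i] = t + STEP * (1 if target > t else -1 if target < t else 0)

print(best_fit, best)
```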

Relevance:

80.00%

Publisher:

Abstract:

In the 1980s, the appearance of computer programs friendlier to users and producers of information, together with technological evolution, led public and private institutions to deepen their studies of computer-supported cartographic production systems, aiming at the implementation of Geographic Information Systems (GIS). The lack of coordination among the interested agencies resulted in a large quantity of digital files in need of standardization. In 2007, the National Cartography Commission (CONCAR) approved the Estrutura de Dados Geoespaciais Vetoriais (EDGV) in order to minimize the lack of standardization of cartographic bases. This dissertation focuses on developing a working methodology for converting existing digital cartographic bases from the Mapoteca Topográfica Digital (MTD) standard of the Instituto Brasileiro de Geografia e Estatística (IBGE) to the EDGV standard, and on the potential and limitations of that methodology for the integration and standardization of digital cartographic bases. The methodology is applied to the 1:50,000 topographic sheet of Saquarema, vectorized at IBGE's Coordenação de Cartografia (CCAR) and available on the Internet. Since the EDGV was designed with object-oriented modeling techniques, a mapping to a relational database was necessary, as relational databases are still used by most users and producers of geographic information. One specific objective is to build a database schema, that is, an empty database containing all the object classes, attributes, and respective attribute domains defined in the EDGV, for use in IBGE's cartographic production process. This schema contains descriptions of every object and of its attributes, and lets the user select an attribute's value from a predefined domain list, preventing data-entry errors. This methodology will be of great importance for converting IBGE's existing cartographic bases and thereby generating and distributing cartographic bases in the EDGV standard.
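
A minimal sketch of the "empty schema with predefined domains" idea, using a hypothetical EDGV-style feature class in SQLite; attribute values outside the declared domain are rejected at insertion time.

```python
# Create one feature-class table whose attribute is constrained to a domain;
# the class and domain values below are hypothetical, not the real EDGV names.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE trecho_rodoviario (          -- hypothetical feature class
        id INTEGER PRIMARY KEY,
        nome TEXT,
        situacao_fisica TEXT NOT NULL
            CHECK (situacao_fisica IN ('Construida', 'Em construcao', 'Planejada'))
    )
""")
conn.execute("INSERT INTO trecho_rodoviario (nome, situacao_fisica) VALUES (?, ?)",
             ("BR-101", "Construida"))
try:
    conn.execute("INSERT INTO trecho_rodoviario (nome, situacao_fisica) VALUES (?, ?)",
                 ("BR-999", "Abandonada"))    # outside the domain: rejected
except sqlite3.IntegrityError as err:
    print("domain violation:", err)
```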

Relevance:

80.00%

Publisher:

Abstract:

The Peruvian coast has high seismic activity, which makes stability analyses that account for seismic events indispensable. With the development of new numerical tools, dynamic analysis is becoming increasingly important and common at the design stage, and purely static analyses are no longer justified in locations so vulnerable to earthquakes. This dissertation analyzes the stability of the cliffs of Lima, Peru, considering four distinct slopes in a region strongly affected by seismic shaking. The analyses were carried out with limit equilibrium, finite element, and pseudo-static methods, in order to compare the different approaches. The work presents a complete description of the slopes under study, an assessment of the seismological conditions of the region, and finally the stability of the Lima cliffs, using the 2D computer programs Slide (limit equilibrium method) and Plaxis (finite element method). The results showed that the three methods yield compatible factors of safety, especially for less stratified profiles; for homogeneous profiles, the differences were on the order of 0.5 to 1.0%. The analyses underscored the importance of considering the dynamic condition, and proved quite sensitive to the adopted seismic load values.
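
A minimal sketch of the pseudo-static idea for the simplest case of a planar sliding block; the dissertation's analyses were performed with the 2D programs Slide and Plaxis, and all numbers below are hypothetical.

```python
# Pseudo-static factor of safety for a block on a plane: the earthquake is
# represented by a horizontal seismic coefficient kh acting on the weight.
import math

def pseudo_static_fs(W, beta_deg, c, L, phi_deg, kh):
    """FS for a block of weight W on a plane dipping beta, with cohesion c
    over contact length L, friction angle phi, and seismic coefficient kh."""
    b, p = math.radians(beta_deg), math.radians(phi_deg)
    normal = W * math.cos(b) - kh * W * math.sin(b)
    driving = W * math.sin(b) + kh * W * math.cos(b)
    return (c * L + normal * math.tan(p)) / driving

print(pseudo_static_fs(W=500.0, beta_deg=35.0, c=10.0, L=12.0,
                       phi_deg=30.0, kh=0.0))   # static case
print(pseudo_static_fs(W=500.0, beta_deg=35.0, c=10.0, L=12.0,
                       phi_deg=30.0, kh=0.2))   # with seismic coefficient
```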

Relevance:

80.00%

Publisher:

Abstract:

A weed-removing robot model based on machine-vision navigation and weed recognition was designed; the robot can travel autonomously between crop rows and accurately identify and "remove" weeds. A manipulator-based weeding execution system was designed, the inverse kinematics of the manipulator was solved, and the control program was developed in VC++. Experiments showed that the image-processing algorithm requires little time and can tolerate a certain range of variation in outdoor natural light, and that the manipulator moves smoothly and locates weed targets precisely.
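
As an illustration of such an inverse kinematics solution, the sketch below gives the closed form for a two-link planar arm; the paper's actual manipulator geometry is not specified here, and the link lengths are hypothetical.

```python
# Closed-form inverse kinematics for a two-link planar arm (elbow-down branch).
import math

def ik_2link(x, y, l1, l2):
    """Return joint angles (q1, q2) placing the end effector at (x, y).
    Raises ValueError if the target is outside the reachable workspace."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    q2 = math.acos(c2)   # elbow-down: q2 in [0, pi]
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2), l1 + l2 * math.cos(q2))
    return q1, q2

q1, q2 = ik_2link(x=0.30, y=0.10, l1=0.25, l2=0.20)   # meters, hypothetical
```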

Relevance:

80.00%

Publisher:

Abstract:

The Xinli mine area of the Sanshandao gold mine lies adjacent to the Bohai Sea, and its main exploitable ore deposit occurs in the undersea rock mass; once in production, it is the largest undersea gold mine in China. The mine area faces a latent danger of water bursting, and even of sudden seawater inrush, and there is as yet no mature experience with undersea mining in China. The vein-type deposit is located in the lower wall of a fault, and its possible groundwater sources mainly include bittern, Quaternary pore water, and modern seawater. To ensure the safety of undersea mining, the key technical problems are to survey the flooding conditions of the deposit with suitable methods and to study the potential seawater-inrush pattern.

With the Xinli mine area as a case study, the engineering geological conditions are surveyed in situ, the regional structural pattern and rock-mass framework are characterized, the distribution of the structural planes is modeled by a Monte Carlo method, and the connectivity coefficients of the rock-mass structural planes are calculated.

The regional hydrogeological conditions are analyzed, detailed in-situ hydrogeological investigation and sampling are performed, hydrochemistry and isotope testing and dynamic groundwater monitoring are conducted, the recharge, runoff, and discharge conditions are specified, and the sources of flooding are distinguished. Indices selected from the test results are used to calculate the proportion of each source at individual discharge points and in the total water discharge of the Xinli mine area, and the temporal and spatial variation of each source in the overall deposit flooding is analyzed. In view of the particular conditions of the project, the permeability coefficient tensors of the rock mass are calculated from a fracture-geometry measurement method, and a modified synthetic permeability coefficient is computed from the connectivity coefficients and a few hydraulic test results. A hydrogeological conceptual and mathematical model is then established, and the water yield of the mine is predicted with the Visual Modflow code.

The distribution of surrounding rock-mass deformation and secondary stress is studied by numerical analysis, and the intrinsic mechanism of fault slip caused by excavation of the deposit is examined. The results show that the development of deformation and secondary stress around a vein deposit in the lower wall of a fault differs from that around a thick, massive deposit: the secondary stress caused by excavation is distributed mainly in the upper wall of the fault, and a single surface subsidence center develops. The influences of the fault on rock-mass movement, secondary stress, and hydrogeological structure are analyzed. The secondary stress is blocked by the fault and tensile stress concentrates in the rock mass near it; the original water-blocking structure is destroyed and a new permeable structure forms as primary structural planes expand and fresh fissures appear, so the permeability of the original permeable structure is greatly enhanced and water bursting becomes likely. On this basis, the possible water-inrush pattern and position in the Xinli mine area are predicted.

Several computer programs are developed with object-oriented design methods on the Visual Studio .NET platform. These include a Monte Carlo simulation procedure, a joint-diagramming procedure, a procedure for calculating the connectivity coefficients of structural planes, a permeability-tensor calculation procedure, and a procedure for editing water chemical formulas and calculating water-source mixing proportions. A new computer mapping algorithm for joint iso-density diagrams is presented. Based on the powerful spatial data management and graphical functions of Geographic Information Systems, a management information system for dynamic monitoring of pit water discharge is established with ArcView.
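
A minimal sketch of the classical parallel-plate (Snow) model from which such a fracture-set permeability tensor is typically built; the thesis's modified coefficient additionally weights this by connectivity, and the set parameters below are hypothetical.

```python
# Snow's parallel-plate model: each fracture set with aperture b (m), spacing
# s (m) and unit normal n contributes (b^3 / (12 s)) * (I - n n^T).
import numpy as np

def snow_tensor(sets):
    """Sum the parallel-plate contribution of each fracture set (b, s, n)."""
    k = np.zeros((3, 3))
    for b, s, n in sets:
        n = np.asarray(n, float)
        n /= np.linalg.norm(n)
        k += (b**3 / (12.0 * s)) * (np.eye(3) - np.outer(n, n))
    return k

# Two hypothetical fracture sets: steeply dipping joints and a gentle set.
sets = [
    (2e-4, 0.5, (1.0, 0.0, 0.2)),
    (1e-4, 1.2, (0.0, 0.3, 1.0)),
]
print(snow_tensor(sets))   # intrinsic-permeability-like tensor, units of m^2
```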

Relevance:

80.00%

Publisher:

Abstract:

The primary goal of this report is to demonstrate how considerations from computational complexity theory can inform grammatical theorizing. To this end, generalized phrase structure grammar (GPSG) linguistic theory is revised so that its power more closely matches the limited ability of an ideal speaker-hearer: GPSG Recognition is EXP-POLY time hard, while Revised GPSG Recognition is NP-complete. A second goal is to provide a theoretical framework within which to better understand the wide range of existing GPSG models, embodied in formal definitions as well as in implemented computer programs. A grammar for English and an informal explanation of the GPSG/RGPSG syntactic features are included in appendices.