881 results for MODEL (Computer program language)
Abstract:
Metallic glasses are of interest because of their mechanical properties: they can be ductile as well as brittle. This is true of Pd77.5Cu6Si16.5, a ternary glassy alloy. The most stable metallic glasses are those which are alloys of noble or transition metals. A general formula is postulated as T70-80G30-20, where T stands for one or several 3d transition elements and G includes the metalloid glass formers. Another general formula is A3B to A5B, where B is a metalloid. A computer method utilising the MIGAP computer program of Kaufman is used to calculate the miscibility gap over a range of temperatures. The precipitation of a secondary crystalline phase is postulated around 1500 K. This could produce a dispersed-phase composite with interesting high-temperature strength properties.
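The abstract does not describe MIGAP's internals. As a minimal sketch of how a miscibility gap can be traced over a temperature range, the code below assumes a symmetric regular-solution free energy, G(x) = Omega*x*(1-x) + RT[x ln x + (1-x) ln(1-x)], whose binodal compositions solve dG/dx = 0; the interaction parameter and temperature range are placeholders, not values from the paper.

```python
import numpy as np
from scipy.optimize import brentq

R = 8.314       # gas constant, J/(mol K)
OMEGA = 2.5e4   # placeholder regular-solution interaction parameter, J/mol

def binodal_composition(T):
    """Left binodal composition of a symmetric regular solution at temperature T.

    Solves dG/dx = OMEGA*(1 - 2x) + R*T*ln(x/(1-x)) = 0 on (0, 0.5).
    """
    f = lambda x: OMEGA * (1.0 - 2.0 * x) + R * T * np.log(x / (1.0 - x))
    return brentq(f, 1e-9, 0.5 - 1e-9)

Tc = OMEGA / (2.0 * R)   # critical temperature: the gap closes here
for T in np.linspace(300.0, 0.98 * Tc, 8):
    x = binodal_composition(T)
    print(f"T = {T:7.1f} K   miscibility gap: {x:.4f} .. {1 - x:.4f}")
```

For the symmetric case the common tangent is horizontal, so the binodal reduces to a one-dimensional root-finding problem; an asymmetric model (as a real ternary alloy would need) requires a genuine common-tangent construction instead.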
Abstract:
A user-friendly interactive computer program, CIRDIC, has been developed which calculates the molar ellipticity and molar circular dichroic absorption coefficients from the CD spectrum. In combination with a LOTUS 1-2-3 spreadsheet, it produces plots of these parameters versus wavelength. The code is implemented in Microsoft FORTRAN 77 and runs on any IBM-compatible PC under MS-DOS.
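CIRDIC's source is not shown here, but the two conversions it performs are standard. A minimal sketch assuming the usual conventions (observed ellipticity theta in degrees, concentration in mol/L, path length in cm, and the constant 3298.2 relating molar ellipticity to delta-epsilon):

```python
import numpy as np

def molar_ellipticity(theta_deg, conc_mol_L, path_cm):
    """Molar ellipticity [theta] in deg cm^2 dmol^-1 from observed ellipticity."""
    return 100.0 * theta_deg / (conc_mol_L * path_cm)

def molar_cd_absorption(molar_ellip):
    """Molar circular dichroic absorption coefficient delta-epsilon (L mol^-1 cm^-1)."""
    return molar_ellip / 3298.2

# Illustrative spectrum: wavelengths (nm) and raw ellipticities (deg)
wavelengths = np.linspace(190, 260, 8)
theta = np.array([0.012, 0.018, 0.010, -0.004, -0.011, -0.008, -0.003, 0.0])
me = molar_ellipticity(theta, conc_mol_L=1e-4, path_cm=1.0)
for wl, m, de in zip(wavelengths, me, molar_cd_absorption(me)):
    print(f"{wl:5.0f} nm   [theta] = {m:10.1f}   d-eps = {de:8.3f}")
```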
Abstract:
This work is a case study of applying nonparametric statistical methods to corpus data. We show how to use ideas from permutation testing to answer linguistic questions related to morphological productivity and type richness. In particular, we study the use of the suffixes -ity and -ness in the 17th-century part of the Corpus of Early English Correspondence within the framework of historical sociolinguistics. Our hypothesis is that the productivity of -ity, as measured by type counts, is significantly low in letters written by women. To test such hypotheses, and to facilitate exploratory data analysis, we take the approach of computing accumulation curves for types and hapax legomena. We have developed an open-source computer program which uses Monte Carlo sampling to compute the upper and lower bounds of these curves for one or more levels of statistical significance. By comparing the type accumulation from women's letters with the bounds, we are able to confirm our hypothesis.
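The authors' program is not reproduced in the abstract; a minimal sketch of the core idea (Monte Carlo permutations of the token stream yielding pointwise significance bounds for a type accumulation curve) might look like the following. Hapax accumulation and corpus-specific sampling units are omitted.

```python
import random

def type_accumulation(tokens):
    """Running count of distinct types after each token."""
    seen, curve = set(), []
    for tok in tokens:
        seen.add(tok)
        curve.append(len(seen))
    return curve

def mc_bounds(tokens, n_iter=1000, alpha=0.05, seed=0):
    """Pointwise Monte Carlo lower/upper bounds for the accumulation curve."""
    rng = random.Random(seed)
    toks = list(tokens)
    curves = []
    for _ in range(n_iter):
        rng.shuffle(toks)
        curves.append(type_accumulation(toks))
    lo_i, hi_i = int(alpha / 2 * n_iter), int((1 - alpha / 2) * n_iter) - 1
    lower, upper = [], []
    for i in range(len(toks)):
        col = sorted(c[i] for c in curves)
        lower.append(col[lo_i])
        upper.append(col[hi_i])
    return lower, upper

# An observed sub-corpus curve running below `lower` indicates significantly
# low productivity at the chosen alpha, which is the hypothesis tested above.
```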
Abstract:
A general procedure for arriving at 3-D models of disulphide-rich polypeptide systems based on the covalent cross-link constraints has been developed. The procedure, which has been coded as a computer program, RANMOD, assigns a large number of random, permitted backbone conformations to the polypeptide and identifies stereochemically acceptable structures as plausible models based on strainless disulphide bridge modelling. Disulphide bond modelling is performed using the procedure MODIP, developed earlier in connection with the choice of suitable sites where disulphide bonds could be engineered in proteins (Sowdhamini, R., Srinivasan, N., Shoichet, B., Santi, D.V., Ramakrishnan, C. and Balaram, P. (1989) Protein Engng, 3, 95-103). The method RANMOD has been tested on small disulphide loops and the structures compared against preferred backbone conformations derived from an analysis of a putative disulphide subdatabase and from model calculations. RANMOD has been applied to disulphide-rich peptides and found to give rise to several stereochemically acceptable structures. The results obtained on the modelling of two test cases, α-conotoxin GI and endothelin I, are presented. Available NMR data suggest that such small systems exhibit conformational heterogeneity in solution. Hence, this approach for obtaining several distinct models is particularly attractive for the study of conformational excursions.
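RANMOD and MODIP themselves are not listed in the abstract. The sketch below only illustrates the general recipe (assign random backbone dihedrals, build coordinates from idealized geometry, keep conformers whose terminal residues could plausibly be disulphide-bridged), using a crude Cα-Cα distance window in place of MODIP's full stereochemical criteria; all cutoffs and geometry values are illustrative assumptions, and steric-clash screening is omitted.

```python
import numpy as np

def place_atom(a, b, c, bond, angle, torsion):
    """NeRF placement of an atom from three predecessors and internal coordinates (radians)."""
    bc = (c - b) / np.linalg.norm(c - b)
    n = np.cross(b - a, bc); n /= np.linalg.norm(n)
    m = np.cross(n, bc)
    d = bond * np.array([-np.cos(angle),
                         np.sin(angle) * np.cos(torsion),
                         np.sin(angle) * np.sin(torsion)])
    return c + d[0] * bc + d[1] * m + d[2] * n

# Idealized backbone geometry (lengths in angstroms, angles in degrees)
B_N_CA, B_CA_C, B_C_N = 1.46, 1.52, 1.33
A_N_CA_C, A_CA_C_N, A_C_N_CA = 111.0, 117.0, 121.0

def random_backbone(n_res, rng):
    """Build N, CA, C coordinates with random phi/psi and trans peptide bonds."""
    rad = np.deg2rad
    atoms = [np.array([0.0, 0.0, 0.0]),                       # N
             np.array([B_N_CA, 0.0, 0.0]),                    # CA
             place_atom(np.array([-1.0, 1.0, 0.0]), np.zeros(3),
                        np.array([B_N_CA, 0.0, 0.0]),
                        B_CA_C, rad(A_N_CA_C), rad(180.0))]   # C
    for _ in range(n_res - 1):
        phi, psi = rng.uniform(-np.pi, np.pi, 2)
        a, b, c = atoms[-3], atoms[-2], atoms[-1]
        atoms.append(place_atom(a, b, c, B_C_N, rad(A_CA_C_N), psi))           # next N
        atoms.append(place_atom(b, c, atoms[-1], B_N_CA, rad(A_C_N_CA), np.pi))  # next CA, omega = 180
        atoms.append(place_atom(c, atoms[-2], atoms[-1], B_CA_C, rad(A_N_CA_C), phi))  # next C
    return np.array(atoms)   # order: N, CA, C per residue

def plausible_models(n_res=8, n_trials=2000, seed=1):
    """Keep conformers whose first and last CA lie in a disulphide-compatible window."""
    rng = np.random.default_rng(seed)
    kept = []
    for _ in range(n_trials):
        xyz = random_backbone(n_res, rng)
        if 4.0 < np.linalg.norm(xyz[1] - xyz[-2]) < 7.0:   # crude CA-CA criterion
            kept.append(xyz)
    return kept

print(len(plausible_models()), "loop-closing conformers kept")
```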
Abstract:
A hybrid technique to model two-dimensional fracture problems, which makes use of displacement discontinuity elements together with the direct boundary element method, is presented. The direct boundary element method is used to model the finite domain of the body, while displacement discontinuity elements are utilized to represent the cracks; thus the advantages of the component methods are effectively combined. The method has been implemented in a computer program, and numerical results which show the accuracy of the present method are presented. The cases of bodies containing edge cracks as well as multiple cracks are considered. A direct method and an iterative technique are described. The present hybrid method is most suitable for modeling problems involving crack propagation.
Abstract:
Parallel sub-word recognition (PSWR) is a new model that has been proposed for language identification (LID) which does not need elaborate phonetic labeling of the speech data in a foreign language. The new approach performs a front-end tokenization in terms of sub-word units which are designed by automatic segmentation, segment clustering and segment HMM modeling. We develop PSWR based LID in a framework similar to the parallel phone recognition (PPR) approach in the literature. This includes a front-end tokenizer and a back-end language model, for each language to be identified. Considering various combinations of the statistical evaluation scores, it is found that PSWR can perform as well as PPR, even with broad acoustic sub-word tokenization, thus making it an efficient alternative to the PPR system.
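No code accompanies the abstract. The back-end decision step (one language model per language over front-end token sequences, decision by maximum log-likelihood) can be sketched as below, with a toy add-one-smoothed bigram model standing in for the segment-HMM tokenizer and language model actually used, and a single shared token stream in place of per-language tokenizers.

```python
import math
from collections import defaultdict

class BigramLM:
    """Add-one-smoothed bigram model over sub-word token sequences."""
    def __init__(self):
        self.bigrams = defaultdict(int)
        self.unigrams = defaultdict(int)
        self.vocab = set()

    def train(self, sequences):
        for seq in sequences:
            for prev, cur in zip(["<s>"] + seq, seq + ["</s>"]):
                self.bigrams[(prev, cur)] += 1
                self.unigrams[prev] += 1
                self.vocab.update((prev, cur))

    def logprob(self, seq):
        V = len(self.vocab)
        return sum(math.log((self.bigrams[(p, c)] + 1) / (self.unigrams[p] + V))
                   for p, c in zip(["<s>"] + seq, seq + ["</s>"]))

def identify(token_seqs_by_lang, utterance_tokens):
    """Train one LM per language and return the best-scoring language."""
    models = {}
    for lang, seqs in token_seqs_by_lang.items():
        lm = BigramLM(); lm.train(seqs); models[lang] = lm
    return max(models, key=lambda lang: models[lang].logprob(utterance_tokens))

# Toy usage: in PSWR the tokens would come from each language's own tokenizer.
data = {"L1": [["a", "b", "a"], ["a", "b"]], "L2": [["c", "c", "b"], ["c", "b"]]}
print(identify(data, ["a", "b", "a"]))   # -> "L1"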
Abstract:
Electrical Impedance Tomography (EIT) is a computerized medical imaging technique which reconstructs the electrical impedance image of a domain under test from boundary voltage-current data measured by EIT instrumentation, using an image reconstruction algorithm. Being a computed tomography technique, EIT injects a constant current into the patient's body through surface electrodes surrounding the domain to be imaged (Omega) and calculates the spatial distribution of electrical conductivity or resistivity of the closed conducting domain from the potentials developed at the domain boundary (∂Omega).

Practical phantoms are essential to study, test and calibrate a medical EIT system before it is applied to patients for diagnostic imaging: they generate the boundary data needed to assess the instrumentation and the inverse solvers in EIT. For proper assessment of the inverse solver of a 2D EIT system, a perfect 2D practical phantom is required. Since practical phantoms are assemblies of objects with 3D geometries, developing a practical 2D phantom is a great challenge, and boundary data generated from practical phantoms with 3D geometry prove inappropriate for assessing a 2D inverse solver. Furthermore, the boundary data errors contributed by the instrumentation are difficult to separate from the errors introduced by the 3D phantoms. Hence, error-free boundary data are essential for assessing the inverse solver in 2D EIT.

In this direction, a MATLAB-based Virtual Phantom for 2D EIT (MatVP2DEIT) is developed to generate accurate boundary data for assessing 2D-EIT inverse solvers and image reconstruction accuracy. MatVP2DEIT is a MATLAB-based computer program which simulates a phantom in the computer and outputs boundary potential data computed from combinations of phantom parameters supplied as inputs. Phantom diameter, inhomogeneity geometry (shape, size and position), number of inhomogeneities, applied current magnitude, background resistivity and inhomogeneity resistivity are all phantom variables provided as input parameters to MatVP2DEIT for simulating different phantom configurations. A constant current injection is simulated at the phantom boundary with different current injection protocols, and the boundary potential data are calculated. Boundary data sets are generated for different phantom configurations obtained from different combinations of the phantom variables, and the resistivity images are reconstructed using EIDORS. Boundary data of virtual phantoms containing inhomogeneities with complex geometries are also generated for different current injection patterns using MatVP2DEIT, and the resistivity imaging is studied. The effect of the regularization method on image reconstruction is also studied with the data generated by MatVP2DEIT. Resistivity images are evaluated by studying the resistivity and contrast parameters estimated from the elemental resistivity profiles of the reconstructed phantom domain. Results show that MatVP2DEIT generates accurate boundary data for different types of single or multiple objects, efficient and accurate enough to reconstruct the resistivity images in EIDORS.
The spatial resolution studies show that resistivity imaging conducted with boundary data generated by MatVP2DEIT with 2048 elements can reconstruct two circular inhomogeneities placed with a minimum boundary-to-boundary distance of 2 mm. It is also observed that, in MatVP2DEIT with 2048 elements, the boundary data generated for a phantom with a circular inhomogeneity of diameter less than 7% of that of the phantom domain can produce resistivity images in EIDORS with a 1968-element mesh. Results also show that MatVP2DEIT accurately generates boundary data for the neighbouring, opposite-reference and trigonometric current patterns, which are very suitable for resistivity reconstruction studies. MatVP2DEIT-generated data are also found suitable for studying the effect of different regularization methods on the reconstruction process. By comparing the reconstructed image with the original geometry defined in MatVP2DEIT, the resistivity imaging procedure and the inverse solver performance can be studied more easily. Using the proposed MatVP2DEIT software with modified domains, the cross-sectional anatomy of a number of body parts can be simulated on a PC and the impedance image reconstruction of human anatomy can be studied.
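MatVP2DEIT itself is not listed, but the three current-injection protocols it simulates are standard, and generating them is straightforward. A sketch for a ring of boundary electrodes (electrode count and amplitude are arbitrary choices here, not values from the paper):

```python
import numpy as np

def neighbouring_patterns(n_elec=16, amp=1.0):
    """Adjacent-pair injection: +I at electrode k, -I at electrode k+1."""
    P = np.zeros((n_elec, n_elec))
    for k in range(n_elec):
        P[k, k], P[k, (k + 1) % n_elec] = amp, -amp
    return P

def opposite_patterns(n_elec=16, amp=1.0):
    """Diametric injection: +I at electrode k, -I at the opposite electrode."""
    P = np.zeros((n_elec, n_elec))
    for k in range(n_elec):
        P[k, k], P[k, (k + n_elec // 2) % n_elec] = amp, -amp
    return P

def trigonometric_patterns(n_elec=16, amp=1.0):
    """Trigonometric patterns: I_k(e) = amp*cos(k*theta_e) and amp*sin(k*theta_e)."""
    theta = 2 * np.pi * np.arange(n_elec) / n_elec
    rows = [amp * np.cos(k * theta) for k in range(1, n_elec // 2 + 1)]
    rows += [amp * np.sin(k * theta) for k in range(1, n_elec // 2)]
    return np.array(rows)

# Each row is one pattern; every row sums to zero, as total injected current must.
print(trigonometric_patterns().shape)   # (15, 16): n-1 independent patterns
```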
Abstract:
A comprehensive model of laser propagation in the atmosphere with a complete adaptive optics (AO) system for phase compensation is presented, and a corresponding computer program is compiled. A direct wave-front gradient control method is used to reconstruct the wave-front phase. With the long-exposure Strehl ratio as the evaluation parameter, a numerical simulation of an AO system in a stationary state with the atmospheric propagation of a laser beam was conducted. It was found that for certain conditions the phase screen that describes turbulence in the atmosphere might not be isotropic. Numerical experiments show that the computational results in imaging of lenses by means of the fast Fourier transform (FFT) method agree well with those computed by means of an integration method; however, the computer time required for the FFT method is an order of magnitude less than that of the integration method. Phase tailoring of the calculated phase is presented as a means to solve the problem that the variance of the calculated residual phase does not correspond to the correction effectiveness of an AO system. It is found, for the first time to our knowledge, that for a constant delay time of an AO system, when the lateral wind speed exceeds a threshold, the compensation effectiveness of an AO system is better than that of complete phase conjugation. This finding indicates that better compensation capability of an AO system does not mean better correction effectiveness. (C) 2000 Optical Society of America.
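The paper's program is not available from the abstract, but the FFT imaging step it benchmarks is the standard Fraunhofer relation (focal-plane field = Fourier transform of the pupil field), and the Strehl ratio is the peak intensity relative to the unaberrated case. A small sketch, with grid size, padding, and the test aberration chosen arbitrarily:

```python
import numpy as np

def far_field(pupil, phase, pad=4):
    """Focal-plane intensity of a pupil field via zero-padded FFT (Fraunhofer)."""
    n = pupil.shape[0]
    field = np.zeros((pad * n, pad * n), dtype=complex)
    field[:n, :n] = pupil * np.exp(1j * phase)
    return np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2

def strehl_ratio(pupil, phase):
    """Peak intensity with aberration over peak intensity without."""
    return far_field(pupil, phase).max() / far_field(pupil, np.zeros_like(phase)).max()

# Circular pupil with a weak astigmatism-like phase (illustrative only)
n = 64
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
pupil = (x**2 + y**2 <= 1.0).astype(float)
phase = 0.8 * (x**2 - y**2)          # radians
print(f"Strehl ratio: {strehl_ratio(pupil, phase):.3f}")
```

Zero padding controls focal-plane sampling; the integration method the paper compares against would evaluate the same diffraction integral by direct quadrature, which is what makes it an order of magnitude slower.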
Abstract:
It is well known that noise and detection error can affect the performance of an adaptive optics (AO) system. Effects of noise and detection error on the phase compensation effectiveness in a dynamic AO system are investigated here by means of a purely numerical simulation. A theoretical model for numerically simulating effects of noise and detection error in a static AO system, and a corresponding computer program, were presented in a previous article. That noise model is combined in this paper with our previous numerical simulation of a dynamic AO system, and a corresponding computer program has been compiled. Effects of detection error, readout noise and photon noise are included and investigated by numerical simulation to find the preferred working conditions and the best performance of a practical dynamic AO system. An approximate model is presented as well; under many practical conditions this approximate model is a good alternative to the more accurate one. A simple algorithm for reducing the effect of noise is also presented; when the signal-to-noise ratio is very low, it can be used to improve the performance of a dynamic AO system.
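As a minimal illustration of how photon noise, readout noise, and the resulting detection (centroid) error enter such a simulation, consider a single Shack-Hartmann-like subaperture spot; all parameters are placeholders, and the paper's actual sensor model may differ.

```python
import numpy as np

rng = np.random.default_rng(42)

def noisy_detection(spot, read_noise_e=3.0):
    """Apply photon (Poisson) noise, then Gaussian readout noise, to a spot image."""
    img = rng.poisson(spot).astype(float)             # photon noise
    img += rng.normal(0.0, read_noise_e, spot.shape)  # readout noise
    return img

def centroid(img):
    """Center-of-mass spot position; clip negatives so noise cannot flip the sign."""
    img = np.clip(img, 0.0, None)
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = img.sum()
    return x.ravel() @ img.ravel() / total, y.ravel() @ img.ravel() / total

# Gaussian spot carrying ~200 detected photons, true center at (7.3, 8.1)
n = 16
y, x = np.mgrid[0:n, 0:n]
spot = 200.0 * np.exp(-((x - 7.3)**2 + (y - 8.1)**2) / 4.0) / (4.0 * np.pi)

errs = []
for _ in range(100):
    cx, cy = centroid(noisy_detection(spot))
    errs.append(np.hypot(cx - 7.3, cy - 8.1))
print(f"mean centroid error: {np.mean(errs):.3f} pixels")
```

Propagating such per-subaperture slope errors through the wave-front reconstructor is what degrades the residual phase, and averaging or thresholding the spot images before centroiding is the kind of simple noise-reduction step the abstract alludes to.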
Abstract:
EXECUTIVE SUMMARY
WORKSHOP OVERVIEW
  Introduction
  Goals and objectives of the workshop
  Organizing committee, participants, sponsors and venue
  Workshop activity
NEMURO.FISH COUPLED WITH A POPULATION DYNAMICS MODEL (SAURY)
  Introduction
  One-cohort case with no reproduction
  Two (overlapping) cohort scenario with no reproduction
  Two-cohort case with no reproduction and body-size-dependent mortality
  Two-cohort case with reproduction and KL-dependent mortality
  Conclusions and future perspectives
LAGRANGIAN MODEL OF NEMURO.FISH
  Tasks and members
  Description of model and preliminary results
  Future tasks
COUPLING NEMURO TO HERRING BIOENERGETICS
  Overview
  Details of the NEMURO_Herring model
  Example simulation of NEMURO_Herring
  Future plans
REFERENCES
APPENDICES
  Workshop participants
  Workshop schedule
  Lagrangian model (FORTRAN program)
(55-page document)
Abstract:
Chapter I
Theories for organic donor-acceptor (DA) complexes in solution and in the solid state are reviewed and compared with the available experimental data. As shown by McConnell et al. (Proc. Natl. Acad. Sci. U.S., 53, 46-50 (1965)), DA crystals fall into two classes: the holoionic class, with a fully or almost fully ionic ground state, and the nonionic class, with little or no ionic character. If the total lattice binding energy 2ε1 (per DA pair) gained in ionizing a DA lattice exceeds the cost 2ε0 of ionizing each DA pair, i.e. if ε1 + ε0 < 0, then the lattice is holoionic. The charge-transfer (CT) band in crystals and in solution can be explained, following Mulliken, by a second-order mixing of states, or by any theory that makes the CT transition strongly allowed and yet due to a small change in the ground state of the non-interacting components D and A (or D+ and A-). The magnetic properties of the DA crystals are discussed.
Chapter II
A computer program, EWALD, was written to calculate by the Ewald fast-convergence method the crystal Coulomb binding energy EC due to classical monopole-monopole interactions for crystals of any symmetry. The precision of EC values obtained is high: the uncertainties, estimated by the effect on EC of changing the Ewald convergence parameter η, ranged from ± 0.00002 eV to ± 0.01 eV in the worst case. The charge distribution for organic ions was idealized as fractional point charges localized at the crystallographic atomic positions: these charges were chosen from available theoretical and experimental estimates. The uncertainty in EC due to different charge distribution models is typically ± 0.1 eV (± 3%): thus, even the simple Hückel model can give decent results.
EC for Wurster's Blue Perchlorate is -4.1 eV/molecule: the crystal is stable under the binding provided by direct Coulomb interactions. EC for N-Methylphenazinium Tetracyanoquinodimethanide is 0.1 eV: exchange Coulomb interactions, which cannot be estimated classically, must provide the necessary binding.
EWALD was also used to test the McConnell classification of DA crystals. For the holoionic (1:1)-(N,N,N',N'-Tetramethyl-para-phenylenediamine:7,7,8,8-Tetracyanoquinodimethan), EC = -4.0 eV while 2ε0 = 4.65 eV: clearly, exchange forces must provide the balance. For the holoionic (1:1)-(N,N,N',N'-Tetramethyl-para-phenylenediamine:para-Chloranil), EC = -4.4 eV while 2ε0 = 5.0 eV: again EC falls short of 2ε0. As a Gedankenexperiment, two nonionic crystals were assumed to be ionized: for (1:1)-(Hexamethylbenzene:para-Chloranil), EC = -4.5 eV, 2ε0 = 6.6 eV; for (1:1)-(Naphthalene:Tetracyanoethylene), EC = -4.3 eV, 2ε0 = 6.5 eV. Thus, exchange energies in these nonionic crystals must not exceed 1 eV.
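The thesis' EWALD program is not reproduced here, but the Ewald decomposition it applies is textbook material. A compact sketch for fractional point charges in a cubic cell (real-space, reciprocal-space, and self terms; the factor 14.3996 converts e^2/angstrom to eV); handling of arbitrary cell symmetry, and the η-variation used above to estimate precision, are omitted:

```python
import numpy as np
from math import erfc, exp, pi, sqrt

EV_PER_E2_ANG = 14.3996   # e^2/angstrom expressed in eV

def ewald_energy(pos, q, L, eta=None, n_shells=2, kmax=6):
    """Coulomb lattice energy per cell (eV) for neutral point charges in a cubic cell of side L."""
    pos, q = np.asarray(pos, float), np.asarray(q, float)
    if eta is None:
        eta = 5.0 / L                      # convergence parameter, 1/angstrom
    n = len(q)
    shells = range(-n_shells, n_shells + 1)
    e_real = 0.0                           # short-range, erfc-screened sum
    for i in range(n):
        for j in range(n):
            for nx in shells:
                for ny in shells:
                    for nz in shells:
                        if i == j and nx == ny == nz == 0:
                            continue
                        r = np.linalg.norm(pos[i] - pos[j] + L * np.array([nx, ny, nz]))
                        e_real += 0.5 * q[i] * q[j] * erfc(eta * r) / r
    e_recip = 0.0                          # long-range sum over reciprocal vectors
    for kx in range(-kmax, kmax + 1):
        for ky in range(-kmax, kmax + 1):
            for kz in range(-kmax, kmax + 1):
                if kx == ky == kz == 0:
                    continue
                k = (2.0 * pi / L) * np.array([kx, ky, kz])
                k2 = k @ k
                S = np.sum(q * np.exp(1j * (pos @ k)))   # structure factor
                e_recip += (2.0 * pi / L**3) * exp(-k2 / (4.0 * eta**2)) / k2 * abs(S)**2
    e_self = -eta / sqrt(pi) * np.sum(q**2)
    return (e_real + e_recip + e_self) * EV_PER_E2_ANG

# Rock-salt sanity check: recover the NaCl Madelung constant
a = 5.64   # NaCl lattice constant, angstroms
basis = np.array([[0, 0, 0], [.5, .5, 0], [.5, 0, .5], [0, .5, .5],
                  [.5, 0, 0], [0, .5, 0], [0, 0, .5], [.5, .5, .5]]) * a
charges = np.array([1, 1, 1, 1, -1, -1, -1, -1], float)
e = ewald_energy(basis, charges, a)
alpha = -(e / 4) * (a / 2) / EV_PER_E2_ANG     # 4 ion pairs per cell
print(f"E = {e:.3f} eV/cell, Madelung constant = {alpha:.4f} (literature: 1.7476)")
```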
Chapter III
A rapid-convergence quantum-mechanical formalism is derived to calculate the electronic energy of an arbitrary molecular (or molecular-ion) crystal: this provides estimates of crystal binding energies which include the exchange Coulomb interactions. Previously obtained LCAO-MO wavefunctions for the isolated molecule(s) ("unit cell spin-orbitals") provide the starting point. Bloch's theorem is used to construct "crystal spin-orbitals". Overlap between the unit cell orbitals localized in different unit cells is neglected, or is eliminated by Löwdin orthogonalization. Then simple formulas for the total kinetic energy Q_λ^XT, nuclear attraction [λ/λ]^XT, direct Coulomb [λλ/λ'λ']^XT and exchange Coulomb [λλ'/λ'λ]^XT integrals are obtained, and direct-space brute-force expansions in atomic wavefunctions are given. Fourier series are obtained for [λ/λ]^XT, [λλ/λ'λ']^XT and [λλ'/λ'λ]^XT with the help of the convolution theorem; the Fourier coefficients require the evaluation of Silverstone's two-center Fourier transform integrals. If the short-range interactions are calculated by brute-force integrations in direct space, and the long-range effects are summed in Fourier space, then rapid convergence is possible for [λ/λ]^XT, [λλ/λ'λ']^XT and [λλ'/λ'λ]^XT. This is achieved, as in the Ewald method, by modifying each atomic wavefunction by a "Gaussian convergence acceleration factor" and evaluating separately, in direct and in Fourier space, appropriate portions of [λ/λ]^XT, etc., where some of the portions contain the Gaussian factor.
Abstract:
The 0.2% experimental accuracy of the 1968 Beers and Hughes measurement of the annihilation lifetime of ortho-positronium motivates the attempt to compute the first-order quantum electrodynamic corrections to this lifetime. The theoretical problems arising in this computation are here studied in detail, up to the point of preparing the necessary computer programs and using them to carry out some of the less demanding steps; the computation has not yet been completed. Analytic evaluation of the contributing Feynman diagrams is superior to numerical evaluation, and can be carried out with the aid of the Reduce algebra manipulation computer program.
The relation of the positronium decay rate to the electron-positron annihilation-in-flight amplitude is derived in detail, and it is shown that at threshold annihilation-in-flight, Coulomb divergences appear while infrared divergences vanish. The threshold Coulomb divergences in the amplitude cancel against like divergences in the modulating continuum wave function.
Using the lowest order diagrams of electron-positron annihilation into three photons as a test case, various pitfalls of computer algebraic manipulation are discussed along with ways of avoiding them. The computer manipulation of artificial polynomial expressions is preferable to the direct treatment of rational expressions, even though redundant variables may have to be introduced.
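A small illustration of that point in a modern algebra system (sympy here, standing in for Reduce): treating a propagator denominator as an extra polynomial variable keeps intermediate expressions polynomial, and the rational form is restored only once at the end. The symbols are hypothetical stand-ins, not the thesis' actual expressions.

```python
import sympy as sp

k2, m2, D1 = sp.symbols('k2 m2 D1')

# Treat the propagator 1/(k2 - m2) as the redundant polynomial variable D1,
# so all intermediate algebra is cheap polynomial expansion.
amp = (k2 * D1 + 3) * (m2 * D1 - 1)
amp = sp.expand(amp)

# Substitute the rational meaning back in and simplify once, at the end.
result = sp.cancel(amp.subs(D1, 1 / (k2 - m2)))
print(result)
```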
Special properties of the contributing Feynman diagrams are discussed, including the need to restore gauge invariance to the sum of the virtual photon-photon scattering box diagrams by means of a finite subtraction.
A systematic approach to the Feynman-Brown method of decomposition of single-loop diagram integrals with spin-related tensor numerators is developed in detail. This approach allows the Feynman-Brown method to be straightforwardly programmed in the Reduce algebra manipulation language.
The fundamental integrals needed in the wake of the application of the Feynman-Brown decomposition are exhibited, and the methods which were used to evaluate them -- primarily dispersion techniques -- are briefly discussed.
Finally, it is pointed out that while the techniques discussed have permitted the computation of a fair number of the simpler integrals and diagrams contributing to the first order correction of the ortho-positronium annihilation rate, further progress with the more complicated diagrams and with the evaluation of traces is heavily contingent on obtaining access to adequate computer time and core capacity.
Abstract:
Several types of seismological data, including surface wave group and phase velocities, travel times from large explosions, and teleseismic travel time anomalies, have indicated that there are significant regional variations in the upper few hundred kilometers of the mantle beneath continental areas. Body wave travel times and amplitudes from large chemical and nuclear explosions are used in this study to delineate the details of these variations beneath North America.
As a preliminary step in this study, theoretical P wave travel times, apparent velocities, and amplitudes have been calculated for a number of proposed upper mantle models, those of Gutenberg, Jeffreys, Lehmann, and Lukk and Nersesov. These quantities have been calculated for both P and S waves for model CIT11GB, which is derived from surface wave dispersion data. First arrival times for all the models except that of Lukk and Nersesov are in close agreement, but the travel time curves for later arrivals are both qualitatively and quantitatively very different. For model CIT11GB, there are two large, overlapping regions of triplication of the travel time curve, produced by regions of rapid velocity increase near depths of 400 and 600 km. Throughout the distance range from 10 to 40 degrees, the later arrivals produced by these discontinuities have larger amplitudes than the first arrivals. The amplitudes of body waves, in fact, are extremely sensitive to small variations in the velocity structure, and provide a powerful tool for studying structural details.
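The thesis' programs appear in its appendices rather than here. As a sketch of the forward step (distance X and travel time T as functions of ray parameter p, with thin constant-velocity layers approximating a smooth gradient, and a flat earth in place of the spherical geometry actually used), one might write the standard layer sums below; the velocity model is a made-up example, not CIT11GB.

```python
import numpy as np

def x_t_for_p(p, thickness, velocity):
    """Surface-to-surface distance X (km) and time T (s) for ray parameter p (s/km).

    Straight-ray segments through constant-velocity layers; the ray turns where
    p * v reaches 1. Thin layers approximate a continuous velocity gradient.
    """
    X = T = 0.0
    for h, v in zip(thickness, velocity):
        if p * v >= 1.0:
            break                      # turning point reached
        eta = np.sqrt(1.0 - (p * v) ** 2)
        X += 2.0 * h * p * v / eta
        T += 2.0 * h / (v * eta)
    return X, T

# Toy model: gentle gradient plus a rapid velocity increase near 400 km depth,
# which produces a triplication (X is non-monotonic in p) as described above.
thickness = np.full(200, 4.0)                      # 200 layers x 4 km
depth = np.cumsum(thickness)
velocity = 8.0 + 0.002 * depth + 0.8 / (1 + np.exp(-(depth - 400) / 10))

for p in np.linspace(1.0 / velocity[-1] + 1e-4, 1.0 / velocity[0] - 1e-4, 12):
    X, T = x_t_for_p(p, thickness, velocity)
    print(f"p = {p:.5f} s/km   X = {X:7.1f} km   T = {T:6.1f} s")
```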
Most of eastern North America, including the Canadian Shield, has a Pn velocity of about 8.1 km/sec, with a nearly abrupt increase in compressional velocity of ~0.3 km/sec at a depth varying regionally between 60 and 90 km. Variations in the structure of this part of the mantle are significant even within the Canadian Shield. The low-velocity zone is a minor feature in eastern North America and is subject to pronounced regional variations: it is 30 to 50 km thick, and occurs somewhere in the depth range from 80 to 160 km. The velocity decrease is less than 0.2 km/sec.
Consideration of the absolute amplitudes indicates that the attenuation due to anelasticity is negligible for 2 Hz waves in the upper 200 km along the southeastern and southwestern margins of the Canadian Shield. For compressional waves the average Q for this region is > 3000. The amplitudes also indicate that the velocity gradient is at least 2 x 10^-3 both above and below the low-velocity zone, implying that the temperature gradient is < 4.8°C/km if the regions are chemically homogeneous.
In western North America, the low-velocity zone is a pronounced feature, extending to the base of the crust and having minimum velocities of 7.7 to 7.8 km/sec. Beneath the Colorado Plateau and Southern Rocky Mountains provinces, there is a rapid velocity increase of about 0.3 km/sec, similar to that observed in eastern North America, but near a depth of 100 km.
Complicated travel time curves observed on profiles with stations in both eastern and western North America can be explained in detail by a model taking into account the lateral variations in the structure of the low-velocity zone. These variations involve primarily the velocity within the zone and the depth to the top of the zone; the depth to the bottom is, for both regions, between 140 and 160 km.
The depth to the transition zone near 400 km also varies regionally, by about 30-40 km. These differences imply variations of 250 °C in the temperature or 6 % in the iron content of the mantle, if the phase transformation of olivine to the spinel structure is assumed responsible. The structural variations at this depth are not correlated with those at shallower depths, and follow no obvious simple pattern.
The computer programs used in this study are described in the Appendices. The program TTINV (Appendix IV) fits spherically symmetric earth models to observed travel time data. The method, described in Appendix III, resembles conventional least-squares fitting, using partial derivatives of the travel time with respect to the model parameters to perturb an initial model. The usual ill-conditioned nature of least-squares techniques is avoided by a technique which minimizes both the travel time residuals and the model perturbations.
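TTINV itself lives in Appendix IV; the damping idea described (minimize travel-time residuals and model perturbations together) is what is now usually written as a damped least-squares step. A sketch with arbitrary names, where the forward solver and partial derivatives are assumed to come from ray tracing:

```python
import numpy as np

def damped_step(G, residuals, damping):
    """One damped least-squares model update.

    Minimizes ||G dm - r||^2 + damping^2 ||dm||^2 by solving the augmented
    system, which stays well conditioned even when G^T G is nearly singular.
    """
    n_par = G.shape[1]
    A = np.vstack([G, damping * np.eye(n_par)])
    b = np.concatenate([residuals, np.zeros(n_par)])
    dm, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dm

# Iteration skeleton (forward() and partials() are hypothetical placeholders):
# model = initial_model
# for _ in range(n_iter):
#     r = observed_times - forward(model)      # travel-time residuals
#     G = partials(model)                      # d(time)/d(parameter) matrix
#     model = model + damped_step(G, r, damping=0.1)
```

The damping term is exactly the trade-off the abstract describes: large damping favors small model perturbations, small damping favors small residuals.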
Spherically symmetric earth models, however, have been found inadequate to explain most of the observed travel times in this study. TVT4, a computer program that performs ray theory calculations for a laterally inhomogeneous earth model, is described in Appendix II. Appendix I gives a derivation of seismic ray theory for an arbitrarily inhomogeneous earth model.
Abstract:
The Bengal Basin, located northeast of India, has an extraordinary evolutionary history, directly controlled by the fragmentation of Gondwana. The beginning of the basin's formation is considered to be related to the final breakup event, dated at 126 Ma, when India separated from the Antarctic continent and from Australia. Since then, the Indian continental plate has travelled from the south pole at a very high speed (16 cm/yr), colliding with the northern hemisphere and amalgamating with the Eurasian Plate. During this journey it passed over a hot spot, where the Seychelles islands are located today, producing one of the largest basaltic lava flows in the world, known as the Deccan Traps. In the region where the Bengal Basin formed, there was no significant input of siliciclastic sediment, resulting in the deposition of a thick carbonate platform from the Late Cretaceous to the Eocene. After this period, owing to the collision with some microplates and the amalgamation with the Eurasian Plate, a large volume of siliciclastic sediment was introduced into the basin, associated also with the uplift of the Himalayan mountain chain. Today the Bengal Basin holds more than 25 km of sediment accumulated in this main depocenter. In this dissertation, basic concepts of seismic stratigraphy were applied to the interpretation of several regional lines. The seismic lines used were recently acquired by a special seismic program, which allowed seismic imaging to more than 35 km into the lithosphere (continental and transitional crust). The data made it possible to interpret tectonic events, such as the presence of Seaward-Dipping Reflectors (SDR) in the transitional crust, covered by sediments of the Bengal Basin. In addition to the seismic interpretation tied to some control wells, the Beicip-Franlab Dionisos sedimentary modeling program was used to model the fill history of the basin over a period of 5.2 Ma. Relative sea level and sediment supply rate were the key controls considered in the model. Using the seismic data, it was possible to recognize ten main shelf breaks, which were used in the model, tied to their respective geological ages derived from well data spanning the Pliocene to the Holocene. The model results show that the first half of the modeled interval can be considered a retrogradational depositional system, with some transgressive peaks. This system then changes drastically to a progradational system, which persisted until the Holocene. The modeled section also shows that over the period considered the total volume deposited was around 2.1 x 10^6 km^3, equivalent to 9.41 x 10^14 km^3/Ma.
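Dionisos is a commercial, multi-lithology, diffusion-based stratigraphic modeler, so its actual formulation is far richer than can be shown here. As a toy illustration of the two key controls named in the abstract (relative sea level and sediment supply), a one-dimensional diffusion fill model might look like the following; every number is a placeholder.

```python
import numpy as np

def diffusive_fill(nx=100, dx=1.0, kappa=0.2, dt=1.0, nsteps=2000,
                   supply=0.02, sea_level=0.0):
    """1-D diffusion-based basin filling: dh/dt = kappa * d2h/dx2 + source.

    Sediment enters at the landward edge (x = 0); the distal node is held
    fixed. Explicit scheme, stable while kappa*dt/dx^2 <= 0.5.
    """
    h = -np.linspace(0.0, 5.0, nx)               # initial ramp dipping basinward
    for _ in range(nsteps):
        lap = np.zeros(nx)
        lap[1:-1] = (h[2:] - 2.0 * h[1:-1] + h[:-2]) / dx**2
        lap[0] = (h[1] - h[0]) / dx**2           # closed landward boundary
        h += dt * kappa * lap
        h[0] += dt * supply / dx                 # sediment supply flux at x = 0
    return h

h = diffusive_fill()
shoreline = np.argmax(h < 0.0)                   # first node below sea level
print(f"shoreline at node {shoreline} of 100")
```

Raising sea level or cutting supply moves the shoreline landward (retrogradation); the opposite drives the progradation described in the second half of the modeled interval.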
Abstract:
Coherent ecological networks (EN) composed of core areas linked by ecological corridors are being developed worldwide with the goal of promoting landscape connectivity and biodiversity conservation. However, empirical assessment of the performance of EN designs is critical to evaluate the utility of these networks in mitigating the effects of habitat loss and fragmentation. Landscape genetics provides a particularly valuable framework to address the question of functional connectivity by providing a direct means to investigate the effects of landscape structure on gene flow. The goals of this study are (1) to evaluate the landscape features that drive gene flow of an EN target species (the European pine marten), and (2) to evaluate the optimality of a regional EN design in providing connectivity for this species within the Basque Country (northern Spain). Using partial Mantel tests in a reciprocal causal modeling framework, we competed 59 alternative models, including isolation by distance and the regional EN. Our analysis indicated that the regional EN was among the most supported resistance models for the pine marten, but was not the best supported model. Gene flow of the pine marten in northern Spain is facilitated by natural vegetation and resisted by anthropogenic land-cover types and roads. Our results suggest that the regional EN design being implemented in the Basque Country will effectively facilitate gene flow of forest-dwelling species at the regional scale.
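The machinery here rests on the Mantel test between a genetic distance matrix and a landscape resistance matrix. A minimal permutation implementation (a simple, not partial, Mantel test, without the reciprocal-causal bookkeeping used in the study) might look like this, with random matrices standing in for real data:

```python
import numpy as np

def mantel_test(A, B, n_perm=999, seed=0):
    """Permutation Mantel test between two square distance matrices.

    Returns the observed correlation of upper-triangle entries and a one-sided
    p-value from random row/column permutations of A.
    """
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(A, k=1)
    r_obs = np.corrcoef(A[iu], B[iu])[0, 1]
    exceed = 0
    for _ in range(n_perm):
        p = rng.permutation(A.shape[0])
        if np.corrcoef(A[p][:, p][iu], B[iu])[0, 1] >= r_obs:
            exceed += 1
    return r_obs, (exceed + 1) / (n_perm + 1)

# Toy usage with synthetic "resistance" (D) and correlated "genetic" (G) distances
n = 20
D = np.random.default_rng(1).random((n, n)); D = (D + D.T) / 2; np.fill_diagonal(D, 0)
G = D + 0.1 * np.random.default_rng(2).random((n, n)); G = (G + G.T) / 2; np.fill_diagonal(G, 0)
r, p = mantel_test(G, D)
print(f"r = {r:.3f}, p = {p:.3f}")
```

A partial Mantel test adds a third matrix and correlates residuals after removing its effect; reciprocal causal modeling then competes candidate resistance models pairwise by comparing such partial correlations in both directions, which is how the 59 alternatives above were ranked.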