1000 results for Fortran program
Abstract:
Thermodynamic parameters of the atmosphere form part of the input to numerical forecasting models. Usually these parameters are evaluated from a thermodynamic diagram. Here, a technique is developed to evaluate these parameters quickly and accurately using a Fortran program. The technique is tested with four sets of randomly selected data, and the results agree with those from the conventional method. It is superior to the conventional method in three respects: greater accuracy, less computation time, and evaluation of additional parameters. The computation time for all the parameters on a PC AT 286 machine is 11 seconds. This software, with appropriate modifications, can be used for verifying various lines on a thermodynamic diagram.
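As a minimal illustration of the kind of parameter such a program evaluates (this sketch is not the program described above; the variable names and example values are assumptions), the short Fortran program below computes the temperature and pressure of the lifting condensation level from surface temperature, dew point and pressure, using Bolton's (1980) approximation and the dry-adiabatic Poisson relation.

! sketch_lcl.f90 -- illustrative sketch only, not the program described above.
program sketch_lcl
  implicit none
  real :: t, td, p                   ! surface temperature (K), dew point (K), pressure (hPa)
  real :: tlcl, plcl
  real, parameter :: cp_over_rd = 1004.0/287.0

  t  = 300.0                         ! example surface temperature, K
  td = 294.0                         ! example dew point, K
  p  = 1000.0                        ! example surface pressure, hPa

  ! Bolton (1980) approximation for the temperature at the lifting condensation level
  tlcl = 1.0/(1.0/(td - 56.0) + log(t/td)/800.0) + 56.0

  ! pressure at the LCL from the dry-adiabatic (Poisson) relation
  plcl = p*(tlcl/t)**cp_over_rd

  print '(a,f7.2,a,f8.2,a)', 'T_LCL = ', tlcl, ' K,  p_LCL = ', plcl, ' hPa'
end program sketch_lcl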
Abstract:
CIPWFULL is a user-friendly, stand-alone FORTRAN program designed to calculate the comprehensive CIPW normative mineral composition of igneous rocks while strictly adhering to the original formulation of the CIPW protocol. This faithful adherence alleviates inaccuracies in the normative mineral calculations made by programs commonly used by petrologists. Additionally, several of the most important petrological and mineralogical parameters of igneous rocks are calculated by the program. Along with all the regular major oxide elements, all the significant minor elements whose contents can potentially affect the CIPW normative mineral composition are included. CIPWFULL also calculates oxidation ratios for igneous rock samples that have only one oxidation state of iron reported in the specimen analysis. It also provides an option for normalizing analyses to unity on an anhydrous basis in order to facilitate comparison of norms among rock groups. Other capabilities of the program cater for rare situations, such as the presence of cancrinite or the exclusion from the norm calculation of rare rocks like carbonatite. Several mineralogical, petrological and discriminatory parameters and indexes are additionally calculated by the CIPWFULL program. The program is efficient and flexible: it allows a user-defined free-format input of all the chemical species and permits minor elements to be entered as parts per million or as oxide percentages. Results of calculations are printed in a formatted ASCII text file and may optionally be cast into space-delimited text files ready to be imported into general spreadsheet programs. CIPWFULL is DOS-based and is implemented on WINDOWS and mainframe platforms.
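For orientation, the sketch below shows only the first step shared by any CIPW-type calculation, converting major-oxide weight percentages into molecular proportions by dividing by molecular weights; it is not part of CIPWFULL, and the program name, oxide list and analysis are example assumptions.

! sketch_molprop.f90 -- illustrative sketch only, not part of CIPWFULL.
program sketch_molprop
  implicit none
  integer, parameter :: n = 4
  character(len=6) :: oxide(n) = [ 'SiO2  ', 'Al2O3 ', 'CaO   ', 'MgO   ' ]
  real :: wtpct(n) = [ 49.2, 15.8, 11.3, 7.6 ]         ! example analysis, wt%
  real :: molwt(n) = [ 60.08, 101.96, 56.08, 40.30 ]   ! molecular weights, g/mol
  real :: molprop(n)
  integer :: i

  molprop = wtpct/molwt           ! molecular proportions, the input to the norm allocation
  do i = 1, n
     print '(a,2x,f8.5)', oxide(i), molprop(i)
  end do
end program sketch_molprop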
Abstract:
Mode of access: Internet.
Abstract:
Mode of access: Internet.
Abstract:
Errata notice inserted in the book.
Abstract:
"January 1980."
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
We discuss some practical issues related to the use of the Parameterized Expectations Approach (PEA) for solving non-linear stochastic dynamic models with rational expectations. This approach has been applied in models of macroeconomics, financial economics, economic growth, contract theory, etc. It turns out to be a convenient algorithm, especially when there is a large number of state variables and stochastic shocks in the conditional expectations. We address practical issues that arise in applying the algorithm, and we describe a Fortran program implementing it that is available through the internet. We discuss these issues in a battery of six examples.
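To make the steps of the algorithm concrete, the sketch below runs a minimal PEA-style iteration for the simplest stochastic growth model (full depreciation, log utility). It is not the authors' distributed program: the conditional expectation in the Euler equation is parameterized as exp(b1 + b2*log(k) + b3*log(theta)), the regression step is done by least squares in logs as a simplification of the nonlinear regression, and all numerical settings are illustrative assumptions.

! sketch_pea.f90 -- illustrative sketch only, not the authors' distributed program.
program sketch_pea
  implicit none
  integer, parameter :: t_len = 5000, maxit = 200
  double precision, parameter :: alpha = 0.33d0, delta = 0.95d0
  double precision, parameter :: rho = 0.95d0, sigma = 0.02d0, lam = 0.5d0
  double precision :: k(t_len+1), th(t_len), c(t_len), psi(t_len)
  double precision :: b(3), bnew(3), xtx(3,3), xty(3), x(3)
  double precision :: y, u1, u2, eps, diff
  integer :: it, t, i, j

  ! starting values near the known solution of this test model; in practice they
  ! would come from a homotopy that starts at a case that is easy to solve
  b = [ 0.4d0, -0.3d0, -1.0d0 ]
  call random_seed()

  ! simulate the exogenous productivity process once
  th(1) = 1.0d0
  do t = 2, t_len
     call random_number(u1); call random_number(u2)
     u1 = max(u1, 1.0d-12)
     eps = sigma*sqrt(-2.0d0*log(u1))*cos(8.0d0*atan(1.0d0)*u2)   ! Box-Muller draw
     th(t) = exp(rho*log(th(t-1)) + eps)
  end do

  do it = 1, maxit
     ! step 1: simulate capital and consumption under the current parameterization
     k(1) = 0.3d0
     do t = 1, t_len
        psi(t) = exp(b(1) + b(2)*log(k(t)) + b(3)*log(th(t)))
        c(t)   = 1.0d0/(delta*psi(t))                  ! Euler equation with log utility
        c(t)   = min(c(t), 0.99d0*th(t)*k(t)**alpha)   ! feasibility guard
        k(t+1) = th(t)*k(t)**alpha - c(t)
     end do
     ! step 2: fit the realized Euler-equation term on (1, log k, log theta)
     xtx = 0.0d0;  xty = 0.0d0
     do t = 1, t_len-1
        y = log( alpha*th(t+1)*k(t+1)**(alpha-1.0d0) / c(t+1) )
        x = [ 1.0d0, log(k(t)), log(th(t)) ]
        do i = 1, 3
           do j = 1, 3
              xtx(i,j) = xtx(i,j) + x(i)*x(j)
           end do
           xty(i) = xty(i) + x(i)*y
        end do
     end do
     call solve3(xtx, xty, bnew)
     ! step 3: damped update of the expectation parameters
     diff = maxval(abs(bnew - b))
     b = (1.0d0 - lam)*b + lam*bnew
     if (diff < 1.0d-6) exit
  end do
  print '(a,i4,a,3f10.5)', 'iterations: ', min(it, maxit), '   b = ', b

contains
  subroutine solve3(a, r, s)                 ! solve a 3x3 linear system by elimination
    double precision, intent(inout) :: a(3,3), r(3)
    double precision, intent(out)   :: s(3)
    double precision :: m
    integer :: ii, jj
    do jj = 1, 2
       do ii = jj+1, 3
          m = a(ii,jj)/a(jj,jj)
          a(ii,:) = a(ii,:) - m*a(jj,:)
          r(ii)   = r(ii) - m*r(jj)
       end do
    end do
    s(3) = r(3)/a(3,3)
    s(2) = (r(2) - a(2,3)*s(3))/a(2,2)
    s(1) = (r(1) - a(1,2)*s(2) - a(1,3)*s(3))/a(1,1)
  end subroutine solve3
end program sketch_pea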
Abstract:
Several accidents, some involving fatalities, have occurred on U.S. Highway 30 near the Archer Daniels Midland Company (ADM) Corn Sweeteners plant in Cedar Rapids, Iowa. A contributing factor to many of these accidents has been the large amounts of water (vapor and liquid) emitted from multiple sources at ADM's facility located along the south side of the highway. Weather and road closure data acquired from IDOT have been used to develop a database of meteorological conditions preceding and accompanying closure of Highway 30 in Cedar Rapids. An expert system and a FORTRAN program were developed as aids in decision making with regard to closure of Highway 30 near the plant. The computer programs were used for testing, evaluation, and final deployment. Reports indicate the decision tools have been successfully implemented and were judged to be helpful in forecasting road closures and in reducing costs and personnel time in monitoring the roadway.
Abstract:
The variogram is essential for local estimation and mapping of any variable by kriging. The variogram itself must usually be estimated from sample data. The sampling density is a compromise between precision and cost, but it must be sufficiently dense to encompass the principal spatial sources of variance. A nested, multi-stage sampling scheme with separating distances increasing in geometric progression from stage to stage will do that. The data may then be analyzed by a hierarchical analysis of variance to estimate the components of variance for every stage, and hence lag. By accumulating the components starting from the shortest lag, one obtains a rough variogram for modest effort. For balanced designs the analysis of variance is optimal; for unbalanced ones, however, these estimators are not necessarily the best, and analysis by residual maximum likelihood (REML) will usually be preferable. The paper summarizes the underlying theory and illustrates its application with data from three surveys: one in which the design had four stages and was balanced, and two implemented with unbalanced designs to economize when there were more stages. A Fortran program is available for the analysis of variance, and code for the REML analysis is listed in the paper. (c) 2005 Elsevier Ltd. All rights reserved.
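As a minimal illustration of the accumulation step described above (not the Fortran program referred to in the abstract; the lags and variance components are made-up example values), the sketch below sums the components from the shortest lag upwards to form the rough variogram.

! sketch_nested_variogram.f90 -- illustrative sketch only, not the program cited above.
program sketch_nested_variogram
  implicit none
  integer, parameter :: nstage = 4
  ! example separating distances (m) and variance components, finest stage first
  double precision :: lag(nstage)  = [ 0.6d0, 6.0d0, 60.0d0, 600.0d0 ]
  double precision :: comp(nstage) = [ 0.08d0, 0.12d0, 0.25d0, 0.15d0 ]
  double precision :: gamma_hat
  integer :: j

  gamma_hat = 0.0d0
  do j = 1, nstage
     gamma_hat = gamma_hat + comp(j)     ! accumulate the components up to this lag
     print '(a,f8.1,a,f8.3)', 'lag = ', lag(j), ' m    gamma = ', gamma_hat
  end do
end program sketch_nested_variogram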
Abstract:
This paper was developed as part of a broader research program on the political economy of exchange rate policies in Latin America and the Caribbean. We are grateful for helpful comments and suggestions from Jeff Frieden, Ernesto Stein, Jorge Streb, Marcelo Neri and seminar participants at Getulio Vargas Foundation, PUC-Rio, the IDB workshop on The Political Economy of Exchange Rate Policies in Latin America and the Caribbean, and the LACEA meeting in Buenos Aires. We thank René Garcia for providing us with a Fortran program for estimating the Markov Switching Model, Ilan Goldfajn for sending us updated estimates of the real exchange rate series of Goldfajn and Valdés (1996), Altamir Lopes and Ricardo Markwald for kindly furnishing data on Brazilian external accounts, and Carla Bernardes, Gabriela Domingues, Juliana Pessoa de Araújo and, especially, Marcelo Pinheiro for excellent research assistance. Both authors thank CNPq for a research fellowship.
Abstract:
We consider a procedure for obtaining a compact fourth-order method for the steady 2D Navier-Stokes equations in the streamfunction formulation using the computer algebra system Maple. The resulting code is short, and from it we obtain the Fortran program for the method. To test the procedure we have solved many cavity-type problems, including one with an analytical solution, and the results are compared with results obtained by second-order central differences for moderate Reynolds numbers. (c) 2005 Elsevier B.V. All rights reserved.
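As a small, self-contained illustration of what a compact fourth-order approximation is (not the authors' Maple-derived scheme for the streamfunction equations), the sketch below applies the classical one-dimensional compact relation (1/12) f''(i-1) + (10/12) f''(i) + (1/12) f''(i+1) = (f(i-1) - 2 f(i) + f(i+1)) / h^2, solves the resulting tridiagonal system with the Thomas algorithm, and checks the result against the exact second derivative of sin(x).

! sketch_compact4.f90 -- illustrative sketch only, not the scheme derived in the paper.
program sketch_compact4
  implicit none
  integer, parameter :: n = 41
  double precision :: x(n), f(n), d2f(n), rhs(n), a(n), b(n), c(n)
  double precision :: h, m, err
  integer :: i

  h = acos(-1.0d0)/dble(n-1)                 ! grid over [0, pi]
  do i = 1, n
     x(i) = dble(i-1)*h
     f(i) = sin(x(i))
  end do

  ! boundary values of f'' taken as known (exact); interior rows form a tridiagonal system
  d2f(1) = -sin(x(1))
  d2f(n) = -sin(x(n))
  do i = 2, n-1
     a(i) = 1.0d0/12.0d0
     b(i) = 10.0d0/12.0d0
     c(i) = 1.0d0/12.0d0
     rhs(i) = (f(i-1) - 2.0d0*f(i) + f(i+1))/h**2
  end do
  rhs(2)   = rhs(2)   - a(2)*d2f(1)
  rhs(n-1) = rhs(n-1) - c(n-1)*d2f(n)

  ! Thomas algorithm on rows 2 .. n-1
  do i = 3, n-1
     m = a(i)/b(i-1)
     b(i) = b(i) - m*c(i-1)
     rhs(i) = rhs(i) - m*rhs(i-1)
  end do
  d2f(n-1) = rhs(n-1)/b(n-1)
  do i = n-2, 2, -1
     d2f(i) = (rhs(i) - c(i)*d2f(i+1))/b(i)
  end do

  err = maxval(abs(d2f(2:n-1) + sin(x(2:n-1))))
  print '(a,es12.4)', 'max error of the compact 4th-order second derivative: ', err
end program sketch_compact4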
Abstract:
Graduate Program in Physics - IGCE
Abstract:
The measurement of physical reservoir parameters is of great importance for hydrocarbon detection. These parameters are obtained through amplitude analysis and the determination of reflection coefficients, which requires special processing techniques capable of correcting spherical-divergence effects. The problem can be stated as the following question: which effect is relatively more important in attenuating amplitudes, geometrical spreading or transmission loss? The question is motivated by the fact that the theoretical dynamic correction applied to real data accounts exclusively for geometrical spreading; yet a physical analysis of the problem from different directions casts doubt on the answer, which is interesting and at odds with practice. A more physically grounded answer can better support other work in progress. The present work aims to compute the spherical divergence according to the Newman-Gutenberg theory and to correct synthetic seismograms computed with the reflectivity method. The test model is crustal, so that critically refracted events are present in addition to reflections, which helps define the time window over which the spherical-divergence correction should be applied in order to recover the so-called "true amplitudes". The simulated medium consists of plane-horizontal, homogeneous, isotropic layers. The reflectivity method is a form of solution of the wave equation for this model, which makes the problem under study tractable. To obtain the results, synthetic seismograms were computed with the P-SV-SH program developed by Sandmeier (1998), together with curves of geometrical spreading as a function of time for the model studied, as described by Newman (1973). One of the conclusions is that, from the model data (velocities, thicknesses, densities and depths), an equation for the geometrical-spreading correction aimed at "true amplitudes" is not easy to obtain. The main goal should therefore be to obtain a panel of the spherical-divergence function with which to correct to true amplitudes.
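As a minimal illustration of the kind of correction discussed above (not the thesis's model or the Sandmeier P-SV-SH program; the layer values and the trace are made-up assumptions), the sketch below evaluates the commonly used zero-offset form of Newman's (1973) divergence gain, g(t) = v_rms(t)^2 * t / v_1, for a stack of horizontal layers and applies it sample by sample to a stand-in trace.

! sketch_divergence.f90 -- illustrative sketch only, not the program used in the work above.
program sketch_divergence
  implicit none
  integer, parameter :: nlay = 3, nsamp = 1000
  double precision :: vint(nlay), thick(nlay), dt2w(nlay)
  double precision :: trace(nsamp), gain, t, vrms2, dt, sumv2t, sumt
  integer :: i, k

  vint  = [ 2000.0d0, 3500.0d0, 5000.0d0 ]   ! interval velocities, m/s (example values)
  thick = [ 1000.0d0, 2000.0d0, 3000.0d0 ]   ! layer thicknesses, m (example values)
  dt2w  = 2.0d0*thick/vint                   ! two-way time spent in each layer, s
  dt    = 0.004d0                            ! sample interval, s
  call random_number(trace)                  ! stand-in for a synthetic trace

  do i = 1, nsamp
     t = dble(i-1)*dt
     ! RMS velocity down to time t, capped at the bottom of the model
     sumv2t = 0.0d0
     sumt   = 0.0d0
     do k = 1, nlay
        if (sumt + dt2w(k) >= t .or. k == nlay) then
           sumv2t = sumv2t + vint(k)**2*max(min(t - sumt, dt2w(k)), 0.0d0)
           exit
        end if
        sumv2t = sumv2t + vint(k)**2*dt2w(k)
        sumt   = sumt + dt2w(k)
     end do
     if (t > 0.0d0) then
        vrms2 = sumv2t/min(t, sum(dt2w))
     else
        vrms2 = vint(1)**2
     end if
     gain = vrms2*t/vint(1)                  ! Newman (1973) zero-offset divergence gain
     trace(i) = trace(i)*gain
  end do
  print '(a,f12.2)', 'divergence gain at the last sample: ', gain
end program sketch_divergence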