892 results for NETTRA-E1-FIFO (Computer program)
Abstract:
Nuclear reactor designs were classified into four generations (Gen) by the United States Department of Energy (DOE) when it introduced the concept of Generation IV (Gen IV) reactors. Gen IV reactors are a set of mostly theoretical nuclear reactor designs currently under research. Among the Gen IV designs are the ADS (Accelerator Driven Systems), which are subcritical systems stabilized by stationary external neutron sources. These external neutron sources are usually generated by the collision of high-energy protons with the nuclei of heavy metals present in the reactor core, a phenomenon known in the literature as spallation; the protons are accelerated in a particle accelerator powered by part of the energy generated by the reactor. The criticality of a system sustained by fission chain reactions depends on the balance between neutron production by fission and neutron removal by leakage through the boundaries and by absorption. A system is subcritical when removal by leakage and absorption exceeds production by fission, and it therefore tends to shut down. However, any subcritical system can be stabilized by placing stationary neutron sources in its interior. The central goal of this work is to determine the intensities of these uniform and isotropic neutron sources, to be inserted into all fuel regions of the system, so that the system stabilizes while generating a prescribed electric power distribution. To this end, a computer application was developed in the Java language that estimates the intensities of the stationary neutron sources that must be inserted into each fuel region to stabilize the subcritical system with a given user-defined power distribution. The adopted mathematical model was the one-dimensional monoenergetic neutron transport equation in the discrete ordinates (SN) formulation, and the conventional fine-mesh diamond difference (DD) method was used to numerically solve the physical and adjoint SN problems. Numerical results for two typical model problems are presented to illustrate the accuracy and efficiency of the proposed methodology.
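To make the numerical scheme concrete, the following is a minimal Java sketch, in the spirit of the application described above but not its actual code, of one-group slab-geometry SN source iteration using the diamond difference auxiliary relation (cell-average flux equals the average of the cell-edge fluxes); the mesh, cross sections, S4 quadrature, vacuum boundaries, and source placement are all illustrative assumptions.

// Minimal sketch (not the dissertation's application): one-group slab SN
// transport solved by source iteration with diamond difference (DD) sweeps.
public class SnDiamondDifferenceSketch {
    public static void main(String[] args) {
        int n = 100;                          // fine spatial mesh: 100 cells
        double dx = 0.1;                      // cell width (cm), assumed
        double sigT = 1.0, sigS = 0.9;        // total / scattering cross sections, assumed
        // S4 Gauss-Legendre quadrature on [-1, 1]; weights sum to 2
        double[] mu = {-0.8611363116, -0.3399810436, 0.3399810436, 0.8611363116};
        double[] w  = { 0.3478548451,  0.6521451549, 0.6521451549, 0.3478548451};
        double[] q0 = new double[n];          // stationary external source (assumed shape)
        for (int i = 40; i < 60; i++) q0[i] = 1.0;

        double[] phi = new double[n];         // scalar flux
        for (int iter = 0; iter < 1000; iter++) {
            double[] phiNew = new double[n];
            for (int m = 0; m < mu.length; m++) {
                double a = Math.abs(mu[m]) / dx;
                double psiIn = 0.0;           // vacuum boundary condition
                for (int k = 0; k < n; k++) {
                    int i = (mu[m] > 0) ? k : n - 1 - k;         // sweep follows sign of mu
                    double src = 0.5 * (sigS * phi[i] + q0[i]);  // isotropic scattering + source
                    double psiOut = (src + (a - 0.5 * sigT) * psiIn) / (a + 0.5 * sigT);
                    phiNew[i] += w[m] * 0.5 * (psiIn + psiOut);  // DD: cell flux = edge average
                    psiIn = psiOut;
                }
            }
            double diff = 0.0;
            for (int i = 0; i < n; i++) diff = Math.max(diff, Math.abs(phiNew[i] - phi[i]));
            phi = phiNew;
            if (diff < 1e-8) break;           // converged source iteration
        }
        System.out.printf("midplane scalar flux = %.6f%n", phi[n / 2]);
    }
}

The adjoint SN problem mentioned in the abstract has the same sweep structure with the transport direction reversed, which is why a single DD kernel can serve both the physical and adjoint calculations.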
Abstract:
Population parameters of Lepturacanthus savala from the trawl catches in the north-eastern part of the Bay of Bengal, Bangladesh, were investigated based on length-frequency data, using the complete ELEFAN computer program. The asymptotic length (L∞) and growth constant (K) were estimated to be 106.50 cm (total length) and 0.80/year, respectively. Based on these growth parameters, the total mortality (Z) was estimated to be 1.89/year. The estimated values for natural mortality (M) and fishing mortality (F) were 1.08/year and 0.81/year, respectively. The estimated value for the exploitation rate (E) using the length-converted catch curve was 0.43. The recruitment pattern showed two peaks per year. The estimated sizes of L. savala at 25, 50 and 75% probabilities of capture were 57.49, 60.39 and 63.28 cm, respectively. The estimated length-weight relationship for combined sexes was W = 0.00093TL^2.97.
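The reported rates are internally consistent, since total mortality is the sum Z = M + F and the exploitation rate is E = F/Z = 0.81/1.89 ≈ 0.43. A small illustrative Java check of this arithmetic and of the length-weight curve (not part of ELEFAN; the gram unit for W at an example length of 60 cm is an assumption):

// Illustrative arithmetic check of the reported population parameters.
public class SavalaParameters {
    public static void main(String[] args) {
        double M = 1.08, F = 0.81;            // natural and fishing mortality (per year)
        double Z = M + F;                      // total mortality: 1.89 per year
        double E = F / Z;                      // exploitation rate: 0.81 / 1.89 = 0.43
        double a = 0.00093, b = 2.97;          // length-weight coefficients, combined sexes
        double W = a * Math.pow(60.0, b);      // predicted weight at TL = 60 cm (unit assumed g)
        System.out.printf("Z = %.2f, E = %.2f, W(60 cm) = %.0f%n", Z, E, W);
    }
}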
Abstract:
The creep effects on sequentially built bridges are analysed by the theory of thermal creep. Two types of analysis are used: time dependent and steady state. The traditional uniform creep analysis is also introduced briefly. Both simplified and parabolic normalising creep-temperature functions are used in the analysis for comparison. Numerical examples are presented, calculated by a computer program based on the theory of thermal creep and using the displacement method. It is concluded that different assumptions within thermal creep can lead to very different results when compared with uniform creep analysis. The steady-state analysis of monolithically built structures can serve as a limit to evaluate total creep effects for both monolithically and sequentially built structures. The importance of the correct selection of the normalising creep-temperature function is demonstrated.
Abstract:
The paper describes an experimental and theoretical study of the deposition of small spherical particles from a turbulent air flow in a curved duct. The objective was to investigate the interaction between the streamline curvature of the primary flow and the turbulent deposition mechanisms of diffusion and turbophoresis. The experiments were conducted with particles of uranine (used as a fluorescent tracer) produced by an aerosol generator. The particles were entrained in an air flow which passed vertically downwards through a long straight channel of rectangular cross-section leading to a 90° bend. The inside surfaces of the channel and bend were covered with tape to collect the deposited particles. Following a test run, the tape was removed in sections, the uranine was dissolved in sodium hydroxide solution, and the deposition rates were established by measuring the uranine concentration with a luminescence spectrometer. The experimental results were compared with calculations of particle deposition in a curved duct using a computer program that solved the ensemble-averaged particle mass and momentum conservation equations. A particle density-weighted averaging procedure was used and the equations were expressed in terms of the particle convective, rather than total, velocity. This approach provided a simpler formulation of the particle turbulence correlations generated by the averaging process. The computer program was used to investigate the distance required to achieve a fully-developed particle flow in the straight entry channel as well as the variation of the deposition rate around the bend. The simulations showed good agreement with the experimental results. © 2012 Elsevier Ltd.
Abstract:
One of the most important issues facing the helicopter industry today is helicopter noise, in particular transonic rotor noise. It is the main factor limiting cruise speeds, and there is real demand for efficient and reliable prediction methods which can be used in the rotor design process. This paper considers the Ffowcs Williams-Hawkings equation applied to a permeable control surface. The surface is chosen to be as small as possible, while enclosing both the blade and any transonic flow regions. This allows the problematic quadrupole term to always be neglected, and requires only near-field CFD input data. It is therefore less computationally intensive than existing prediction methods, and moreover retains the physical interpretation of the sources in terms of thickness, loading and shock-associated noise. A computer program has been developed which implements the permeable-surface form of the retarded-time formulation. The program has been validated and subsequently used to validate an acoustic 2-D CFD code. It is fast and reliable for subsonic motion, but it is demonstrated that it cannot be used at high subsonic or supersonic speeds. A second computer program implementing a more general formulation has also been developed and is presently being validated. This general formulation can be applied at high subsonic and supersonic speeds, except under one specific condition. © 2002 by the author(s). Published by the American Institute of Aeronautics and Astronautics, Inc.
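At the heart of a retarded-time formulation is the implicit emission-time equation tau = t - |x - y(tau)|/c, normally solved by fixed-point iteration; the iteration contracts only when the source moves subsonically, which is consistent with the breakdown at high subsonic and supersonic speeds reported above. A hedged Java sketch with an invented circular source path (all parameters are illustrative assumptions, not values from the paper):

// Sketch: solve tau = t - |x - y(tau)| / c0 for a point moving on a circle.
public class RetardedTimeSketch {
    static final double C0 = 340.0;               // speed of sound (m/s)
    static double[] sourcePos(double tau) {       // source point on a circle, tip Mach ~ 0.59
        double omega = 40.0, r = 5.0;             // rad/s and m, assumed
        return new double[]{ r * Math.cos(omega * tau), r * Math.sin(omega * tau), 0.0 };
    }
    static double retardedTime(double[] x, double t) {
        double tau = t;                           // initial guess: emission = reception time
        for (int k = 0; k < 200; k++) {
            double[] y = sourcePos(tau);
            double dist = Math.sqrt(Math.pow(x[0] - y[0], 2)
                    + Math.pow(x[1] - y[1], 2) + Math.pow(x[2] - y[2], 2));
            double next = t - dist / C0;          // fixed point; contraction ~ source Mach number
            if (Math.abs(next - tau) < 1e-12) return next;
            tau = next;
        }
        return tau;                               // at supersonic speeds the root may be non-unique
    }
    public static void main(String[] args) {
        System.out.printf("emission time = %.9f s%n",
                retardedTime(new double[]{100.0, 0.0, 0.0}, 1.0));
    }
}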
Abstract:
The discipline of Artificial Intelligence (AI) was born in the summer of 1956 at Dartmouth College in Hanover, New Hampshire. Half a century has passed, and AI has turned into an important field whose influence on our daily lives can hardly be overestimated. The original view of intelligence as a computer program - a set of algorithms to process symbols - has led to many useful applications now found in internet search engines, voice recognition software, cars, home appliances, and consumer electronics, but it has not yet contributed significantly to our understanding of natural forms of intelligence. Since the 1980s, AI has expanded into a broader study of the interaction between the body, brain, and environment, and how intelligence emerges from such interaction. This advent of embodiment has provided an entirely new way of thinking that goes well beyond artificial intelligence proper, to include the study of intelligent action in agents other than organisms or robots. For example, it supplies powerful metaphors for viewing corporations, groups of agents, and networked embedded devices as intelligent and adaptive systems acting in highly uncertain and unpredictable environments. In addition to giving us a novel outlook on information technology in general, this broader view of AI also offers unexpected perspectives into how to think about ourselves and the world around us. In this chapter, we briefly review the turbulent history of AI research, point to some of its current trends, and to challenges that the AI of the 21st century will have to face. © Springer-Verlag Berlin Heidelberg 2007.
Abstract:
An automated and semi-intelligent voltammetric system is described for trace metal analysis. The system consists of a voltammeter interfaced with a personal computer, a sample changer, two peristaltic pumps, a motor burette and a hanging mercury drop electrode. The system fully automatically carries out approximately five metal determinations per hour (including at least three repetitive scans and calibration by standard addition) at trace levels encountered in clean sea water. The computer program decides what level of standard addition to use and evaluates the data prior to switching to the next sample. Alternatively, the system can be used to carry out complexing ligand titrations with copper whilst recording the labile copper concentration; in this mode up to eight full titrations are carried out per day. Depth profiles for chromium speciation in the Mediterranean Sea and a profile for copper complexing ligand concentrations in the North Atlantic Ocean, measured on board ship with the system, are presented. The chromium speciation was determined using a new method to differentiate between Cr(III) and Cr(VI) utilizing adsorption of Cr(III) on silica particles.
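As an illustration of the standard-addition evaluation such a program automates (this is not the system's code), peak current can be regressed against added concentration and the sample concentration read from the magnitude of the x-intercept; the data values below are invented:

// Sketch: least-squares standard-addition calibration on invented data.
public class StandardAdditionSketch {
    public static void main(String[] args) {
        double[] added  = {0.0, 1.0, 2.0, 3.0};   // added metal standard (nM)
        double[] signal = {2.1, 4.0, 6.2, 8.1};   // peak current (nA), hypothetical
        int n = added.length;
        double sx = 0, sy = 0, sxy = 0, sxx = 0;
        for (int i = 0; i < n; i++) {
            sx += added[i]; sy += signal[i];
            sxy += added[i] * signal[i]; sxx += added[i] * added[i];
        }
        double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        double intercept = (sy - slope * sx) / n;
        // sample concentration = |x-intercept| = intercept / slope
        System.out.printf("sample concentration = %.2f nM%n", intercept / slope);
    }
}

A program deciding "what level of standard addition to use" would then pick spike sizes comparable to the estimated native signal, one common rule of thumb for standard additions.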
Abstract:
The identification of near-native protein-protein complexes among a set of decoys remains highly challenging. A strategy for improving the success rate of near-native detection is to enrich near-native docking decoys in a small number of top-ranked decoys. Recently, we found that a combination of three scoring functions (energy, conservation, and interface propensity) can predict the location of binding interface regions with reasonable accuracy. Here, these three scoring functions are modified and combined into a consensus scoring function called ENDES for enriching near-native docking decoys. We found that all individual scores result in enrichment for the majority of the 28 targets in the ZDOCK2.3 decoy set and the 22 targets in Benchmark 2.0. Among the three scores, the interface propensity score yields the highest enrichment in both sets of protein complexes. When these scores are combined into the ENDES consensus score, a significant increase in the enrichment of near-native structures is found. For example, when 2000 docking decoys are reduced to 200 decoys by ENDES, the fraction of near-native structures in the docking decoys increases by a factor of about six on average. ENDES was implemented into a computer program that is available for download at http://sparks.informatics.iupui.edu.
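A hedged sketch of the consensus idea, not the ENDES implementation: normalize each per-decoy score, sum the three, and keep only the top-ranked subset; the random placeholder scores and the equal weighting are illustrative assumptions.

import java.util.*;

// Sketch: combine three normalized scores and keep the top 200 of 2000 decoys.
public class ConsensusRankSketch {
    static double[] zscore(double[] s) {
        double mean = Arrays.stream(s).average().orElse(0.0);
        double sd = Math.sqrt(Arrays.stream(s).map(v -> (v - mean) * (v - mean)).average().orElse(1.0));
        double d = sd > 0 ? sd : 1.0;
        return Arrays.stream(s).map(v -> (v - mean) / d).toArray();
    }
    public static void main(String[] args) {
        int n = 2000, keep = 200;
        Random rng = new Random(42);
        double[] energy = new double[n], conservation = new double[n], propensity = new double[n];
        for (int i = 0; i < n; i++) {
            energy[i] = rng.nextGaussian();        // placeholder scores; higher = better here
            conservation[i] = rng.nextGaussian();
            propensity[i] = rng.nextGaussian();
        }
        double[] e = zscore(energy), c = zscore(conservation), p = zscore(propensity);
        Integer[] idx = new Integer[n];
        for (int i = 0; i < n; i++) idx[i] = i;
        // sort decoys by descending combined score, then keep the first 'keep' of them
        Arrays.sort(idx, (x, y) -> Double.compare(e[y] + c[y] + p[y], e[x] + c[x] + p[x]));
        System.out.println("kept " + keep + " of " + n + " decoys; best index " + idx[0]);
    }
}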
Abstract:
Random amplified polymorphic DNA (RAPD) analysis was applied to germplasm characterization in 33 different selected Laminaria male and female gametophytes. Verification of the positional homology of the RAPD bands using the sequence characterized amplified region (SCAR) method was successfully conducted. A total of 233 polymorphic loci were obtained from 18 selected primers after screening, of which 27 stable and clear bands were selected to construct a fingerprint map for discrimination of each gametophyte. Seven RAPD markers from five primers were finally determined by a computer program to construct the fingerprint map. Three specific markers closely related to the gametophytes were obtained and were converted to gametophytic SCAR markers, the first SCAR markers reported for Laminaria germplasm and applicable to cultivar identification. These results demonstrate the feasibility of applying RAPD markers to germplasm characterization in selected Laminaria gametophytes and can provide a molecular basis for breeding new Laminaria strains. © 2004 Elsevier B.V. All rights reserved.
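One way a program could choose a small set of discriminating markers, offered purely as an illustration of the task rather than as the authors' method, is greedy selection over binary band-presence patterns until every pair of gametophytes is separated; the 0/1 band matrix below is invented:

import java.util.*;

// Sketch: greedily pick markers until all pairs of individuals are distinguished.
public class MarkerSelectionSketch {
    public static void main(String[] args) {
        int[][] bands = {           // rows: gametophytes; columns: candidate markers
            {1, 0, 1, 0, 1},
            {0, 1, 1, 0, 0},
            {1, 1, 0, 1, 0},
            {0, 0, 1, 1, 1}
        };
        int n = bands.length, m = bands[0].length;
        Set<Integer> chosen = new LinkedHashSet<>();
        List<int[]> unresolved = new ArrayList<>();
        for (int a = 0; a < n; a++)
            for (int b = a + 1; b < n; b++) unresolved.add(new int[]{a, b});
        while (!unresolved.isEmpty()) {
            int best = -1, bestCount = 0;
            for (int j = 0; j < m; j++) {         // marker that splits the most unresolved pairs
                if (chosen.contains(j)) continue;
                int count = 0;
                for (int[] pr : unresolved) if (bands[pr[0]][j] != bands[pr[1]][j]) count++;
                if (count > bestCount) { bestCount = count; best = j; }
            }
            if (best < 0) break;                  // remaining pairs cannot be separated
            final int j = best;
            chosen.add(j);
            unresolved.removeIf(pr -> bands[pr[0]][j] != bands[pr[1]][j]);
        }
        System.out.println("selected markers: " + chosen);
    }
}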
Abstract:
The Kineticist's Workbench is a computer program currently under development whose purpose is to help chemists understand, analyze, and simplify complex chemical reaction mechanisms. This paper discusses one module of the program that numerically simulates mechanisms and constructs qualitative descriptions of the simulation results. These descriptions are given in terms that are meaningful to the working chemist (e.g., steady states, stable oscillations, and so on); and the descriptions (as well as the data structures used to construct them) are accessible as input to other programs.
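A toy Java sketch of the simulate-then-describe idea, not the Workbench itself: integrate first-order kinetics for a reaction A -> B and emit the qualitative label "steady state" once the concentration derivative falls below a threshold. The rate constant, step size, and threshold are assumptions.

// Sketch: numerical simulation plus a qualitative steady-state description.
public class QualitativeSimSketch {
    public static void main(String[] args) {
        double a = 1.0, b = 0.0;               // concentrations of A and B
        double k = 0.5, dt = 0.01;             // rate constant (1/s) and time step (s), assumed
        for (int step = 1; step <= 100000; step++) {
            double rate = k * a;               // first-order kinetics: dA/dt = -kA
            a -= rate * dt;
            b += rate * dt;
            if (rate < 1e-9) {                 // |dA/dt| ~ 0: report a qualitative steady state
                System.out.printf("steady state near t = %.1f s: A = %.2e, B = %.6f%n",
                        step * dt, a, b);
                return;
            }
        }
        System.out.println("no steady state detected in the simulated interval");
    }
}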
Abstract:
The Bifurcation Interpreter is a computer program that autonomously explores the steady-state orbits of one-parameter families of periodically driven oscillators. To report its findings, the Interpreter generates schematic diagrams and English text descriptions similar to those appearing in the science and engineering research literature. Given a system of equations as input, the Interpreter uses symbolic algebra to automatically generate numerical procedures that simulate the system. The Interpreter incorporates knowledge about dynamical systems theory, which it uses to guide the simulations, to interpret the results, and to minimize the effects of numerical error.
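A hedged sketch of the kind of numerical experiment such an interpreter automates: sweep the forcing amplitude of a periodically driven oscillator, sample the state once per drive period (a Poincare section) after transients decay, and classify the period of the steady-state orbit. The Duffing-style oscillator and every constant below are invented for illustration; a production code would use a proper ODE integrator rather than explicit Euler.

// Sketch: parameter sweep, Poincare sampling, and orbit-period classification.
public class OrbitScanSketch {
    public static void main(String[] args) {
        double omega = 1.0, T = 2 * Math.PI / omega;
        for (double force = 0.1; force <= 0.51; force += 0.1) {
            double x = 0.1, v = 0.0, t = 0.0, dt = T / 2000;
            double[] samples = new double[32];
            for (int period = 0; period < 300 + samples.length; period++) {
                for (int i = 0; i < 2000; i++) {         // one drive period of small Euler steps
                    double acc = -0.2 * v + x - x * x * x + force * Math.cos(omega * t);
                    x += v * dt; v += acc * dt; t += dt;
                }
                if (period >= 300) samples[period - 300] = x;  // record after transients decay
            }
            int p = detectPeriod(samples);
            System.out.printf("F = %.1f -> %s%n", force,
                    p > 0 ? "period-" + p + " orbit" : "no short period found");
        }
    }
    static int detectPeriod(double[] s) {                // smallest p with s[i] ~ s[i + p]
        for (int p = 1; p <= s.length / 2; p++) {
            boolean ok = true;
            for (int i = 0; i + p < s.length; i++)
                if (Math.abs(s[i] - s[i + p]) > 1e-3) { ok = false; break; }
            if (ok) return p;
        }
        return -1;
    }
}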
Abstract:
Humans rapidly and reliably learn many kinds of regularities and generalizations. We propose a novel model of fast learning that exploits the properties of sparse representations and the constraints imposed by a plausible hardware mechanism. To demonstrate our approach we describe a computational model of acquisition in the domain of morphophonology. We encapsulate phonological information as bidirectional boolean constraint relations operating on the classical linguistic representations of speech sounds in terms of distinctive features. The performance model is described as a hardware mechanism that incrementally enforces the constraints. Phonological behavior arises from the action of this mechanism. Constraints are induced from a corpus of common English nouns and verbs. The induction algorithm compiles the corpus into increasingly sophisticated constraints. The algorithm yields one-shot learning from a few examples. Our model has been implemented as a computer program. The program exhibits phonological behavior similar to that of young children. As a bonus, the constraints that are acquired can be interpreted as classical linguistic rules.
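A toy illustration, far simpler than the model described, of a boolean constraint over distinctive features: in English plurals the voicing of the suffix agrees with the voicing of the stem-final segment (cats with /s/, dogs with /z/). The feature table and four-word corpus below are simplified assumptions; the point is only that one pass over a few examples suffices to confirm such a constraint.

import java.util.*;

// Sketch: check a voicing-agreement constraint against a tiny corpus.
public class VoicingConstraintSketch {
    static final Map<Character, Boolean> VOICED = Map.of(
        't', false, 'k', false, 'p', false,    // voiceless stops
        'd', true, 'g', true, 'b', true,       // voiced stops
        'n', true, 'm', true);                 // nasals (voiced)
    public static void main(String[] args) {
        String[][] corpus = {{"cat", "s"}, {"dog", "z"}, {"cab", "z"}, {"book", "s"}};
        boolean consistent = true;
        for (String[] pair : corpus) {
            boolean stemVoiced = VOICED.get(pair[0].charAt(pair[0].length() - 1));
            boolean suffixVoiced = pair[1].equals("z");
            consistent &= (stemVoiced == suffixVoiced);   // agreement must hold in every example
        }
        System.out.println(consistent
            ? "induced constraint: suffix voicing agrees with stem-final voicing"
            : "no single agreement constraint fits the corpus");
    }
}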
Abstract:
This thesis proposes a computational model of how children may come to learn the meanings of words in their native language. The proposed model is divided into two separate components. One component produces semantic descriptions of visually observed events while the other correlates those descriptions with co-occurring descriptions of those events in natural language. The first part of this thesis describes three implementations of the correlation process whereby representations of the meanings of whole utterances can be decomposed into fragments assigned as representations of the meanings of individual words. The second part of this thesis describes an implemented computer program that recognizes the occurrence of simple spatial motion events in simulated video input.
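A minimal sketch of the cross-situational correlation idea behind the second component: each word's hypothesized meaning starts unconstrained and is intersected with the set of conceptual symbols present in every event the word co-occurs with. The three-utterance corpus is invented for illustration.

import java.util.*;

// Sketch: word meanings as intersections of co-occurring event symbols.
public class CrossSituationalSketch {
    public static void main(String[] args) {
        String[][][] corpus = {                 // {utterance words} paired with {event symbols}
            {{"ball", "rolls"}, {"BALL", "ROLL"}},
            {{"ball", "drops"}, {"BALL", "FALL"}},
            {{"cup", "drops"},  {"CUP", "FALL"}}
        };
        Map<String, Set<String>> meaning = new HashMap<>();
        for (String[][] pair : corpus) {
            Set<String> event = new HashSet<>(Arrays.asList(pair[1]));
            for (String word : pair[0])
                meaning.merge(word, new HashSet<>(event),
                        (old, cur) -> { old.retainAll(cur); return old; });
        }
        meaning.forEach((w, s) -> System.out.println(w + " -> " + s));
        // after three utterances: ball -> {BALL}, drops -> {FALL}
    }
}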
Abstract:
This report describes MM, a computer program that can model a variety of mechanical and fluid systems. Given a system's structure and qualitative behavior, MM searches for models using an energy-based modeling framework. MM uses general facts about physical systems to relate behavioral and model properties. These facts enable a more focussed search for models than would be obtained by mere comparison of desired and predicted behaviors. When these facts do not apply, MM uses behavior-constrained qualitative simulation to verify candidate models efficiently. MM can also design experiments to distinguish among multiple candidate models.
Abstract:
This research is concerned with designing representations for analytical reasoning problems (of the sort found on the GRE and LSAT). These problems test the ability to draw logical conclusions. A computer program was developed that takes as input a straightforward predicate calculus translation of a problem, requests additional information if necessary, decides what to represent and how, designs representations capturing the constraints of the problem, and creates and executes a LISP program that uses those representations to produce a solution. Even though these problems are typically difficult for theorem provers to solve, the LISP program that uses the designed representations is very efficient.
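As an illustration of the problem class rather than of the thesis program itself, an LSAT-style ordering puzzle can be solved by enumerating permutations against constraints expressed as predicates; the puzzle below is invented:

import java.util.*;

// Sketch: brute-force an ordering puzzle under three invented constraints.
public class OrderingPuzzleSketch {
    public static void main(String[] args) {
        permute(new ArrayList<>(Arrays.asList("A", "B", "C", "D")), 0);
    }
    static void permute(List<String> p, int k) {
        if (k == p.size()) {
            if (satisfies(p)) System.out.println("valid order: " + p);
            return;
        }
        for (int i = k; i < p.size(); i++) {
            Collections.swap(p, k, i);
            permute(p, k + 1);
            Collections.swap(p, k, i);          // undo the swap before the next branch
        }
    }
    // constraints: A precedes B; C is not last; D is immediately after A
    static boolean satisfies(List<String> p) {
        return p.indexOf("A") < p.indexOf("B")
            && p.indexOf("C") != p.size() - 1
            && p.indexOf("D") == p.indexOf("A") + 1;
    }
}

A representation-design program of the kind described would go further, choosing specialized data structures so that the generated solver avoids this kind of blind enumeration.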