957 results for Nuclear engineering inverse problems
Abstract:
A new set of manufacturing technologies has emerged in recent decades to address market requirements in a customized way and to support research tasks that require prototypes. These techniques and technologies are usually referred to as rapid prototyping and manufacturing technologies, and they allow prototypes to be produced in a wide range of materials with remarkable precision in a matter of hours. Although they have been rapidly incorporated into product development methodologies, they are still under development, and their applications in bioengineering are continuously evolving. Rapid prototyping and manufacturing technologies can assist in every stage of the development process of novel biodevices, helping to address the problems that arise from the devices' interaction with biological systems and the need to test design decisions carefully. This review focuses on the main fields of application of rapid prototyping in biomedical engineering and health sciences, as well as on the most remarkable challenges and research trends.
Abstract:
In this paper, a fully automatic goal-oriented hp-adaptive finite element strategy for open-region electromagnetic problems (radiation and scattering) is presented. The methodology leads to exponential rates of convergence in terms of an upper bound of a user-prescribed quantity of interest. Thus, the adaptivity may be guided to provide an optimal error, not globally for the field in the whole finite element domain, but for specific parameters of engineering interest. For instance, the error in the numerical computation of the S-parameters of an antenna array, of the field radiated by an antenna, or of the Radar Cross Section in given directions can be minimized. The efficiency of the approach is illustrated with several numerical simulations on two-dimensional problem domains. Results include a comparison with the previously developed energy-norm based hp-adaptivity.
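For orientation, the dual-weighted-residual idea that typically underlies goal-oriented adaptivity can be stated schematically (notation assumed here, not taken from the paper): if u_h is the Galerkin solution, r(u_h)(·) the weak residual, and z the solution of the adjoint problem driven by the quantity of interest Q, then

\[
Q(u) - Q(u_h) \;=\; r(u_h)(z) \;\approx\; \sum_{K} \eta_K ,
\qquad
\eta_K = r(u_h)(z)\big|_{K},
\]

and refinement is driven by the largest local indicators \eta_K, so that the error in Q itself, rather than a global energy norm, decreases at an exponential rate under hp-refinement.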
Abstract:
This research is concerned with the experimental software engineering area, specifically experiment replication. Replication has traditionally been viewed as a complex task in software engineering, possibly due to the present immaturity of the experimental paradigm as applied to software development. Researchers usually use replication packages to replicate an experiment. However, replication packages are not the solution to all the information management problems that crop up when successive replications of an experiment accumulate. This research borrows ideas from the software configuration management and software product line paradigms to support the replication process. We believe that configuration management can help to manage and administer information from one replication to another: hypotheses, designs, data analysis, etc. The software product line paradigm can help to organize and manage any changes introduced into the experiment by each replication. We expect the union of the two paradigms in replication to improve the planning, design and execution of further replications and their alignment with existing replications. Additionally, this research will contribute a web support environment for archiving information related to different experiment replications. It will also provide information management support flexible enough for running replications with different numbers and types of changes, and it will afford massive storage of data from different replications. All experimenters working collaboratively on the same experiment must have access to the different replications.
Abstract:
There is no empirical evidence whatsoever to support most of the beliefs on which software construction is based. We do not yet know the adequacy, limits, qualities, costs and risks of the technologies used to develop software. Experimentation helps to check beliefs and opinions and convert them into facts. This research is concerned with the replication area. Replication is a key component for gathering empirical evidence on software development that can be used in industry to build better software more efficiently. Replication has not been easy to do in software engineering (SE) because the experimental paradigm applied to software development is still immature. Nowadays, a replication is executed mostly using a traditional replication package. But traditional replication packages do not appear, for some reason, to have been as effective as expected for transferring information among researchers in SE experimentation. The trouble spot appears to be the replication set-up, caused by version management problems with materials, instruments, documents, etc. This has proved to be an obstacle to obtaining enough details about the experiment to be able to reproduce it as exactly as possible. We address the problem of information exchange among experimenters by developing a schema to characterize replications. We will adapt configuration management and product line ideas to support the experimentation process. This will enable researchers to make systematic decisions based on explicit knowledge rather than assumptions about replications. This research will output a replication support web environment. This environment will not only archive but also manage experimental materials flexibly enough to allow both similar and differentiated replications, with massive experimental data storage. The platform should be accessible to several research groups working together on the same families of experiments.
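As a purely hypothetical sketch of the kind of schema such an environment might manage (all names invented here, not taken from this research): each replication is recorded as a configuration, i.e. a baseline plus an explicit, versioned list of artifacts and changes.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass(frozen=True)
class Artifact:
    # One versioned experimental artifact (hypothesis, design, material, analysis...).
    name: str
    version: str
    kind: str

@dataclass
class Replication:
    # A replication as a configuration: a baseline plus its explicit deltas.
    experiment: str
    replication_id: str
    baseline: Optional[str]                            # id of the replication it derives from
    artifacts: List[Artifact] = field(default_factory=list)
    changes: List[str] = field(default_factory=list)   # what was varied, and why

# A differentiated replication reuses the baseline artifacts and records its deltas.
r1 = Replication("UML-comprehension", "R1", None,
                 [Artifact("design", "1.0", "design")])
r2 = Replication("UML-comprehension", "R2", "R1",
                 [Artifact("design", "1.1", "design")],
                 changes=["subjects: students -> practitioners"])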
Abstract:
The problems being addressed involve the dynamic interaction of solids (structure and foundation) with a liquid (water). Various numerical procedures are reviewed and employed to solve the problem of establishing the expected response of a structure subjected to seismic excitations while duly accounting for those interactions. The methodology is applied to the analysis of dams, lock gates, and large storage tanks, incorporating in some cases a comparison with the results produced by means of simplified analytical procedures.
Abstract:
Engineers must have sufficient theoretical knowledge to solve specific problems, together with the capacity to simplify their approaches while taking into account factors such as speed, simplicity, quality and economy. The ultimate goal of Geology is to reconstruct the history of geological events through observation, deduction and reasoning and, in exceptional cases, through direct underground exploration or experimentation. Experimentation in Geology is very limited: reproducing certain geological phenomena or processes in the laboratory is difficult because of the large time and space scales involved. For this reason, some Earth Sciences remain at a largely descriptive stage, whereas those closest to experimentation, Geophysics and Geochemistry, have assimilated the progress made in Physics and Chemistry. Thus, Anglo-Saxon countries clearly separate Engineering Geology from Geological Engineering. Although the professional overlap is large, the former follows a scientific approach, while the latter follows a technological one. Applied Geology could be defined as the application of geological science to the design, construction and performance of engineering infrastructures, and it is essentially a field discipline. There has been much discussion on the primacy of theory over practice. Today an overemphasis on practice prevails, which produces good routine workers but mediocre teachers; this view forgets that teaching is a problem of finding the right balance. The action lines of the European Higher Education Area (EHEA) framework provide for such a balance. The Applied Geology course represents the students' first real contact with the physical environment, with professional practice and with engineering works. Moreover, its position early in the study plans links it, for many students, to later subjects and topics of the degree (tunnels, dams, groundwater, roads, etc.). This work analyses in depth the justification for such field trips, presenting the criteria and methods used to plan them and the results they produce in students. Once the field-trip experience had been developed, the objective was to assess the results and the changes in students' motivation from a learning perspective, independently of their knowledge achievements, which are assessed separately and are not the subject of this work. To this end, a completely anonymous survey on student motivation before and after the trip was designed by the Unidad Docente de Geología Aplicada of the Departamento de Ingeniería y Morfología del Terreno (Escuela Técnica Superior de Ingenieros de Caminos, Canales y Puertos, Universidad Politécnica de Madrid). Its objective was to collect the opinion of students as key agents in the learning and teaching of the subject. All this work takes place under the new teaching/learning criteria of the European Higher Education framework. The results are exceptionally good, with 90% student participation and very high scores on a number of questions such as the itineraries, the teachers and the places visited (from 4.2 to 4.5 on a 5-point scale). The majority of students are very satisfied (average of 4.5 on a 5-point scale).
Abstract:
The analysis of deformation in soils is of paramount importance in geotechnical engineering. For a long time the complex behaviour of natural deposits defied the ingenuity of engineers. Now, with the aid of computers, numerical methods allow the solution of almost any problem, provided the material law can be specified with sufficient accuracy. Boundary element techniques (B.E.) have recently flourished in a wealth of methods and applications that compare advantageously with other well-established procedures such as the finite element method (F.E.). Their application to soil mechanics problems (Brebbia 1981) has started and will grow in the future. This paper tries to present a simple formulation for a classical problem. In fact, there is already a large body of applications of B.E. to diffusion problems (Rizzo et al., Shaw, Chang et al., Combescure et al., Wrobel et al., Roures et al., Onishi et al.), and very recently the first specific application to consolidation problems has been published by Onishi et al. Here we develop an alternative formulation to that presented in the last reference. Fundamentally, the idea is to introduce a finite difference discretization in the time domain in order to use the fundamental solution of a Helmholtz-type equation governing the pore (neutral) pressure distribution. Although this procedure seems to have gone unappreciated in the previous technical literature, it is effective and straightforward to implement. Indeed, for the problem under study it is perfectly suited, because a step-by-step interaction between the elastic and flow problems is needed. It also allows the introduction of non-linear elastic properties and time-dependent conditions very easily, as will be shown, and compares well with the performance of other approaches.
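A minimal sketch of the time-discretization idea, assuming a linear consolidation model in which the pore pressure p obeys a diffusion equation with coefficient c: applying a backward (implicit) finite difference in time,

\[
\frac{\partial p}{\partial t} = c\,\nabla^{2} p
\quad\Longrightarrow\quad
\frac{p^{\,n+1}-p^{\,n}}{\Delta t} = c\,\nabla^{2} p^{\,n+1}
\quad\Longrightarrow\quad
\nabla^{2} p^{\,n+1}-\frac{1}{c\,\Delta t}\,p^{\,n+1} = -\frac{1}{c\,\Delta t}\,p^{\,n},
\]

a modified Helmholtz equation at each time step, whose well-known fundamental solution (proportional to the Bessel function K_0(r/\sqrt{c\,\Delta t}) in two dimensions) can be used to build the boundary element formulation; the right-hand side carries the previous step and provides the step-by-step coupling with the elastic problem.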
Abstract:
Most flows of engineering relevance remain unexplored in the context of global instability theory, for two reasons: first, the difficulties associated with the analysis of turbulent flows and, second, the formidable computational resources required to solve the eigenvalue problem associated with the instability analysis of three-dimensional base flows, also known as the TriGlobal problem. This thesis addresses the problem associated with three-dimensionality. A general methodology has been developed to solve modal global linear instability analysis problems by coupling time-stepping methods, developed in this work, with the second-order computational fluid dynamics codes generally used in industry. The methodology solves the eigenvalue problem by projection onto Krylov subspaces, with the particularity that these subspaces are generated by time integration of an initial vector using any computational fluid dynamics code.
Three flows, challenging both in terms of the computational resources required and in their physical complexity, have been chosen to demonstrate the present methodology: (i) the flow inside a wall-bounded three-dimensional lid-driven cavity, (ii) the flow past a cylinder fitted with helical strakes along its span, and (iii) the flow over an inhomogeneous three-dimensional open cavity. Results in excellent agreement with the literature have been obtained for the three-dimensional lid-driven cavity by using this methodology coupled with the incompressible solver of the open-source toolbox OpenFOAM®, which has served as validation. Moreover, significant physical insight into the instability of three-dimensional open flows has been gained through the application of the present time-stepping methodology to the other two cases. In addition, modifications to the present approach have been proposed in order to perform adjoint instability analysis of three-dimensional base flows and flow control; validation and TriGlobal examples are presented. Finally, it has been demonstrated that the moderate amount of computational resources required for the solution of the TriGlobal eigenvalue problem with this method, together with its ability to couple to any aerodynamic code, enables the instability analysis and control of flows of industrial relevance.
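A minimal sketch of the time-stepping idea described above, with a toy linear propagator standing in for the CFD code (all names and sizes invented here): the Krylov subspace is generated by repeatedly calling a black-box routine that advances a perturbation over one time interval, and the eigenvalues of the small Hessenberg matrix (Ritz values of the propagator) are mapped back to the eigenvalues governing instability.

import numpy as np
from scipy.linalg import expm

def arnoldi_from_timestepper(advance, u0, m):
    # Build an m-dimensional Krylov basis using only a black-box time-stepper:
    # one call to advance() = one application of the propagator exp(A*dt).
    n = u0.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = u0 / np.linalg.norm(u0)
    for j in range(m):
        w = advance(V[:, j])
        for i in range(j + 1):          # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

# Toy stand-in for the linearized CFD code: exact propagator of du/dt = A u.
rng = np.random.default_rng(0)
A = -np.diag(np.arange(1.0, 101.0)) + 0.01 * rng.standard_normal((100, 100))
dt = 0.1
P = expm(A * dt)

V, H = arnoldi_from_timestepper(lambda u: P @ u, rng.standard_normal(100), m=40)
mu = np.linalg.eigvals(H[:40, :40]).astype(complex)   # Ritz values of exp(A*dt)
lam = np.log(mu) / dt                                 # eigenvalues of A itself
print(lam[np.argsort(-lam.real)][:3])                 # least-stable modes, Re ~ -1, -2, -3

In the actual methodology the lambda u: P @ u call would be replaced by a time integration of the linearized equations with the industrial CFD solver, which is what keeps memory requirements moderate: only flow-field snapshots are stored, never the Jacobian matrix.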
Abstract:
There is general agreement within the scientific community that Biology is the science with the greatest potential for development in the 21st century. This is due to several reasons, but probably the most important one is the state of development of the other experimental and technological sciences. In this context, there is a very rich variety of mathematical tools, physical techniques and computational resources that make it possible to carry out biological experiments that were unthinkable only a few years ago. Biology is nowadays taking advantage of all these newly developed technologies, which are being applied to the life sciences, opening new research fields and helping to give new insights into many biological problems. Consequently, biologists have greatly improved their knowledge in many key areas, such as human function and human disease. However, there is one human organ that is still barely understood compared with the rest: the human brain. Understanding the human brain is one of the main challenges of the 21st century, and it is considered a strategic research field for the European Union and the USA. There is thus great interest in applying new experimental techniques to the study of brain function. Magnetoencephalography (MEG) is one of these novel techniques currently applied to mapping brain activity. This technique has important advantages over metabolism-based brain imaging techniques such as functional Magnetic Resonance Imaging (fMRI). The main advantage is that MEG has a higher temporal resolution than fMRI. Another benefit of MEG is that it is a patient-friendly clinical technique: the measurement set-up is wireless and the patient is not exposed to any radiation. Although MEG is widely applied in clinical studies, there are still open issues regarding data analysis. The present work deals with the solution of the inverse problem in MEG, which is the most controversial and uncertain part of the analysis process. This question is addressed using several variations of a new solving algorithm based on a heuristic method. The performance of these methods is analyzed by applying them to several test cases with known solutions and comparing those solutions with the ones provided by our methods.
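The abstract does not specify the heuristic, so as a purely illustrative sketch of the general procedure (minimize the misfit between measured and predicted fields on a test case with a known solution), a simple random-search heuristic over a toy linear forward model might look like this; the lead-field matrix L and all sizes are invented for the example.

import numpy as np

rng = np.random.default_rng(1)

# Toy forward model: sensors respond linearly to source amplitudes (lead field L).
n_sensors, n_sources = 64, 8
L = rng.standard_normal((n_sensors, n_sources))
q_true = rng.standard_normal(n_sources)                  # known solution of the test case
b = L @ q_true + 0.01 * rng.standard_normal(n_sensors)   # noisy "measurements"

def misfit(q):
    return np.linalg.norm(L @ q - b)

# Random-search heuristic: keep perturbations that reduce the misfit,
# slowly shrinking the search radius.
q = np.zeros(n_sources)
best = misfit(q)
step = 1.0
for it in range(20000):
    trial = q + step * rng.standard_normal(n_sources)
    f = misfit(trial)
    if f < best:
        q, best = trial, f
    step *= 0.9997

print("relative error vs known solution:",
      np.linalg.norm(q - q_true) / np.linalg.norm(q_true))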
Abstract:
Since the epoch-making "memoir" of Saint-Venant in 1855, the torsion of prismatic and cylindrical bars has been reduced to a mathematical problem: the calculation of an analytical function satisfying prescribed boundary values. For over a century, until the first applications of the F.E.M. to the problem, the only possibilities of study in irregularly shaped domains were the beautiful but limited theory of complex function analysis, several functional approaches, and the finite difference method. Nevertheless, in 1963 Jaswon published an interesting paper which was nearly lost amid the splendid F.E.M. boom. The method was extended by Rizzo to more complicated problems and definitively incorporated into the scientific community's background through several lecture notes by Cruse, recently published but widely circulated in past years. The work of several researchers has shown the tremendous possibilities of the method, which is today a recognized alternative to the well-established F.E. procedure. In fact, the first comprehensive attempt to cover the method has recently been published in textbook form. This paper is a contribution to the treatment of a difficulty which arises when the isoparametric element concept is applied to plane potential problems with sharp corners on the boundary. In previous works, this problem was avoided using two main approximations: equating the fluxes around the corner, or establishing a binode element (in effect, truncating the corner). The first approximation heavily distorts the solution in the corner neighbourhood, and a large number of elements is necessary to reduce its influence. The second is better suited, but the price paid is an increase in the size of the system of equations to be solved. In this paper an alternative formulation, consistent with the shape function chosen in the isoparametric representation, is presented. For ease of comprehension the formulation has been limited to the linear element; nevertheless, its extension to more refined elements is straightforward. Also, a direct procedure for assembling the equations is presented in an attempt to reduce in-core computer requirements.
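The corner difficulty can be stated compactly (notation assumed here): at a boundary corner the potential is single-valued but the outward normal jumps, so the normal flux has two distinct one-sided limits,

\[
u^{-} = u^{+} = u_{c},
\qquad
q^{-} = \frac{\partial u}{\partial n^{-}}\bigg|_{c} \;\neq\; q^{+} = \frac{\partial u}{\partial n^{+}}\bigg|_{c},
\]

hence a corner node carries one potential unknown but two flux unknowns. Equating q⁻ and q⁺ (the first approximation mentioned above) is generally inconsistent, while duplicating the node (the binode element) restores consistency at the cost of a larger system; the formulation presented in the paper aims at consistency with the linear shape functions while avoiding those drawbacks.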
Abstract:
This paper presents the rationale for building up a Telematics Engineering curriculum. Telematics is a strongly computing-oriented area; hence, the authors initially intended to apply the common requirements described in the computing curricula elaborated by the ACM/IEEE-CS Joint Curriculum Task Force. This experience revealed some problematic aspects of the ACM/IEEE-CS proposal. From the analysis of these problems, a model is proposed to guide the selection and, especially, the approach of the Telematics curriculum contents. This model can easily be generalized to other strongly computing-oriented curricula, whose number is growing every day.
Abstract:
The objective of this study was to propose a multi-criteria optimization and decision-making technique to solve food engineering problems. This technique was demonstrated using experimental data obtained on the osmotic dehydration of carrot cubes in a sodium chloride solution. The Aggregating Functions Approach, the Adaptive Random Search Algorithm, and the Penalty Functions Approach were used in this study to compute the initial set of non-dominated or Pareto-optimal solutions. Multiple non-linear regression analysis was performed on a set of experimental data in order to obtain particular multi-objective functions (responses), namely water loss, solute gain, rehydration ratio, three different colour criteria of the rehydrated product, and sensory evaluation (organoleptic quality). Two multi-criteria decision-making approaches, the Analytic Hierarchy Process (AHP) and the Tabular Method (TM), were used simultaneously to choose the best alternative among the set of non-dominated solutions. The multi-criteria optimization and decision-making technique proposed in this study can facilitate the assessment of criteria weights, giving rise to a fairer, more consistent and adequate final compromise solution or food process. This technique can be useful to food scientists in research and education, as well as to engineers involved in the improvement of a variety of food engineering processes.
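As a hypothetical illustration of the first step of such techniques, the extraction of the non-dominated (Pareto-optimal) set from a collection of candidate solutions, assuming all objectives have been expressed as quantities to minimize:

import numpy as np

def pareto_front(points):
    # Return the non-dominated rows of 'points' (all objectives minimized).
    # A point is dominated if another point is <= in every objective and
    # strictly < in at least one.
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(pts) if j != i
        )
        if not dominated:
            keep.append(i)
    return pts[keep]

# Toy example with two criteria: water loss (to be maximized, hence negated)
# and solute gain (to be minimized). Numbers are invented for illustration.
candidates = [(-8.1, 2.3), (-7.5, 1.9), (-8.1, 2.0), (-6.0, 2.5)]
print(pareto_front(candidates))   # keeps (-7.5, 1.9) and (-8.1, 2.0)

Weighting schemes such as AHP then act only on this reduced set to select the final compromise solution.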
Abstract:
This paper presents solutions of the NURISP VVER lattice benchmark using APOLLO2, TRIPOLI4 and COBAYA3 pin-by-pin. The main objective is to validate MOC-based calculation schemes for pin-by-pin cross-section generation with APOLLO2 against TRIPOLI4 reference results. A specific objective is to test the APOLLO2-generated cross-sections and interface discontinuity factors in COBAYA3 pin-by-pin calculations with unstructured mesh. The VVER-1000 core consists of large hexagonal assemblies with 2 mm inter-assembly water gaps, which require the use of unstructured meshes in the pin-by-pin core simulators. The considered 2D benchmark problems include 19-pin clusters, fuel assemblies and 7-assembly clusters. APOLLO2 calculation schemes with the step-characteristic method (MOC) and the higher-order Linear Surface MOC have been tested. The comparison of APOLLO2 vs. TRIPOLI4 results shows very close agreement. The 3D lattice solver in COBAYA3 uses a transport-corrected multi-group diffusion approximation with interface discontinuity factors of GET or Black Box Homogenization type. The COBAYA3 pin-by-pin results in 2, 4 and 8 energy groups are close to the reference solutions when using side-dependent interface discontinuity factors.
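For context, a schematic statement of the GET-type interface discontinuity factor idea used in such pin-by-pin diffusion solvers (notation assumed here): side-dependent factors f_s^± are defined from a heterogeneous transport reference so that the homogenized flux may be discontinuous at each interface s,

\[
f_{s}^{\pm} = \frac{\phi_{s}^{\mathrm{het}}}{\phi_{s}^{\mathrm{hom},\pm}},
\qquad
f_{s}^{+}\,\phi_{s}^{\mathrm{hom},+} = f_{s}^{-}\,\phi_{s}^{\mathrm{hom},-},
\]

together with continuity of the net current, so that the homogenized diffusion calculation reproduces the reference surface fluxes and currents of the heterogeneous problem.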
Abstract:
The uncertainties in the isotopic composition throughout burnup due to nuclear data uncertainties are analysed. The different sources of uncertainty (decay data, fission yields and cross sections) are propagated individually, and their effects assessed. Two applications are studied: EFIT (an ADS-like reactor) and the ESFR (European Sodium Fast Reactor). The impacts of the cross-section uncertainties provided by the EAF-2010, SCALE6.1 and COMMARA-2.0 libraries are compared. These Uncertainty Quantification (UQ) studies have been carried out with a Monte Carlo sampling approach implemented in the depletion/activation code ACAB. This implementation has been improved to handle depletion/activation problems with variations of the neutron spectrum.
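A minimal sketch of the Monte Carlo sampling idea (not the ACAB implementation): sample the uncertain nuclear data from an assumed distribution, run the depletion calculation once per sample, and take statistics of the resulting inventories. A one-nuclide model with a single uncertain capture cross section stands in for the full problem; all numbers are invented for illustration.

import numpy as np

rng = np.random.default_rng(2)

# Toy depletion: one nuclide burned by capture, dN/dt = -sigma * phi * N.
phi = 1.0e14            # neutron flux (n/cm^2/s), assumed constant
t = 3.15e7              # irradiation time: about one year, in seconds
sigma_mean = 1.0e-24    # capture cross section (cm^2), nominal value
rel_unc = 0.05          # assumed 5% relative (1-sigma) uncertainty
N0 = 1.0                # initial inventory (normalized)

n_samples = 10000
sigma = rng.normal(sigma_mean, rel_unc * sigma_mean, n_samples)
N_end = N0 * np.exp(-sigma * phi * t)   # analytic solution, one value per sample

print("mean final inventory:", N_end.mean())
print("relative uncertainty on inventory: %.1f %%"
      % (100 * N_end.std() / N_end.mean()))

Even this toy case shows the amplification mechanism: a 5% uncertainty on the cross section produces a roughly 16% spread on the end-of-irradiation inventory here, because the data uncertainty is integrated over the whole burnup history.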
Abstract:
After the extensive research on the capabilities of the Boundary Integral Equation Method carried out in recent years, the versatility of its applications is well established. The years to come will probably see the in-depth analysis of several contentious points, for example adaptive integration and the solution of the system of equations; this line is clear in academic research. In this paper we comment on the influence of the manner in which boundary conditions are imposed in 3-D coupled problems. Here the effects are particularly magnified: in the first place by the simple model used (constant elements), and secondly by the solution process, i.e. first a potential problem is solved and then the results are used as data for an elasticity problem. The errors of the two processes add up, and small disturbances, unimportant in the separate problems, can produce serious errors in the final results. The specific problem we have chosen is especially interesting. Although more general cases (i.e. transient) can be treated, here the domain integrals can be converted into boundary ones, and the influence of the manner in which boundary conditions are applied will reflect the whole importance of the problem.
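As a schematic illustration of the kind of sequential coupling described (the paper does not specify the physics; a thermoelastic-type coupling with potential θ is assumed here purely for illustration): first a potential problem is solved, then its solution enters the elasticity problem as data,

\[
\nabla^{2}\theta = 0 \ \text{in } \Omega
\quad\longrightarrow\quad
\sigma_{ij,j} = 0,
\qquad
\sigma_{ij} = \lambda\,\varepsilon_{kk}\,\delta_{ij} + 2\mu\,\varepsilon_{ij} - \beta\,\theta\,\delta_{ij},
\]

so any perturbation δθ inherited from the first solve, for instance from the way boundary conditions were imposed, enters the stresses directly as -β δθ δ_ij, which is why the errors of the two stages accumulate in the final results.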