947 results for "made in"


Relevance: 70.00%

Abstract:

From 1950 through 1960, studies on the glacial geology of northern Greenland were made in cooperation with the U.S. Air Force Cambridge Research Laboratories. As a result of these studies, four distinct phases of the latest glaciation have been recognized. The last glaciation extended over most of the land and removed traces of previous ones. Retreat of the ice mass began some time before 6000 years ago. This was followed by a rise in sea level, which deposited clay-silt succeeded by kame gravels around stagnant ice lobes in the large valleys. Marine terraces, up to 129 meters above present sea level, developed as readjustment occurred in the land freed of ice. About 3700 years ago an advance of glaciers down the major fjords took place, followed by retreat to approximately the present position of the ice. Till in Peary Land, north of Frederick E. Hyde Fjord, contains only locally derived materials, indicating that the central Greenland ice cap did not cover the area.

Relevance: 70.00%

Abstract:

About 100 parallel determinations of hydrogen sulfide by the volumetric and photometric methods were made in the layer of coexistence of oxygen and hydrogen sulfide (the C layer). Thiosulfates were determined simultaneously. Regardless of station location, the determinations by the two methods coincided over the entire range of depths of the C layer's upper boundary. Within the C layer, the hydrogen sulfide readings obtained by these two independent methods agreed, and thiosulfates were not found by direct measurement. A difference in the readings appears at the lower boundary of the C layer and below it, accompanied by the appearance of thiosulfates. It is therefore concluded that it is correct to determine the upper boundary of the C layer by the iodometric method, and to use the hydrogen sulfide concentrations obtained by this method within the C layer to calculate the rate of chemical oxidation of hydrogen sulfide in quasistationary processes.

Relevance: 70.00%

Abstract:

The Norwegian spring spawning (NSS) herring is an ecologically important fish stock in the Norwegian Sea, and with a catch volume exceeding one million tons a year it is also economically important and a valuable food source. In order to provide a baseline of the levels of contaminants in this fish stock, the levels of organohalogen compounds were determined in 800 individual herring sampled at 29 positions in the Norwegian Sea and off the coast of Norway. Due to seasonal migration, the herring were sampled wherever they were located during the different seasons. Concentrations of dioxins and dioxin-like PCBs, non-dioxin-like PCBs (PCB7) and PBDEs were determined in fillet samples of individual herring and found to be relatively low, with means (min-max) of 0.77 (0.24-3.5) ng TEQ/kg wet weight (ww), 5.0 (1.4-24) µg/kg ww and 0.47 (0.091-3.1) µg/kg ww, respectively. The concentrations varied throughout the year with the feeding and spawning cycle: starved, pre-spawning herring caught off the Norwegian coast in January-February had the highest levels, and those caught in the Norwegian Sea in April-June, after further starvation and spawning, had the lowest levels. These results show that the concentrations of organohalogen compounds in NSS herring are relatively low and closely tied to their physiological condition, and that future regular monitoring of NSS herring should be carried out in the spawning areas off the Norwegian coast in late winter.

Relevance: 70.00%

Abstract:

Determinations of dissolved organic carbon and salinity were made in a region of the subtropical convergence in the southern tropical waters of the Indian Ocean. The vertical distribution of dissolved organic carbon, considered together with salinity, is shown to reflect the subsidence of water.

Relevance: 70.00%

Abstract:

The dataset is based on samples collected in the autumn of 2001 in the Western Black Sea off the Bulgarian coast. The whole dataset comprises 42 samples (from 19 stations of the National Monitoring Grid) with data on mesozooplankton species composition, abundance and biomass. Samples were collected in the layers 0-10, 0-20, 0-50, 10-25, 25-50 and 50-100 m, and from the bottom up to the surface, at depths depending on water column stratification and thermocline depth. Zooplankton samples were collected with a vertically closing Juday net (mouth diameter 36 cm, mesh size 150 µm). Tows were performed from the surface down to near-bottom depths in discrete layers. Samples were preserved in a 4% formaldehyde sea water buffered solution. Sampling volume was estimated by multiplying the mouth area by the wire length.
Mesozooplankton and taxon-specific abundance: The collected material was analysed using the method of Dimov (1959). Samples were brought to a volume of 25-30 ml, depending on zooplankton density, and mixed intensively until all organisms were distributed randomly in the sample volume. A 5 ml aliquot was then taken and poured into a rectangular counting chamber for taxonomic identification and counting. Copepods and cladocerans were identified and enumerated to species; the other mesozooplankters were identified and enumerated at a higher taxonomic level (commonly referred to as mesozooplankton groups). Large (> 1 mm body length) and scarce species were counted in the whole sample. Counting and measuring of organisms were done in the Dimov chamber under a stereomicroscope to the lowest taxon possible. Taxonomic identification was done at the Institute of Oceanology by Kremena Stefanova using the relevant taxonomic literature (Mordukhay-Boltovskoy, F.D. (Ed.), 1968, 1969, 1972).
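The volume and abundance arithmetic described above (filtered volume = mouth area × wire length; an aliquot count scaled to the whole concentrate and normalised by the filtered volume) can be sketched as follows. This is an illustrative sketch only; the function names and example counts are not from the dataset:

```python
import math

def filtered_volume_m3(mouth_diameter_m, wire_length_m):
    """Filtered volume = net mouth area x towed wire length (as in the text)."""
    mouth_area = math.pi * (mouth_diameter_m / 2.0) ** 2
    return mouth_area * wire_length_m

def abundance_ind_m3(count_in_aliquot, aliquot_ml, concentrate_ml, volume_m3):
    """Scale an aliquot count to the whole sample, then to individuals per m**3."""
    whole_sample_count = count_in_aliquot * concentrate_ml / aliquot_ml
    return whole_sample_count / volume_m3

# Hypothetical example: 36 cm Juday net towed over 50 m of wire;
# 120 copepods counted in a 5 ml aliquot of a 25 ml concentrate.
v = filtered_volume_m3(0.36, 50.0)                    # ~5.09 m**3
print(round(abundance_ind_m3(120, 5.0, 25.0, v), 1))  # ~117.9 ind/m**3
```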

Relevance: 70.00%

Abstract:

The "CoMSBlack92" dataset is based on samples collected in the summer of 1992 along the Bulgarian coast, including coastal and open-sea areas. The whole dataset comprises 79 samples (28 stations) with data on zooplankton species composition, abundance and biomass. Sampling for zooplankton was performed from the bottom up to the surface at standard depths depending on water column stratification and thermocline depth. Zooplankton samples were collected with a vertically closing Juday net (mouth diameter 36 cm, mesh size 150 µm). Tows were performed from the surface down to near-bottom depths in discrete layers. Samples were preserved in a 4% formaldehyde sea water buffered solution. Sampling volume was estimated by multiplying the mouth area by the wire length. The collected material was analysed using the method of Dimov (1959). Samples were brought to a volume of 25-30 ml, depending on zooplankton density, and mixed intensively until all organisms were distributed randomly in the sample volume. A 5 ml aliquot was then taken and poured into a rectangular counting chamber for taxonomic identification and counting. Copepods and cladocerans were identified and enumerated to species; the other mesozooplankters were identified and enumerated at a higher taxonomic level (commonly referred to as mesozooplankton groups). Large (> 1 mm body length) and scarce species were counted in the whole sample. Counting and measuring of organisms were done in the Dimov chamber under a stereomicroscope to the lowest taxon possible. Taxonomic identification was done at the Institute of Oceanology by Asen Konsulov using the relevant taxonomic literature (Mordukhay-Boltovskoy, F.D. (Ed.), 1968, 1969, 1972). The biomass was estimated as wet weight after Petipa (1959), using the standard average weight of each species in mg/m**3. Wet-weight values were converted to dry weight using the equation DW = 0.16*WW, as suggested by Vinogradov & Shushkina (1987).
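The biomass estimation described above (wet weight from species-specific standard weights after Petipa, 1959, then DW = 0.16·WW after Vinogradov & Shushkina, 1987) amounts to the following arithmetic. The species weights below are made-up placeholders, not Petipa's published values:

```python
# Hypothetical species-specific wet weights in mg per individual -- placeholders,
# NOT the standard weights of Petipa (1959).
WET_WEIGHT_MG = {"Acartia clausi": 0.008, "Calanus euxinus": 0.25}

def biomass_ww(abundance_ind_m3):
    """Wet-weight biomass (mg/m**3) = sum over taxa of abundance x standard weight."""
    return sum(n * WET_WEIGHT_MG[sp] for sp, n in abundance_ind_m3.items())

def biomass_dw(ww_mg_m3):
    """Dry weight via DW = 0.16 * WW (Vinogradov & Shushkina, 1987)."""
    return 0.16 * ww_mg_m3

ww = biomass_ww({"Acartia clausi": 1000.0, "Calanus euxinus": 100.0})
print(round(ww, 3), round(biomass_dw(ww), 3))  # 33.0 5.28
```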

Relevance: 70.00%

Abstract:

The "Hydroblack91" dataset is based on samples collected in the summer of 1991 and covers part of the north-western Black Sea off the Romanian coast and the western Black Sea off the Bulgarian coast (between 43°30'-42°10' N and 28°40'-31°45' E). Mesozooplankton sampling was undertaken at 20 stations. The whole dataset comprises 72 samples with data on zooplankton species composition, abundance and biomass. Samples were collected in the discrete layers 0-10, 0-20, 0-50, 10-25, 25-50 and 50-100 m, and from the bottom up to the surface, at depths depending on water column stratification and thermocline depth. Zooplankton samples were collected with a vertically closing Juday net (mouth diameter 36 cm, mesh size 150 µm). Tows were performed from the surface down to near-bottom depths in discrete layers. Samples were preserved in a 4% formaldehyde sea water buffered solution. Sampling volume was estimated by multiplying the mouth area by the wire length. Mesozooplankton and taxon-specific abundance: The collected material was analysed using the method of Dimov (1959). Samples were brought to a volume of 25-30 ml, depending on zooplankton density, and mixed intensively until all organisms were distributed randomly in the sample volume. A 5 ml aliquot was then taken and poured into a rectangular counting chamber for taxonomic identification and counting. Copepods and cladocerans were identified and enumerated to species; the other mesozooplankters were identified and enumerated at a higher taxonomic level (commonly referred to as mesozooplankton groups). Large (> 1 mm body length) and scarce species were counted in the whole sample. Counting and measuring of organisms were done in the Dimov chamber under a stereomicroscope to the lowest taxon possible. Taxonomic identification was done at the Institute of Oceanology by Asen Konsulov using the relevant taxonomic literature (Mordukhay-Boltovskoy, F.D. (Ed.), 1968, 1969, 1972). The biomass was estimated as wet weight after Petipa (1959), using the standard average weight of each species in mg/m**3. Wet-weight values were converted to dry weight using the equation DW = 0.16*WW (Vinogradov & Shushkina, 1987).

Relevance: 70.00%

Abstract:

The present dataset includes results of the analysis of 227 zooplankton samples taken in and off Sevastopol Bay in the Black Sea in 1976, 1979-1980, 1989-1990, 1995-1996 and 2002-2003. Exact coordinates for stations 1, 4, 5 and 6 are unknown and were estimated using Google Earth. Data on the ctenophores Mnemiopsis leidyi and Beroe ovata are not included. Juday net: vertical tows of a Juday net with a mouth area of 0.1 m**2 and a mesh size of 150 µm. Tows were performed in layers at a towing speed of about 0.5 m/s. Samples were preserved in a 4% formaldehyde sea water buffered solution. Sampling volume was estimated by multiplying the mouth area by the wire length. The collected material was analysed using the method of portions (Yashnov, 1939). Samples were brought to a volume of 50-100 ml, depending on zooplankton density, and mixed intensively until all organisms were distributed randomly in the sample volume. A 1 ml subsample was then taken with a calibrated Stempel pipette; this operation was performed twice. If the divergence between the two examined subsamples exceeded 30%, one more subsample was examined. Large (> 1 mm body length) and scarce species were counted in 1/2, 1/4, 1/8, 1/16 or 1/32 of the sample. Counting and measuring of organisms were done in the Bogorov chamber under a stereomicroscope to the lowest taxon possible. The number of organisms per sample was calculated as the simple average of the two subsample counts multiplied by the sample-to-subsample volume ratio. Total mesozooplankton abundance was calculated as the sum of the taxon-specific abundances, and total copepod abundance as the sum of the copepod taxon-specific abundances.
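The counting arithmetic of the method of portions described above (two 1 ml aliquots, a third if they diverge by more than 30%, then the mean scaled by the concentrate-to-aliquot volume ratio) can be sketched as follows; the counts are illustrative, not from the dataset:

```python
def divergence(a, b):
    """Relative divergence between two subsample counts."""
    return abs(a - b) / max(a, b)

def count_per_sample(subsample_counts, concentrate_ml, aliquot_ml=1.0):
    """Mean aliquot count scaled by the concentrate-to-aliquot volume ratio
    (method of portions, Yashnov, 1939)."""
    mean = sum(subsample_counts) / len(subsample_counts)
    return mean * concentrate_ml / aliquot_ml

# Two 1 ml Stempel-pipette aliquots from a 50 ml concentrate (illustrative counts):
counts = [42, 38]
if divergence(*counts) > 0.30:   # text: examine a third aliquot if divergence > 30%
    counts.append(40)            # hypothetical third count
print(count_per_sample(counts, concentrate_ml=50.0))  # 2000.0 organisms per sample
```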

Relevance: 70.00%

Abstract:

The formation of a subsurface anticyclonic eddy in the Peru-Chile Undercurrent (PCUC) in January and February 2013 is investigated using a multi-platform, four-dimensional observational approach. Research vessel, multiple glider and mooring-based measurements were conducted in the Peruvian upwelling regime near 12°30'S. The dataset consists of more than 10000 glider profiles and repeated vessel-based hydrography and velocity transects. It allows a detailed description of the eddy formation and its impact on the near-coastal salinity, oxygen and nutrient distributions. In early January, a strong PCUC with maximum poleward velocities of ca. 0.25 m/s at 100 to 200 m depth was observed. Starting on January 20, a subsurface anticyclonic eddy developed in the PCUC downstream of a topographic bend, suggesting flow separation as the eddy formation mechanism. The eddy core waters exhibited oxygen concentrations of less than 1 µmol/kg, an elevated nitrogen deficit of ca. 17 µmol/l and potential vorticity close to zero, and seemed to originate from the bottom boundary layer of the continental slope. The eddy-induced across-shelf velocities resulted in an elevated exchange of water masses between the upper continental slope and the open ocean. Small-scale salinity and oxygen structures were formed by along-isopycnal stirring, and indications of eddy-driven oxygen ventilation of the upper oxygen minimum zone were observed. It is concluded that mesoscale stirring of solutes and the offshore transport of eddy core properties could provide an important coastal-open-ocean exchange mechanism with potentially large implications for nutrient budgets and biogeochemical cycling in the oxygen minimum zone off Peru.
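The statement that the eddy core has potential vorticity close to zero can be illustrated with a simplified estimate. This is a minimal sketch assuming the common approximation q ≈ (f + ζ)·N², with N² = −(g/ρ₀)·dρ/dz, not the authors' actual computation; the profiles and the value of ζ are invented for illustration:

```python
import numpy as np

def potential_vorticity(rho, z, f=-3.2e-5, zeta=0.0, rho0=1025.0, g=9.81):
    """Simplified Ertel PV, q = (f + zeta) * N^2, with N^2 = -(g/rho0) * d(rho)/dz.
    z is positive upward; f < 0 in the southern hemisphere (~12.5 deg S here)."""
    n2 = -(g / rho0) * np.gradient(rho, z)
    return (f + zeta) * n2

z = np.linspace(-400.0, -100.0, 31)   # depth range of the subsurface core (m)
rho_mixed = np.full_like(z, 1026.5)   # homogeneous (well-mixed) eddy core
rho_strat = 1026.5 - 0.001 * z        # weakly stratified surroundings

print(np.abs(potential_vorticity(rho_mixed, z)).max())  # 0.0: mixed core -> PV ~ 0
print(np.all(potential_vorticity(rho_strat, z) < 0.0))  # True: f < 0 and N^2 > 0
```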

Relevance: 70.00%

Abstract:

This paper examines the sustainability of the recent high growth of the poultry meat industry in Brazil and assesses the impact of increased production of poultry meat products on the development of local industries. Comparative studies of leading companies in the United States, Mexico, and Brazil reveal competitive advantages for the Brazilian poultry sector in the low costs of feedstuff and labor, as well as disadvantages in business scale and management efficiency. Increases in domestic and foreign demand for Brazilian poultry meat have promoted the development of the Brazilian poultry sector in local areas. The formation of industrial clusters is observed using regional data on the location of slaughterhouses and the number of chickens farmed. Statistical analyses support the observations made in this paper.

Relevance: 70.00%

Abstract:

On January 1, 2005, the controlled trade regime on textiles and clothing based on the Multi-Fiber Arrangement (MFA) of 1974 was abolished. This institutional change had a major impact on the world market for textiles and clothing. This paper reviews the impact of the change on the main markets and examines the prospects for the markets and the source countries. The main conclusions are as follows: (1) after the renewal of quantitative restrictions on Chinese garment exports was agreed with the US and the EU, the post-MFA surge in Chinese garment exports was significantly attenuated; (2) instead, the growth in garment exports from other Asian low-income countries to the two markets revived in 2006; (3) the Japanese market has remained almost untouched by the regime shift; (4) some developing countries, such as Bangladesh and Cambodia, not only survived the liberalization but also steadily expanded their garment exports throughout the transition; and (5) an indicative fact is that the profitability of the garment industry in Bangladesh and Cambodia was high on average according to surveys conducted in 2003, which might have bolstered the steady growth of garment exports in the past, and possibly future growth, too.

Relevance: 70.00%

Abstract:

Over roughly the last ten years, the investigation of construction techniques and materials in ancient Rome has advanced considerably. This work has been directed at obtaining data on chemical composition, as well as on the behaviour of materials under weathering and post-depositional displacement. Much of this data should be interpreted in terms of the deterioration and damage of concrete produced in a particular landscape with particular meteorological characteristics. Concrete mixtures such as lime and gypsum mortars should be analysed in laboratory test programmes, and not only through descriptions based on the reference works of Strabo, Pliny the Elder or Vitruvius. Roman manufacture was determined by weather conditions, landscape, natural resources and, of course, the economic situation of the owner. In any case, every aspect of construction must be researched. On the one hand, chemical techniques such as X-ray diffraction and optical microscopy reveal the granular composition of the mixture. On the other hand, physical and mechanical tests such as compressive strength, capillary absorption on contact or water behaviour reveal how the binder and aggregates react to weathering. However, we must be able to interpret these results. In recent years, many analyses carried out at archaeological sites in Spain have contributed different points of view and provided new data with which to refine a method for the continued investigation of Roman mortars. If chemical and physical analyses of Roman mortars are carried out together, and the construction and the resources used are properly interpreted, it becomes possible to understand the construction process, its date, and also how restoration should proceed in the future.

Relevance: 70.00%

Abstract:

Separating programs into modules is a well-known technique which has proven very useful in program development and maintenance. Starting by introducing a number of possible scenarios, in this paper we study different issues which appear when developing analysis and specialization techniques for modular logic programming. We discuss a number of design alternatives and their consequences for the different scenarios considered and describe where applicable the decisions made in the Ciao system analyzer and specializer. In our discussion we use the module system of Ciao Prolog. This is both for concreteness and because Ciao Prolog is a second-generation Prolog system which has been designed with global analysis and specialization in mind, and which has a strict module system. The aim of this work is not to provide a theoretical basis on modular analysis and specialization, but rather to discuss some interesting practical issues.

Relevance: 70.00%

Abstract:

Nowadays, Computational Fluid Dynamics (CFD) solvers are widely used within industry to model fluid flow phenomena. Several fluid flow model equations have been employed over the last decades to simulate and predict the forces acting, for example, on different aircraft configurations. Computational time and accuracy depend strongly on the fluid flow model equation and the spatial dimension of the problem considered. While simple models based on perfect flows, like panel methods or potential flow models, can be very fast to solve, they usually suffer from poor accuracy when simulating real (transonic, viscous) flows. On the other hand, more complex models such as the full Navier-Stokes equations provide high-fidelity predictions, but at a much higher computational cost. Thus, a good compromise between accuracy and computational time has to be fixed for engineering applications. A discretisation technique widely used within industry is the so-called Finite Volume approach on unstructured meshes. This technique spatially discretises the flow motion equations onto a set of elements which form a mesh, a discrete representation of the continuous domain. Using this approach, for a given flow model equation, the accuracy and computational time mainly depend on the distribution of the nodes forming the mesh. Therefore, a good compromise between accuracy and computational time might be obtained by carefully defining the mesh. However, defining an optimal mesh for complex flows and geometries requires a very high level of expertise in fluid mechanics and numerical analysis, and in most cases simply guessing which regions of the computational domain most affect the accuracy is impossible. Thus, it is desirable to have an automated remeshing tool, which is more flexible with unstructured meshes than its structured counterpart. However, the adaptive methods currently in use still leave an open question: how to drive the adaptation efficiently?
Pioneering sensors based on flow features generally suffer from a lack of reliability, so in the last decade more effort has been put into developing numerical-error-based sensors, such as adjoint-based adaptation sensors. While very efficient at adapting meshes for a given functional output, the latter method is very expensive, as it requires solving a dual set of equations and computing the sensor on an embedded mesh. Therefore, it would be desirable to develop a more affordable numerical error estimation method. The current work aims at estimating the truncation error, which arises when discretising a partial differential equation; it consists of the higher-order terms neglected in the construction of the numerical scheme. The truncation error provides very useful information, as it is strongly related to the flow model equation and its discretisation. On the one hand, it is a very reliable measure of the quality of the mesh, and therefore very useful for driving a mesh adaptation procedure. On the other hand, it is strongly linked to the flow model equation, so that a careful estimation actually gives information on how well a given equation is solved, which may be useful in the context of τ-extrapolation or zonal modelling. The work is organized as follows: Chap. 1 contains a short review of mesh adaptation techniques as well as numerical error prediction. In the first section, Sec. 1.1, the basic refinement strategies are reviewed and the main contributions to structured and unstructured mesh adaptation are presented. Sec. 1.2 introduces the definitions of the errors encountered when solving Computational Fluid Dynamics problems and reviews the most common approaches to predicting them. Chap. 2 is devoted to the mathematical formulation of truncation error estimation in the context of the finite volume methodology, as well as a complete verification procedure.
Several features are studied, such as the influence of grid non-uniformities, non-linearity, boundary conditions and non-converged numerical solutions. This verification part has been submitted and accepted for publication in the Journal of Computational Physics. Chap. 3 presents a mesh adaptation algorithm based on truncation error estimates and compares the results to a feature-based and an adjoint-based sensor (in collaboration with Jorge Ponsín, INTA). Two- and three-dimensional cases relevant for validation in the aeronautical industry are considered. This part has been submitted and accepted in the AIAA Journal. An extension to the Reynolds-Averaged Navier-Stokes equations is also included, where τ-estimation-based mesh adaptation and τ-extrapolation are applied to viscous wing profiles. The latter has been submitted to the Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering. Keywords: mesh adaptation, numerical error prediction, finite volume
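The τ-estimation idea discussed above (estimating the truncation error by applying the discrete operator to a solution restricted from a finer grid) can be illustrated on a one-dimensional model problem. This is an illustrative sketch, not the thesis's implementation: the manufactured solution sin(πx) stands in for a converged fine-grid solution, and a second-order central scheme for u'' = s plays the role of the finite volume operator:

```python
import numpy as np

def laplacian(u, h):
    """Second-order central approximation of u'' at interior nodes."""
    return (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2

def tau_estimate(n_coarse):
    """Estimate the coarse-grid truncation error for u'' = s with u = sin(pi x)."""
    x_fine = np.linspace(0.0, 1.0, 2 * n_coarse - 1)  # grid with half the spacing
    u_fine = np.sin(np.pi * x_fine)     # stand-in for a converged fine solution
    u_coarse = u_fine[::2]              # restriction to the coarse grid
    x_coarse = x_fine[::2]
    h = x_coarse[1] - x_coarse[0]
    s = -np.pi**2 * np.sin(np.pi * x_coarse[1:-1])    # source term of u'' = s
    # tau = coarse operator applied to the restricted solution, minus the source.
    return laplacian(u_coarse, h) - s

ratio = np.abs(tau_estimate(17)).max() / np.abs(tau_estimate(33)).max()
print(round(ratio, 1))  # ~4.0: halving h cuts tau by ~2**2 for a second-order scheme
```

The estimate is large where the scheme resolves the solution poorly, which is exactly the information a truncation-error-driven adaptation sensor exploits.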

Relevance: 70.00%

Abstract:

The HiPER project, phase 4a, is evolving. In this study we present the progress made in the field of neutronics and radiological protection for an integrated design of the facility. In the current model, we take into account the optical systems inside the target bay, as well as the remote handling requirements and related infrastructure, together with different shields. The latest reference irradiation scenario, consisting of 20 MJ of neutron yield, 5 yields per burst, one burst every week and 30 years of expected lifetime, is considered for this study. We have characterized the behavior of the dose rates in the facility, both during operation and between bursts. The dose rates are computed for workers, with regard to maintenance and handling, and for the optical systems, with regard to damage. Furthermore, we have performed a waste management assessment of all the components inside the target bay. The results indicate that remote maintenance is mandatory in some areas. The small beam penetrations in the shields are responsible for high doses at some specific locations. With regard to optics, the residual doses are as high as the prompt doses. It is found that the whole target bay may be fully managed as waste within 30 years by recycling and/or clearance, with no need for burial.