5 results for Bulk segregant analysis
at Universidad Politécnica de Madrid
Abstract:
Analysis of wave attenuation by a cargo ship acting as a floating breakwater, and its application to two cases of harbour and coastal protection. The effectiveness of a bulk carrier working as a detached floating breakwater to protect a stretch of coast and form salients or tombolos is assessed in this paper. Experiments were conducted in the Madrid CEDEX facilities in a 30 m long, 3 m wide, 1/150 scale flume. The bulk carrier ship is 205 m long, 29 m wide and 18 m in height with a draught of 13 m, and was subjected to irregular waves with significant heights from 2 m to 4 m and peak periods from 6 s to 12 s at a depth of 15 m, all prototype dimensions. Three probes were placed between the wave paddle and the ship to record incident and reflected waves, and four probes were placed between the ship and the coastline to measure the transmitted waves. Transmission, reflection and dissipation coefficients (Ct, Cr, Cd) were calculated to determine wave attenuation. Results show good shelter in the lee of the ship, with values of Ct under 0.5 for peak periods from 6 s to 11 s. In addition, forces on the mooring chains were measured, showing maximum values of about 2000 tons at a 10 s peak period. Finally, two analytical models were used to determine the shoreline's response to the ship's protection and to assess the possible formation of salients or tombolos. According to the results, salients - but not tombolos - are formed in all tests.
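For orientation, these coefficients are commonly defined from the incident, reflected and transmitted significant wave heights, with the dissipation coefficient following from an energy balance (Ct² + Cr² + Cd² = 1). The sketch below is a minimal illustration of that standard definition using assumed wave heights, not the measured data from the study:

```python
import math

def wave_coefficients(h_incident, h_reflected, h_transmitted):
    """Transmission, reflection and dissipation coefficients from
    significant wave heights (standard definitions; illustrative only)."""
    ct = h_transmitted / h_incident          # transmission coefficient
    cr = h_reflected / h_incident            # reflection coefficient
    # Dissipation from the wave-energy balance Ct^2 + Cr^2 + Cd^2 = 1
    cd = math.sqrt(max(0.0, 1.0 - ct**2 - cr**2))
    return ct, cr, cd

# Assumed prototype-scale heights in metres (not values from the experiments)
ct, cr, cd = wave_coefficients(h_incident=3.0, h_reflected=1.2, h_transmitted=1.4)
print(f"Ct={ct:.2f}  Cr={cr:.2f}  Cd={cd:.2f}")
```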
Abstract:
NASA's tether experiment ProSEDS will be placed in orbit on board a Delta-II rocket in early 2003. ProSEDS will test bare-tether electron collection, deorbiting of the rocket second stage, and the system's dynamic stability. ProSEDS performance will vary both because ambient conditions change along the orbit and because tether-circuit parameters follow a step-by-step sequence in the current operating cycle. In this work we discuss how measurements of tether current and bias, plasma density, and deorbiting rate can be used to check the OML law for current collection. We review circuit bulk elements; characteristic lengths and energies that determine collection (tether radius, electron thermal gyroradius and Debye length, particle temperatures, tether bias, ion ram energy); and lengths determining current and bias profiles along the tether (extent of magnetic self-field, a length gauging ohmic versus collection impedances, tether length). The analysis serves the purpose of estimating ProSEDS behavior in orbit and fostering our ability to extrapolate ProSEDS flight data to different tether and environmental conditions.
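As background for the current-collection check, the OML (orbital-motion-limited) law for a thin cylindrical collector at a bias well above the electron temperature is commonly written as dI/dL = 2 r_w e n_e √(2 e ΔV / m_e). The sketch below evaluates this textbook high-bias form with assumed values; the plasma density, tether radius and local bias are illustrative, not ProSEDS parameters:

```python
import math

E_CHARGE = 1.602e-19    # electron charge, C
M_ELECTRON = 9.109e-31  # electron mass, kg

def oml_current_per_length(n_e, r_w, v_bias):
    """OML electron current per unit tether length (A/m) for a thin
    cylinder at local bias v_bias >> kT_e/e (textbook high-bias limit)."""
    return 2.0 * r_w * E_CHARGE * n_e * math.sqrt(2.0 * E_CHARGE * v_bias / M_ELECTRON)

# Assumed example: n_e = 1e11 m^-3, tether radius 0.6 mm, 100 V local bias
di_dl = oml_current_per_length(n_e=1e11, r_w=0.6e-3, v_bias=100.0)
print(f"dI/dL ~ {di_dl * 1e3:.2f} mA/m")
```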
Abstract:
Background Gray scale images make up the bulk of data in bio-medical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever-improving acquisition devices, spatial and temporal image resolution increases, and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high-level programming languages or visual programming. These frameworks are also accessible to researchers that have little or no background in software development because they take care of otherwise complex tasks. Specifically, the management of working memory is handled automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to using these high-level processing tools is the development of new algorithms in a language like C++, which gives the developer full control over how memory is handled, but the resulting workflow for prototyping new algorithms is rather time-intensive and also not appropriate for a researcher with little or no knowledge of software development. Another alternative is using command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation through shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only few tools exist that provide this kind of processing interface; they are usually quite task-specific, and they don't provide a clear approach for shaping a new command line tool from a prototype shell script. Results The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that makes it possible to run image processing tasks interactively in a command shell and to prototype using the corresponding shell scripting language. Since the hard disk serves as temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers, and the like, the transition from shell scripts to full-fledged programs implemented in C++ is also made easy. In addition, its design based on atomic plug-ins and single-task command line tools makes it easy to extend MIA, usually without the requirement to touch or recompile existing code. Conclusion In this article, we describe the general design of MIA, a general purpose framework for gray scale image processing. We demonstrate the applicability of the software with example applications from three different research scenarios, namely motion compensation in myocardial perfusion imaging, the processing of high resolution image data that arises in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms by using shell scripts that combine small, single-task command line tools is a viable alternative to the use of high-level languages, an approach that is especially useful when large data sets need to be processed.
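As an illustration of the string-based description idea, a hypothetical "name:key=value" syntax (placeholder filter names and syntax, not MIA's actual plug-in API) can be parsed into callable pipeline stages, so the same strings can drive a shell prototype and a compiled program alike:

```python
from typing import Callable, Dict

# Hypothetical registry of single-task filters (illustrative, not MIA's plug-ins)
FILTERS: Dict[str, Callable] = {
    "invert": lambda img, **kw: [255 - p for p in img],
    "scale": lambda img, factor="1.0", **kw: [p * float(factor) for p in img],
}

def parse_filter(description: str) -> Callable:
    """Turn a string like 'scale:factor=0.5' into a ready-to-call filter."""
    name, _, params = description.partition(":")
    kwargs = dict(p.split("=", 1) for p in params.split(",") if p)
    return lambda img: FILTERS[name](img, **kwargs)

# A 'pipeline' is just a list of strings, equally usable from a shell script
pipeline = [parse_filter(s) for s in ["invert", "scale:factor=0.5"]]
image = [0, 64, 128, 255]           # toy gray-scale data
for stage in pipeline:
    image = stage(image)
print(image)                         # [127.5, 95.5, 63.5, 0.0]
```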
Abstract:
In this study two YBa2Cu3O7−δ bulk superconductors were evaluated, with the aim of analyzing the influence of the processing method (TSMG and Bridgman) and the test temperature on their mechanical behavior. The relationship between their mechanical properties and fracture micromechanisms has also been studied. Both materials were tested at room and at service temperature. TPB tests were carried out to determine their mechanical behavior, strength and toughness. Moreover, one of the two materials, characterized by transversal microstructural anisotropy, was tested in two directions. The hardness of both materials was studied at the nano and micro scales. The results show that the mechanical behavior of the materials is controlled by the defects and cracks introduced during processing. A good degree of agreement was found between the crack-like defects detected by means of SEM and those inferred from the fracture-mechanics analysis of the experimental data.
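For context, a standard linear-elastic fracture mechanics relation, σ_f = K_IC / (Y √(π a)), links strength, toughness and defect size; solving it for a gives the flaw size consistent with a measured strength. The sketch below assumes this textbook relation and illustrative values of K_IC and σ_f, not the data reported in the study:

```python
import math

def critical_flaw_size(k_ic, sigma_f, geometry_factor=1.12):
    """Critical crack length a_c (m) from sigma_f = K_IC / (Y * sqrt(pi * a))."""
    return (k_ic / (geometry_factor * sigma_f)) ** 2 / math.pi

# Assumed values for a brittle bulk ceramic (illustrative, not measured):
# K_IC = 1.5 MPa*sqrt(m), flexural strength 40 MPa
a_c = critical_flaw_size(k_ic=1.5e6, sigma_f=40e6)
print(f"a_c ~ {a_c * 1e6:.0f} micrometres")
```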
Abstract:
An important aspect of Process Simulators for photovoltaics is prediction of defect evolution during device fabrication. Over the last twenty years, these tools have accelerated process optimization, and several Process Simulators for iron, a ubiquitous and deleterious impurity in silicon, have been developed. The diversity of these tools can make it difficult to build intuition about the physics governing iron behavior during processing. Thus, in one unified software environment and using self-consistent terminology, we combine and describe three of these Simulators. We vary structural defect distribution and iron precipitation equations to create eight distinct Models, which we then use to simulate different stages of processing. We find that the structural defect distribution influences the final interstitial iron concentration ([Fe-i]) more strongly than the iron precipitation equations. We identify two regimes of iron behavior: (1) diffusivity-limited, in which iron evolution is kinetically limited and bulk [Fe-i] predictions can vary by an order of magnitude or more, and (2) solubility-limited, in which iron evolution is near thermodynamic equilibrium and the Models yield similar results. This rigorous analysis provides new intuition that can inform Process Simulation, material, and process development, and it enables scientists and engineers to choose an appropriate level of Model complexity based on wafer type and quality, processing conditions, and available computation time.
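A simple way to see where a given process step falls between these two regimes is to compare the Fe_i diffusion length √(Dt) with the characteristic spacing of precipitation sites: a short diffusion length points to diffusivity-limited behavior, a long one to near-equilibrium, solubility-limited behavior. The sketch below uses a commonly cited Arrhenius fit for Fe_i diffusivity in silicon (assumed here: D ≈ 1.3×10⁻³ exp(−0.68 eV/kT) cm²/s) together with assumed process steps:

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def fe_diffusion_length(temp_c, time_s, d0_cm2_s=1.3e-3, ea_ev=0.68):
    """Fe_i diffusion length sqrt(D*t) in micrometres for one process step.
    Arrhenius parameters are a commonly cited literature fit (assumed)."""
    temp_k = temp_c + 273.15
    d = d0_cm2_s * math.exp(-ea_ev / (K_B * temp_k))   # cm^2/s
    return math.sqrt(d * time_s) * 1e4                 # cm -> micrometres

# Assumed examples: a 30 min anneal at 850 C vs a 5 s firing spike at 800 C
for temp_c, time_s in [(850, 1800), (800, 5)]:
    l_um = fe_diffusion_length(temp_c, time_s)
    print(f"{temp_c} C, {time_s:>4} s: sqrt(Dt) ~ {l_um:.0f} um")
```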