26 results for static recrystallization

at Universidad Politécnica de Madrid


Relevance: 20.00%

Abstract:

The thermal annealing of amorphous tracks of nanometer-size diameter generated in lithium niobate (LiNbO3) by bromine ions at 45 MeV, i.e., in the electronic stopping regime, has been investigated by RBS/C spectrometry in the temperature range from 250°C to 350°C. Relatively low fluences have been used (<10¹² cm⁻²) to produce isolated tracks. However, the possible effect of track overlapping has been investigated by varying the fluence between 3×10¹¹ cm⁻² and 10¹² cm⁻². The annealing process follows two-step kinetics. In the first stage (I), the track radius decreases linearly with annealing time and obeys an Arrhenius-type dependence on annealing temperature with an activation energy of around 1.5 eV. The second stage (II) operates after the track radius has decreased to around 2.5 nm and shows a much lower radial velocity. The data for stage I appear consistent with a solid-phase epitaxial process that yields a constant recrystallization rate at the amorphous-crystalline boundary. HRTEM has been used to monitor the existence and size of the annealed isolated tracks in the second stage. On the other hand, the thermal annealing of homogeneous (buried) amorphous layers has been investigated within the same temperature range, on samples irradiated with fluorine at 20 MeV and fluences of ∼10¹⁴ cm⁻². Optical techniques are very suitable for this case and have been used to monitor the recrystallization of the layers. The annealing process induces a displacement of the crystalline-amorphous boundary that is also linear with annealing time, and the recrystallization rates are consistent with those measured for tracks. The comparison of these data with those previously obtained for the heavily damaged (amorphous) layers produced by elastic nuclear collisions is briefly discussed.
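A quick order-of-magnitude sketch of what the reported Arrhenius dependence implies across this temperature range (the prefactor r0 is an arbitrary illustrative value, not a measured quantity):

```python
import math

# Arrhenius rate r(T) = r0 * exp(-Ea / (kB * T)) for the stage-I
# recrystallization velocity; r0 is an arbitrary illustrative prefactor.
kB = 8.617e-5                 # Boltzmann constant (eV/K)
Ea = 1.5                      # activation energy from stage I (eV)

def rate(T_celsius, r0=1.0):
    T = T_celsius + 273.15
    return r0 * math.exp(-Ea / (kB * T))

ratio = rate(350) / rate(250)
print(f"rate(350 C) / rate(250 C) = {ratio:.0f}")   # ~2e2: strongly thermally activated
```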

Relevance: 20.00%

Abstract:

The standard UNE-EN 13374, "Temporary edge protection systems. Product specification, test methods" (1), classifies temporary edge protection systems (SPPB, by their Spanish acronym) into three classes (A, B and C) according to the angle of the working surface and the fall height of the person to be protected. Class A systems are indicated when the inclination of the working surface is less than 10°. The standard establishes deflection and strength requirements for these systems, which can be verified both analytically and experimentally. The aim of this work was to evaluate the behaviour of the edge protection systems commonly used on construction sites and to establish the changes needed for them to comply with UNE-EN 13374. To this end, three class A systems made of S235 steel were evaluated analytically and experimentally. The results show that the system commonly used on site does not meet the requirements of the standard, either analytically or experimentally. The third system meets the requirements under both evaluation methods. The second system meets the requirements when evaluated analytically, but not experimentally.
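As a sketch of the kind of analytical check involved, a guardrail post can be modeled as a cantilever under a horizontal point load at the tip; the load and section values below are illustrative assumptions for the example, not figures taken from UNE-EN 13374:

```python
# Illustrative analytical deflection check: a guardrail post modeled as
# a cantilever of height L under a point load F at the tip. All numeric
# values are assumptions for the example.

F = 300.0          # horizontal load (N)
L = 1.0            # post height (m)
E = 210e9          # Young's modulus of S235 steel (Pa)
I = 8.5e-8         # second moment of area of the post section (m^4)

deflection = F * L**3 / (3 * E * I)   # classic cantilever tip deflection
print(f"tip deflection = {deflection * 1000:.1f} mm")
```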

Relevance: 20.00%

Abstract:

A Monte Carlo computer simulation technique, in which a continuum system is modeled on a discrete lattice, has been applied to the problem of recrystallization. Primary recrystallization is modeled under conditions where the degree of stored energy is varied and nucleation occurs homogeneously (without regard for position in the microstructure). The nucleation rate is chosen as site saturated. Temporal evolution of the simulated microstructures is analyzed to provide the time dependence of the recrystallized volume fraction and grain sizes. The recrystallized volume fraction shows sigmoidal variation with time. The data are approximately fit by the Johnson-Mehl-Avrami equation with the expected exponents; however, significant deviations are observed for both small and large recrystallized volume fractions. Under constant-rate nucleation conditions, the propensity for irregular grain shapes is decreased and the density of two-sided grains increases.
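For reference, the Johnson-Mehl-Avrami form is X(t) = 1 − exp(−k tⁿ), with n ≈ 2 expected for site-saturated nucleation in two dimensions. A minimal sketch in this spirit (not the paper's Potts-model code; the lattice size, nucleus count and von Neumann growth rule are assumptions):

```python
import numpy as np

# Site-saturated nucleation on a periodic 2D lattice: all nuclei are
# present at t = 0, each grain boundary advances one site per step, and
# the recrystallized fraction X(t) is compared with the JMAK form.

rng = np.random.default_rng(0)
L, n_nuclei = 256, 40
grid = np.zeros((L, L), dtype=int)          # 0 = unrecrystallized
ys, xs = rng.integers(0, L, n_nuclei), rng.integers(0, L, n_nuclei)
grid[ys, xs] = np.arange(1, n_nuclei + 1)   # site-saturated nucleation

fractions = []
while (grid == 0).any():
    new = grid.copy()
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        shifted = np.roll(grid, (dy, dx), axis=(0, 1))
        grow = (new == 0) & (shifted > 0)
        new[grow] = shifted[grow]           # boundary advances one site/step
    grid = new
    fractions.append((grid > 0).mean())

# Avrami plot: slope of ln(-ln(1-X)) vs ln(t) estimates the exponent n;
# the tails are excluded, mirroring the deviations reported above.
X = np.clip(np.array(fractions), 1e-9, 1 - 1e-9)
t = np.arange(1, len(X) + 1)
mask = (X > 0.05) & (X < 0.95)
n_est = np.polyfit(np.log(t[mask]), np.log(-np.log(1 - X[mask])), 1)[0]
print(f"estimated Avrami exponent n = {n_est:.2f}")   # expect n ~ 2
```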

Relevance: 20.00%

Abstract:

Synthetic Aperture Radar (SAR) images a target region's reflectivity function in the multi-dimensional spatial domain of range and cross-range, with a finer azimuth resolution than that provided by any on-board real antenna. Conventional SAR techniques assume a single reflection of the transmitted waveforms from targets. Nevertheless, new uses of Unmanned Aerial Vehicles (UAVs) for civilian-security applications force SAR systems to work in much more complex scenes, such as urban environments. Consequently, multiple-bounce returns are superposed on direct-scatter echoes. They are known as ghost images, since they obscure the true target image and degrade resolution. This can pose a significant problem in applications related to surveillance and security. In this work, an innovative multipath mitigation technique is presented in which the Time Reversal (TR) concept is applied to SAR images when the target is concealed in clutter, leading to the TR-SAR technique. This way, the effect of multipath is considerably reduced, or even removed, recovering the resolution lost to multipath propagation. Furthermore, focusing indicators such as entropy (E), contrast (C) and Rényi entropy (RE) provide a good focusing criterion when using TR-SAR.
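The three focusing indicators can be computed directly from the normalized image intensity; a minimal sketch using common definitions (the paper's exact definitions may differ, and the Rényi order alpha = 3 is an arbitrary choice):

```python
import numpy as np

def focusing_indicators(img, alpha=3.0):
    """Entropy, contrast and Renyi entropy of a (complex) SAR image.
    Lower entropy / higher contrast usually indicates better focusing."""
    intensity = np.abs(img) ** 2
    p = intensity / intensity.sum()        # normalized intensity distribution
    p_nz = p[p > 0]
    entropy = -(p_nz * np.log(p_nz)).sum()
    contrast = intensity.std() / intensity.mean()
    renyi = np.log((p_nz ** alpha).sum()) / (1.0 - alpha)
    return entropy, contrast, renyi

# Sanity check: a single bright point (well focused) vs. a uniform image.
focused = np.zeros((64, 64)); focused[32, 32] = 1.0
uniform = np.ones((64, 64)) / 64
print(focusing_indicators(focused))   # low entropy, high contrast
print(focusing_indicators(uniform))   # high entropy, zero contrast
```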

Relevance: 20.00%

Abstract:

Effective static analyses have been proposed which infer bounds on the number of resolutions. These have the advantage of being independent of the platform on which the programs are executed, and have been shown to be useful in a number of applications, such as granularity control in parallel execution. On the other hand, in distributed computation scenarios where platforms with different capabilities come into play, it is necessary to express costs in metrics that include the characteristics of the platform. In particular, it is especially interesting to be able to infer upper and lower bounds on actual execution times. With this objective in mind, we propose an approach which combines compile-time analysis for cost bounds with a one-time profiling of a given platform in order to determine the values of certain parameters for that platform. These parameters calibrate a cost model which, from then on, is able to statically compute time-bound functions for procedures and to predict with a significant degree of accuracy the execution times of such procedures on that concrete platform. The approach has been implemented and integrated in the CiaoPP system.
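The general idea admits a compact illustration. The following sketch is not the CiaoPP implementation; the two-parameter linear model, the function names and the benchmark are all invented for the example:

```python
import time
import numpy as np

# Sketch of platform calibration for a cost model: static analysis gives
# a bound on the number of resolution steps; a one-time profiling run
# estimates per-step time on this platform, calibrating the model
#   t_bound(n) = alpha * steps_bound(n) + beta.

def steps_bound(n):
    return n + 1     # e.g. a statically inferred bound for a traversal

def benchmark(n):
    xs = list(range(n))
    t0 = time.perf_counter()
    s = 0
    for x in xs:     # stand-in for executing n resolution steps
        s += x
    return time.perf_counter() - t0

# One-time profiling: fit (alpha, beta) by least squares on a few sizes.
sizes = [10_000, 50_000, 100_000, 500_000]
A = np.array([[steps_bound(n), 1.0] for n in sizes])
y = np.array([benchmark(n) for n in sizes])
alpha, beta = np.linalg.lstsq(A, y, rcond=None)[0]

# From then on, execution-time bounds are computed statically:
n = 250_000
print(f"predicted time bound: {alpha * steps_bound(n) + beta:.4f} s")
```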

Relevance: 20.00%

Abstract:

Modeling the evolution of the state of program memory during program execution is critical to many parallelization techniques. Current memory analysis techniques either provide very accurate information but run prohibitively slowly, or produce very conservative results. An approach based on abstract interpretation is presented for analyzing programs at compile time, which can accurately determine many important program properties such as aliasing, logical data structures and shape. These properties are known to be critical for transforming a single-threaded program into a version that can be run on multiple execution units in parallel. The analysis is shown to be of polynomial complexity in the size of the memory heap. Experimental results for benchmarks in the Jolden suite are given. These results show that in practice the analysis method is efficient and is capable of accurately determining shape information in programs that create and manipulate complex data structures.
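As a toy illustration of the flavor of such a heap abstraction (not the paper's analysis or its abstract domain; the allocation-site nodes and the classification rules are invented for the example):

```python
from collections import defaultdict

# Each allocation site becomes one abstract node; "next" edges between
# sites summarize all concrete pointers. A site whose next-chain is
# acyclic and unshared can be classified as a list; a cycle forces a
# conservative answer, ruling out naive parallel traversal.

points_to = defaultdict(set)          # abstract node -> abstract targets

def assign_field(src_site, dst_site):
    """Abstract transfer function for `src.next = dst`."""
    points_to[src_site].add(dst_site)

# Program: A: x = new(); B: y = new(); x.next = y; y.next = x
assign_field("A", "B")
assign_field("B", "A")

def shape(site, seen=()):
    if site in seen:
        return "cyclic"
    succs = points_to[site]
    if not succs:
        return "list"
    if len(succs) > 1:
        return "dag-or-worse"
    (nxt,) = succs
    return shape(nxt, seen + (site,))

print(shape("A"))   # -> 'cyclic': unsafe to parallelize traversals
```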

Relevance: 20.00%

Abstract:

Predicting statically the running time of programs has many applications, ranging from task scheduling in parallel execution to proving the ability of a program to meet strict time constraints. A starting point for attacking this problem is to infer the computational complexity of such programs (or fragments thereof). This is one of the reasons why the development of static analysis techniques for inferring cost-related properties of programs (usually upper and/or lower bounds on actual costs) has received considerable attention.

Relevance: 20.00%

Abstract:

We propose a general framework for assertion-based debugging of constraint logic programs. Assertions are linguistic constructions for expressing properties of programs. We define several assertion schemas for writing (partial) specifications for constraint logic programs using quite general properties, including user-defined programs. The framework is aimed at detecting deviations of the program behavior (symptoms) with respect to the given assertions, either at compile-time (i.e., statically) or run-time (i.e., dynamically). We provide techniques for using information from global analysis both to detect at compile-time assertions which do not hold in at least one of the possible executions (i.e., static symptoms) and assertions which hold for all possible executions (i.e., statically proved assertions). We also provide program transformations which introduce tests in the program for checking at run-time those assertions whose status cannot be determined at compile-time. Both the static and the dynamic checking are provably safe in the sense that all errors flagged are definite violations of the specifications. Finally, we report briefly on the currently implemented instances of the generic framework.
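As an illustration of the run-time checking transformation, here is a minimal sketch, written in Python rather than a constraint logic language, in which pre/postconditions play the role of calls/success assertions; all names are invented for the example:

```python
# A decorator that instruments a function with run-time tests, analogous
# to the transformation applied to assertions whose status the static
# analysis could not determine at compile-time.

def check(pre, post):
    def wrap(f):
        def g(*args):
            assert pre(*args), f"calls assertion violated for {f.__name__}"
            out = f(*args)
            assert post(*args, out), f"success assertion violated for {f.__name__}"
            return out
        return g
    return wrap

@check(pre=lambda xs: all(isinstance(x, int) for x in xs),
       post=lambda xs, ys: ys == sorted(xs))
def my_sort(xs):
    return sorted(xs)

my_sort([3, 1, 2])   # both checks pass silently at run-time
```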

Relevance: 20.00%

Abstract:

Effective static analyses have been proposed which infer bounds on the number of resolutions or reductions. These have the advantage of being independent of the platform on which the programs are executed, and have been shown to be useful in a number of applications, such as granularity control in parallel execution. On the other hand, in distributed computation scenarios where platforms with different capabilities come into play, it is necessary to express costs in metrics that include the characteristics of the platform. In particular, it is especially interesting to be able to infer upper and lower bounds on actual execution times. With this objective in mind, we propose an approach which combines compile-time analysis for cost bounds with a one-time profiling of the platform in order to determine the values of certain parameters for a given platform. These parameters calibrate a cost model which, from then on, is able to statically compute time-bound functions for procedures and to predict with a significant degree of accuracy the execution times of such procedures on the given platform. The approach has been implemented and integrated in the CiaoPP system.

Relevance: 20.00%

Abstract:

Effective static analyses have been proposed which allow inferring functions that bound the number of resolutions or reductions. These have the advantage of being independent of the platform on which the programs are executed, and such bounds have been shown useful in a number of applications, such as granularity control in parallel execution. On the other hand, in certain distributed computation scenarios where different platforms come into play, each with different capabilities, it is more interesting to express costs in metrics that include the characteristics of the platform. In particular, it is especially interesting to be able to infer upper and lower bounds on actual execution time. With this objective in mind, we propose a method which allows inferring upper and lower bounds on the execution times of the procedures of a program on a given execution platform. The approach combines compile-time cost bounds analysis with a one-time profiling of the platform in order to determine the values of certain constants for that platform. These constants calibrate a cost model which, from then on, is able to statically compute time-bound functions for procedures and to predict with a significant degree of accuracy the execution times of such procedures on the given platform. The approach has been implemented and integrated in the CiaoPP system.

Relevance: 20.00%

Abstract:

The use of seismic hysteretic dampers for passive control has been increasing rapidly in recent years, for both new and existing buildings. In order to utilize hysteretic dampers within a structural system, it is of paramount importance to have simplified design procedures based upon knowledge gained from theoretical studies and validated with experimental results. Non-linear Static Procedures (NSPs) are presented as an alternative to the force-based methods more common nowadays. The application of NSPs to conventional structures is well established, yet there is a lack of experimental information on how NSPs apply to systems with hysteretic dampers. In this research, several shaking table tests were conducted on two single-bay, single-story 1:2 scale structures with and without hysteretic dampers. The maximum response of the structure with dampers, in terms of lateral displacement and base shear, obtained from the tests was compared with the predictions of three well-known NSPs: (1) the improved version of the Capacity Spectrum Method (CSM) from FEMA 440; (2) the improved version of the Displacement Coefficient Method (DCM) from FEMA 440; and (3) the N2 Method implemented in Eurocode 8. In general, the improved versions of the DCM and the N2 method provide acceptable accuracy in prediction, but the CSM tends to underestimate the response.
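For orientation, a worked sketch of the improved DCM estimate of the target displacement, δt = C0·C1·C2·Sa·(Te/2π)²·g, using the FEMA 440 coefficient forms; all numeric inputs below are illustrative assumptions, not the test structures of this study:

```python
import math

# Improved DCM (FEMA 440) target-displacement estimate with
# illustrative inputs.
Te = 0.4      # effective period (s)
Sa = 0.8      # spectral acceleration at Te (g)
R = 2.5       # strength ratio
C0 = 1.0      # SDOF-to-roof conversion (single-story frame)
a = 90.0      # site-class constant (class C in FEMA 440)

C1 = 1 + (R - 1) / (a * Te**2)          # inelastic displacement ratio
C2 = 1 + ((R - 1) / Te) ** 2 / 800      # cyclic degradation factor
g = 9.81
delta_t = C0 * C1 * C2 * Sa * g * (Te / (2 * math.pi)) ** 2
print(f"target displacement ~ {delta_t * 1000:.1f} mm")
```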

Relevance: 20.00%

Abstract:

Algorithms for distributed agreement are a powerful means for formulating distributed versions of existing centralized algorithms. We present a toolkit for this task and show how it can be used systematically to design fully distributed algorithms for static linear Gaussian models, including principal component analysis, factor analysis, and probabilistic principal component analysis. These algorithms do not rely on a fusion center, require only low-volume local (1-hop neighborhood) communications, and are thus efficient, scalable, and robust. We show how they are also guaranteed to asymptotically converge to the same solution as the corresponding existing centralized algorithms. Finally, we illustrate the functioning of our algorithms on two examples, and examine the inherent cost-performance tradeoff.
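A minimal sketch of this pattern (the ring topology, problem sizes and averaging weights are illustrative assumptions): nodes agree on a sufficient statistic by local averaging, then each runs the centralized algorithm on the result:

```python
import numpy as np

# Fully distributed PCA without a fusion center: each node holds only
# its local second-moment matrix; repeated 1-hop averaging (a consensus
# iteration) drives every node's estimate to the network-wide average.

rng = np.random.default_rng(1)
n_nodes, d, n_local = 8, 3, 50
data = [rng.normal(size=(n_local, d)) @ np.diag([3.0, 1.0, 0.3])
        for _ in range(n_nodes)]
S = [x.T @ x / n_local for x in data]   # local sufficient statistics

# Ring topology: average with the two neighbors at each step.
for _ in range(200):
    S = [(S[i - 1] + S[i] + S[(i + 1) % n_nodes]) / 3
         for i in range(n_nodes)]

# After convergence every node extracts the same principal components,
# matching what the centralized algorithm would compute.
eigvals, eigvecs = np.linalg.eigh(S[0])
print("leading component:", eigvecs[:, -1].round(3))
```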

Relevance: 20.00%

Abstract:

An AZ31 rolled sheet alloy has been tested at dynamic strain rates at 250 °C, up to various intermediate strains before failure, in order to investigate the predominant deformation and restoration mechanisms. In particular, tests have been carried out in compression along the rolling direction (RD), in tension along the RD, and in compression along the normal direction (ND). It has been found that dynamic recrystallization (DRX) takes place despite the limited diffusion occurring at the high strain rates investigated. The DRX mechanisms and kinetics depend on the operative deformation mechanisms and thus vary for different loading modes (tension, compression) as well as for different relative orientations between the loading axis and the c-axes of the grains. In particular, DRX is enhanced by the operation of 〈c + a〉 slip, since cross-slip and climb take place more readily than for other slip systems, and thus the formation of high-angle boundaries is easier. DRX is also clearly promoted by twinning.

Relevance: 20.00%

Abstract:

A pressure wave is generated when a high-speed train enters a tunnel. This wave travels back and forth along the tunnel and is reflected at the irregularities of the tunnel duct (section changes, shafts and tunnel ends). The pressure changes associated with these waves can affect passengers if the trains are not suitably sealed or pressurized. The intensity of the waves depends mainly on the train speed and on the blockage ratio (train-section-to-tunnel-section area ratio). Since the intensity of the waves is limited by regulations, and also by the effects on passengers and infrastructure, the sizing of the tunnel section area is largely influenced by the maximum train speed allowed in the tunnel. The aim of this study is to analyse the increase in tunnel cost due to this difference in ground level and to evaluate the additional construction costs that such an elevation might involve.
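A common first-order estimate connects these quantities (a low-Mach, one-dimensional acoustic approximation, not the study's model; all numbers are illustrative):

```python
# Back-of-the-envelope amplitude of the initial compression wave: the
# entering train pushes tunnel air at u ~ U * beta / (1 - beta), and the
# acoustic wavefront carries dp ~ rho * c * u.

rho, c = 1.2, 340.0          # air density (kg/m^3), sound speed (m/s)
U = 300 / 3.6                # train speed: 300 km/h in m/s
beta = 0.12                  # blockage ratio (train area / tunnel area)

u = U * beta / (1 - beta)    # induced air velocity ahead of the train
dp = rho * c * u             # wavefront pressure rise (Pa)
print(f"initial wave amplitude ~ {dp / 1000:.1f} kPa")   # a few kPa
```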