943 results for Crash Predictions
Abstract:
Progress in simulating chevron nozzle jet flows using ILES and RANS-ILES approaches, with the Ffowcs Williams and Hawkings (FW-H) surface integral method used to predict the radiated far-field sound, is presented in this paper. Focusing on the realistic chevron geometries SMC001 and SMC006, coarse and fine meshes are generated in the range of 3 to 13 million cells. Throughout this work, hexahedral cells are used to minimize the numerical dissipation introduced by mesh quality issues. Numerical simulations are then carried out with cell-vertex and cell-centered codes. Despite the modest grids, mean velocities and turbulence statistics are found to be in reasonable accord with measurements, and the far-field sound levels predicted by the FW-H post-processor are encouraging.
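As context for the acoustic analogy named in this abstract: the FW-H approach rearranges the flow equations into a wave equation forced by surface and volume source terms, and the far field is recovered by integrating over a data surface. A standard impermeable-surface form (general background, not a statement of this paper's particular formulation) is

\[
\Box^2\!\left[p'\,H(f)\right]
= \frac{\partial}{\partial t}\!\left[\rho_0 v_n\,\delta(f)\right]
- \frac{\partial}{\partial x_i}\!\left[p\,n_i\,\delta(f)\right]
+ \frac{\partial^2}{\partial x_i\,\partial x_j}\!\left[T_{ij}\,H(f)\right],
\qquad
\Box^2 \equiv \frac{1}{c_0^2}\frac{\partial^2}{\partial t^2} - \nabla^2,
\]

where f = 0 defines the integration surface, v_n is the local normal velocity, p the surface pressure, T_{ij} the Lighthill stress tensor, H the Heaviside function, and delta the Dirac delta. Evaluating the surface integrals from the simulated near field is the role of the "FW-H post-processor" mentioned above.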
Abstract:
We describe a first-principles-based strategy for predicting the macroscopic toughness of a gamma-Ni(Al)/alpha-Al2O3 interface. Density functional theory calculations are used to ascertain the energy changes upon displacing the two materials adjacent to the interface, with relaxation conducted over all atoms located within the adjoining rows. Traction/displacement curves are obtained from derivatives of the energy. Calculations are performed in mode I (opening), in mode II (shear), and at a phase angle of 45 degrees. The shear calculations are conducted for displacements along <110> and <112> of the Ni lattice. A generalized interface potential function is used to characterize the results. Initial fitting to both the shear and normal stress results is required to calibrate the unknowns; thereafter, consistency is established by using the potential to predict other traction quantities. The potential is incorporated as a traction/displacement function within a cohesive zone model and used to predict the steady-state toughness of the interface. For this purpose, the plasticity of the Ni alloy must be known, including the plasticity length scale. Measurements obtained for a gamma-Ni superalloy are used, and the toughness is predicted over the full range of mode mixity. Additional results for a range of alloys demonstrate the influences of yield strength and length scale.
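The abstract does not reproduce the interface potential itself; as general background for the traction/displacement construction it describes, a widely used universal-binding-energy form (Rose-Ferrante-Smith type) for the opening mode is

\[
E(\delta) = -E_0\left(1 + \frac{\delta}{\ell}\right)e^{-\delta/\ell},
\qquad
T_n(\delta) = \frac{\mathrm{d}E}{\mathrm{d}\delta}
            = \frac{E_0}{\ell}\,\frac{\delta}{\ell}\,e^{-\delta/\ell},
\]

so the peak traction E_0/(e ℓ) occurs at displacement δ = ℓ and the work of separation equals E_0. Fitting parameters such as E_0 and ℓ to the first-principles normal and shear data is the kind of calibration step the abstract refers to.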
Abstract:
The generalized liquid drop model (GLDM) and the cluster model have been employed to calculate the alpha-decay half-lives of superheavy nuclei (SHN) using experimental alpha-decay Q values. With experimental Q values, the results of the cluster model are slightly poorer than those from the GLDM. The predictive power of the two models with theoretical Q values from Audi et al. (Q(Audi)) and from Muntian et al. (Q(M)) has also been tested: the cluster model with Q(Audi) and Q(M) provides reliable results for Z > 112, whereas the GLDM with Q(Audi) does so for Z <= 112. The half-lives of some still-unknown nuclei are predicted by both models; these results may be useful for future experimental assignment and identification.
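As background for why the choice of Q values matters so much in such comparisons: alpha-decay half-lives depend exponentially on Q_alpha. Viola-Seaborg-type systematics (quoted here for illustration; these are not the GLDM or cluster-model formulas) take the form

\[
\log_{10} T_{1/2} = \frac{aZ + b}{\sqrt{Q_\alpha}} + cZ + d,
\]

so an error of a few hundred keV in a theoretical Q_alpha can shift the predicted half-life by an order of magnitude or more, which is why testing the models separately with Q(Audi) and Q(M) is decisive.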
Abstract:
Geoacoustic properties of the seabed play a controlling role in the propagation and reverberation of sound in shallow-water environments. Several techniques are available to quantify the important properties but are usually unable to adequately sample the region of interest. In this paper, we explore the potential for obtaining geotechnical properties from a process-based stratigraphic model. Grain-size predictions from the stratigraphic model are combined with two acoustic models to estimate sound speed with distance across the New Jersey continental shelf and with depth below the seabed. Model predictions are compared to two independent sets of data: 1) surficial sound speeds obtained through direct measurement using in situ compressional wave probes, and 2) sound speed as a function of depth obtained through inversion of seabed reflection measurements. In water depths less than 100 m, the model predictions produce a trend of decreasing grain size and sound speed with increasing water depth, as similarly observed in the measured surficial data. In water depths between 100 and 130 m, the model predictions exhibit an increase in sound speed that was not observed in the measured surficial data. A closer comparison indicates that the grain sizes predicted for the surficial sediments are generally too small, producing sound speeds that are too slow. The predicted sound speeds also tend to be too slow for sediments 0.5-20 m below the seabed in water depths greater than 100 m; in water depths less than 100 m, however, the sound speeds at 0.5-20-m subbottom depth are generally too fast. There are several reasons for these discrepancies, including that the stratigraphic model was limited to two dimensions, that it could not simulate the biologic processes responsible for the high-sound-speed shell material common in the model area, and that the geological records needed to accurately predict grain size are incomplete.
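The mapping from predicted grain size to sound speed in studies of this kind is typically an empirical regression. The sketch below shows the general shape of such a mapping in Python; the coefficients a0, a1, a2 are illustrative placeholders, not values from this paper or its two acoustic models.

```python
def sound_speed_from_grain_size(phi, a0=1950.0, a1=-86.0, a2=4.1):
    """Compressional sound speed (m/s) from mean grain size `phi` (phi units).

    Quadratic-in-phi regressions of this general form are common in
    geoacoustics; coarser sediment (smaller phi) gives faster sound,
    matching the shoreward trend described in the abstract. The default
    coefficients are placeholders for illustration only.
    """
    return a0 + a1 * phi + a2 * phi**2

# Example: a fine sand near phi = 2.5 versus a silt near phi = 5.5
print(sound_speed_from_grain_size(2.5))  # faster, coarser sediment
print(sound_speed_from_grain_size(5.5))  # slower, finer sediment
```

Under such a mapping, an underprediction of grain size (too-large phi) translates directly into the too-slow sound speeds the abstract reports.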
Abstract:
In the first part of this paper we show that a new technique exploiting 1D correlation of 2D, or even 1D, patches between successive frames may be sufficient to compute a satisfactory estimate of the optical flow field. The algorithm is well suited to VLSI implementations. The sparse measurements provided by the technique can be used to compute qualitative properties of the flow for a number of different visual tasks. In particular, the second part of the paper shows how to combine our 1D correlation technique with a scheme for detecting expansion or rotation ([5]) in a simple algorithm that also suggests interesting biological implications. The algorithm provides a rough estimate of time-to-crash. It was tested on real image sequences; we show its performance and compare the results to previous approaches.
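As a concrete illustration of the 1D-correlation idea (a minimal sketch, not the authors' VLSI-oriented algorithm), matching a small 1D patch against shifted patches along the same row of the next frame and keeping the best-scoring shift yields a sparse horizontal displacement estimate:

```python
import numpy as np

def row_flow_1d(f0, f1, row, half=4, max_shift=8):
    """Horizontal displacement along one row via 1D patch correlation.

    For each position x, compare a (2*half+1)-pixel patch from frame f0
    against horizontally shifted patches from frame f1 and keep the shift
    with the lowest sum-of-squared-differences. Returns one displacement
    per pixel (zero where the search window would leave the image).
    """
    width = f0.shape[1]
    flow = np.zeros(width)
    for x in range(half + max_shift, width - half - max_shift):
        patch = f0[row, x - half : x + half + 1].astype(float)
        shifts = range(-max_shift, max_shift + 1)
        errs = [np.sum((patch - f1[row, x + d - half : x + d + half + 1]) ** 2)
                for d in shifts]
        flow[x] = list(shifts)[int(np.argmin(errs))]
    return flow
```

Aggregating such sparse 1D measurements over many rows and columns is what lets qualitative flow properties such as expansion, and hence a rough time-to-crash, be read off without a dense 2D flow field.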
Abstract:
King, R. D., Wise, P. H., and Clare, A. (2004). Confirmation of Data Mining Based Predictions of Protein Function. Bioinformatics, 20(7), 1110-1118.
Abstract:
Barker, M., Arthurs, J., and Harindranath, R. (Eds.). (2001). Controversy: Censorship Campaigns and Film Reception. London: Wallflower Press.
Abstract:
This thesis argues that, through the prism of America's Cold War, scientism has emerged as the metanarrative of the postnuclear age. The advent of the bomb brought a new primacy for mechanical and hyperrational thinking in the corridors of power, not just in terms of managing the bomb itself but in the diffusion of this ideology throughout the culture, in the social sciences, economics, and other institutional systems. The human need to mitigate the chaos of the universe lies at the heart not just of religious faith but of the desire for perfect control. Thus there has been a transference of power from religious faith to the apparent material power of science and technology and the terra firma these supposedly objective means supply. The Cold War, however, was a highly ideologically charged opposition between the two superpowers, and the scientific methodology that sprang forth to manage the Cold War and the bomb in the United States was not an objective scientific system divorced from that paranoia and dogma but a system that assumed a radically fundamentalist idea of capitalism. This is apparent in the widespread diffusion of game theory throughout Western postindustrial institutions. The inquiry of the thesis thus examines texts that engage with and criticise American Cold War methodology, beginning with the nuclear moment, so to speak, and Dr Strangelove's incisive satire of moral abdication to machine processes. Moving on chronologically, the thesis examines the diffusion of particular kinds of masculinity and sexuality in postnuclear culture in Crash and End Zone, finishing its analysis with the ethnographic portrayal of a modern American city in The Wire. More than anything else, the thesis seeks to reveal to what extent this technocratic consciousness puts pressure on language and on binding narratives.
Abstract:
The binary A₈B phase (prototype Pt₈Ti) has been experimentally observed in 11 systems. A high-throughput search over all the binary transition-metal intermetallics, however, reveals 59 occurrences of the A₈B phase: Au₈Zn†, Cd₈Sc†, Cu₈Ni†, Cu₈Zn†, Hg₈La, Ir₈Os†, Ir₈Re, Ir₈Ru†, Ir₈Tc, Ir₈W†, Nb₈Os†, Nb₈Rh†, Nb₈Ru†, Nb₈Ta†, Ni₈Fe, Ni₈Mo†*, Ni₈Nb†*, Ni₈Ta*, Ni₈V*, Ni₈W, Pd₈Al†, Pd₈Fe, Pd₈Hf, Pd₈Mn, Pd₈Mo*, Pd₈Nb, Pd₈Sc, Pd₈Ta, Pd₈Ti, Pd₈V*, Pd₈W*, Pd₈Zn, Pd₈Zr, Pt₈Al†, Pt₈Cr*, Pt₈Hf, Pt₈Mn, Pt₈Mo, Pt₈Nb, Pt₈Rh†, Pt₈Sc, Pt₈Ta, Pt₈Ti*, Pt₈V*, Pt₈W, Pt₈Zr*, Rh₈Mo, Rh₈W, Ta₈Pd, Ta₈Pt, Ta₈Rh, V₈Cr†, V₈Fe†, V₈Ir†, V₈Ni†, V₈Pd, V₈Pt, V₈Rh, and V₈Ru† († = metastable, * = experimentally observed). This is surprising given the wealth of new occurrences that are predicted, especially in well-characterized systems (e.g., Cu-Zn). By verifying all experimental results while offering additional predictions, our study serves as a striking demonstration of the power of the high-throughput approach. The practicality of the method is demonstrated in the Rh-W system, where a cluster-expansion-based Monte Carlo model reveals a relatively high order-disorder transition temperature.
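To make the last step concrete, the sketch below is a deliberately minimal fixed-composition Metropolis Monte Carlo on a 2D lattice, with a single nearest-neighbour pair interaction J standing in for a fitted set of effective cluster interactions. It illustrates the cluster-expansion/Monte Carlo idea only; it is not the paper's Rh-W model, and its parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def site_energy(s, i, j, J):
    """Pair energy between site (i, j) and its four periodic neighbours."""
    L = s.shape[0]
    nn = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
          + s[i, (j + 1) % L] + s[i, (j - 1) % L])
    return -J * s[i, j] * nn

def run(L=24, J=-0.02, kT=0.01, sweeps=300, xB=1 / 9):
    """Fixed-composition (A8B-like) Metropolis sampling; returns the lattice.

    J < 0 in this convention favours unlike (A-B) neighbours, so the
    minority species spreads into an ordered arrangement at low kT.
    Energy units are arbitrary; kT is given in the same units.
    """
    s = np.ones((L, L), dtype=int)                       # +1 = A sites
    s.flat[rng.choice(L * L, int(xB * L * L), replace=False)] = -1  # -1 = B
    for _ in range(sweeps * L * L):
        i1, j1, i2, j2 = rng.integers(0, L, 4)
        if s[i1, j1] == s[i2, j2]:
            continue                                     # like-atom swap: no-op
        e_old = site_energy(s, i1, j1, J) + site_energy(s, i2, j2, J)
        s[i1, j1], s[i2, j2] = s[i2, j2], s[i1, j1]
        d_e = site_energy(s, i1, j1, J) + site_energy(s, i2, j2, J) - e_old
        if d_e > 0 and rng.random() >= np.exp(-d_e / kT):
            s[i1, j1], s[i2, j2] = s[i2, j2], s[i1, j1]  # reject: undo swap
    return s
```

Repeating such runs over a temperature ramp and watching an order parameter (for example, the minority-sublattice occupancy) collapse is the standard way an order-disorder transition temperature is located in this kind of model.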
Abstract:
The need for nuclear data far from the valley of stability, for applications such as nuclear astrophysics or future nuclear facilities, challenges the robustness as well as the predictive power of present nuclear models. Most nuclear data evaluation and prediction is still performed on the basis of phenomenological nuclear models. Over the last decades, important progress has been achieved in fundamental nuclear physics, making it feasible to use more reliable, but also more complex, microscopic or semi-microscopic models in the evaluation and prediction of nuclear data for practical applications. In the present contribution, the reliability and accuracy of recent nuclear theories are discussed for most of the relevant quantities needed to estimate reaction cross sections and beta-decay rates, namely nuclear masses, nuclear level densities, gamma-ray strength, fission properties, and beta-strength functions. It is shown that mean-field models can now be tuned to the same level of accuracy as the phenomenological models, renormalized on experimental data if needed, and can therefore replace the phenomenological inputs in the prediction of nuclear data. While fundamental nuclear physicists continue to improve state-of-the-art models, e.g. within the shell model or ab initio approaches, nuclear applications can make use of their most recent results as quantitative constraints or guides to improve predictions in energy or mass domains that will remain inaccessible experimentally.
Abstract:
In this article, the buildingEXODUS (V1.1) evacuation model is described and discussed and attempts at qualitative and quantitative model validation are presented. The data set used for the validation is the Tsukuba pavilion evacuation data. This data set is of particular interest as the evacuation was influenced by external conditions, namely inclement weather. As part of the validation exercise, the sensitivity of the buildingEXODUS predictions to a range of variables and conditions is examined, including: exit flow capacity, occupant response times, and the impact of external conditions on the developing evacuation. The buildingEXODUS evacuation model was found to produce good qualitative and quantitative agreement with the experimental data.
Abstract:
In this paper a continuum model for the prediction of segregation in granular material is presented. The numerical framework, a 3-D, unstructured-grid, finite-volume code, is described, and the micro-physical parametrizations used to describe the microscopic processes and interactions that lead to segregation are analysed. Numerical simulations and comparisons with experimental data are then presented, and conclusions are drawn on the capability of the model to accurately simulate the behaviour of granular matter during flow.
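The abstract does not state the governing equation; for orientation, continuum segregation models of this kind typically evolve the small-particle concentration phi with an advection-segregation-remixing balance (a generic Gray-Thornton-type form, not necessarily the parametrization used in this paper):

\[
\frac{\partial \phi}{\partial t}
+ \nabla \cdot (\phi\,\mathbf{u})
+ \frac{\partial}{\partial z}\bigl(S_r\,\phi(1 - \phi)\bigr)
= \frac{\partial}{\partial z}\!\left(D_r\,\frac{\partial \phi}{\partial z}\right),
\]

where u is the bulk granular velocity, z the direction normal to the flow base, S_r a segregation rate, and D_r a diffusive-remixing coefficient. Micro-physical parametrizations of the sort described above supply closures for coefficients such as S_r and D_r.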