909 results for Interval analysis (Mathematics)
Abstract:
A novel three-dimensional finite volume (FV) procedure is described in detail for the analysis of geometrically nonlinear problems, and is compared with the conventional finite element (FE) Galerkin approach. FV can be considered a particular case of the weighted residual method with a unit weighting function, whereas the FE Galerkin method uses the shape functions as weighting functions. A Fortran code has been developed based on the finite volume cell-vertex formulation, and the formulation is tested on a number of geometrically nonlinear problems. In comparison with FE, the results reveal that FV can match the FE results, albeit at a higher mesh density.
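As an illustration of the weighted-residual distinction drawn above, here is a sketch in generic notation (assumed for illustration, not taken from the paper):

```latex
% Weighted residual statement; R(u) is the residual of the governing
% equations on domain \Omega (generic notation, assumed):
\int_{\Omega} w \, R(u) \, d\Omega = 0
% FV (cell-vertex): unit weight on each control volume \Omega_i,
%   w = 1 \text{ in } \Omega_i, \qquad w = 0 \text{ elsewhere}
% FE Galerkin: the shape functions serve as the weights,
%   w = N_i
```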
Abstract:
The liquid metal flow in induction crucible models is known to be unstable, turbulent and difficult to predict in the regime of medium frequencies, where the electromagnetic skin layer is of considerable extent. We present long-term turbulent flow measurements obtained with a permanent-magnet-incorporated potential difference velocity probe in a cylindrical container filled with the eutectic melt In-Ga-Sn. A parallel numerical simulation of the long-time-scale development of the turbulent average flow is presented. The numerical flow model uses an implicit pseudo-spectral code and a k-ω turbulence model recently developed for transitional flow modelling. The results compare reasonably well with the experiment and demonstrate the time development of the turbulent flow field and the turbulence energy.
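For reference, a sketch of the standard (Wilcox-type) k-ω equations; the transitional variant used in the paper may differ in its coefficients and closure terms:

```latex
% k-omega two-equation model, standard form (assumed; P_k is the
% production term, \nu_T the eddy viscosity):
\frac{\partial k}{\partial t} + U_j \frac{\partial k}{\partial x_j}
  = P_k - \beta^{*} k \omega
  + \frac{\partial}{\partial x_j}\!\left[ (\nu + \sigma^{*}\nu_T)\,\frac{\partial k}{\partial x_j} \right]

\frac{\partial \omega}{\partial t} + U_j \frac{\partial \omega}{\partial x_j}
  = \alpha\,\frac{\omega}{k}\,P_k - \beta \omega^{2}
  + \frac{\partial}{\partial x_j}\!\left[ (\nu + \sigma\,\nu_T)\,\frac{\partial \omega}{\partial x_j} \right],
\qquad \nu_T = \frac{k}{\omega}
```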
Abstract:
Network analysis is distinguished from traditional social science by the dyadic nature of the standard data set. Whereas traditional social science studies monadic attributes of individuals, network analysis studies dyadic attributes of pairs of individuals. These dyadic attributes (e.g. social relations) may be represented in matrix form by a square 1-mode matrix. In contrast, the data in traditional social science are represented as 2-mode matrices. However, network analysis is not completely divorced from traditional social science, and often has occasion to collect and analyse 2-mode matrices. Furthermore, some of the methods developed in network analysis have uses in analysing non-network data. This paper presents and discusses ways of applying and interpreting traditional network analytic techniques to 2-mode data, as well as developing new techniques. Three areas are covered in detail: displaying 2-mode data as networks, detecting clusters, and measuring centrality.
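A minimal sketch of one standard way to treat 2-mode data as networks, using a toy incidence matrix (the paper's techniques go beyond this simple projection):

```python
import numpy as np

# Hypothetical 2-mode (incidence) matrix B: rows = actors, columns = events;
# B[i, j] = 1 if actor i attended event j (toy data for illustration).
B = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 1, 1],
])

# Projection to square 1-mode matrices: co-attendance counts between
# actors, and shared-attendee counts between events.
actors = B @ B.T   # diagonal = each actor's degree (events attended)
events = B.T @ B   # diagonal = each event's size (attendees)

print(actors)
print(events)
```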
Abstract:
Book review of: Chance Encounters: A First Course in Data Analysis and Inference by Christopher J. Wild and George A. F. Seber, 2000, John Wiley & Sons Inc. Hardbound, xviii + 612 pp. ISBN 0-471-32936-3.
Abstract:
Most air quality modelling work has so far been oriented towards deterministic simulations of ambient pollutant concentrations. This traditional approach, based on the use of one selected model and one data set of discrete input values, does not reflect the uncertainties due to errors in model formulation and input data. Given the complexities of urban environments and the inherent limitations of mathematical modelling, it is unlikely that a single model based on routinely available meteorological and emission data will give satisfactory short-term predictions. In this study, different methods involving the use of more than one dispersion model, in association with different emission simulation methodologies and meteorological data sets, were explored for deriving best estimates of CO and benzene concentrations, together with related confidence bounds. The different approaches were tested using experimental data obtained during intensive monitoring campaigns in busy street canyons in Paris, France. Three relatively simple dispersion models (STREET, OSPM and AEOLIUS) that are likely to be used for regulatory purposes were selected for this application. A sensitivity analysis was conducted to identify the internal model parameters that might significantly affect results. Finally, a probabilistic methodology for assessing urban air quality was proposed.
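A minimal sketch of the multi-model probabilistic idea: pool predictions from several models over perturbed inputs and read best estimates and bounds off the pooled distribution. The toy functions below are invented stand-ins, not the actual STREET, OSPM or AEOLIUS parametrisations, and the input uncertainties are assumed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-ins for street-canyon dispersion models (illustration only).
def model_a(emission, wind):
    return emission / (1.0 + wind)

def model_b(emission, wind):
    return 0.8 * emission / (0.5 + wind) ** 0.9

def model_c(emission, wind):
    return 1.2 * emission / (1.5 + wind)

models = [model_a, model_b, model_c]

# Monte Carlo over model choice and uncertain inputs.
predictions = []
for _ in range(5000):
    emission = rng.lognormal(mean=np.log(50.0), sigma=0.3)  # assumed uncertainty
    wind = rng.gamma(shape=4.0, scale=0.5)                  # assumed uncertainty
    model = models[rng.integers(len(models))]
    predictions.append(model(emission, wind))

lo, best, hi = np.percentile(predictions, [5, 50, 95])
print(f"best estimate {best:.1f}, 90% bounds [{lo:.1f}, {hi:.1f}]")
```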
Abstract:
Trend analysis is widely used for detecting changes in hydrological data. Parametric methods for this employ pre-specified models and associated tests to assess significance, whereas non-parametric methods generally apply rank tests to the data. Neither approach is suitable for exploratory analysis, because parametric models impose a particular, perhaps unsuitable, form of trend, while testing may confirm that trend is present but does not describe its form. This paper describes semi-parametric approaches to trend analysis using local likelihood fitting of annual maximum and partial duration series and illustrates their application to the exploratory analysis of changes in extremes in sea level and river flow data. Bootstrap methods are used to quantify the variability of estimates.
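A minimal sketch of the bootstrap step on synthetic data, with an ordinary least-squares slope standing in for the paper's local-likelihood fits:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy annual-maximum series with a weak linear trend (synthetic data).
years = np.arange(1950, 2020)
data = 100 + 0.05 * (years - years[0]) + rng.normal(0, 2, years.size)

def trend_slope(x, y):
    return np.polyfit(x, y, 1)[0]  # slope of a straight-line fit

# Nonparametric bootstrap: resample (year, value) pairs with replacement,
# refit, and use the spread of slopes to quantify estimator variability.
n = years.size
slopes = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    slopes.append(trend_slope(years[idx], data[idx]))

lo, hi = np.percentile(slopes, [2.5, 97.5])
print(f"slope {trend_slope(years, data):.4f}, 95% bootstrap CI [{lo:.4f}, {hi:.4f}]")
```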
Abstract:
Forest fires can cause extensive damage to natural resources and property. They can also destroy wildlife habitat, affect the forest ecosystem and threaten human lives. In this paper, extreme wildland fires are analysed using a point process model for extremes. The model, based on a generalised Pareto distribution, is used to model data on acres of wildland burnt by extreme fires in the US since 1825. A semi-parametric smoothing approach is adopted, with the maximum likelihood method used to estimate the model parameters.
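A minimal sketch of a maximum-likelihood generalised Pareto fit to threshold exceedances, with synthetic data in place of the historical fire record:

```python
import numpy as np
from scipy.stats import genpareto

# Synthetic burnt-area exceedances over an assumed high threshold, in acres
# (illustration; the paper fits historical US wildland-fire data from 1825 on).
exceedances = genpareto.rvs(c=0.4, scale=5e4, size=300, random_state=0)

# Maximum-likelihood fit of the generalised Pareto distribution,
# location fixed at zero (exceedances are measured above the threshold).
shape, loc, scale = genpareto.fit(exceedances, floc=0)
print(f"shape (xi) = {shape:.3f}, scale = {scale:.0f}")

# e.g. probability that an extreme fire exceeds the threshold by 500,000 acres
print(genpareto.sf(5e5, shape, loc=0, scale=scale))
```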
Abstract:
This paper presents data relating to occupant pre-evacuation times from university and hospital outpatient facilities. Although the two occupancies are entirely different, they employ relatively similar procedures: members of staff sweep areas to encourage individuals to evacuate. However, the manner in which the dependent population reacts to these procedures is quite different. In the hospital case, the patients only evacuated once a member of the nursing staff had instructed them to do so, while in the university evacuation the students were less dependent upon the actions of the staff, with over 50% of them evacuating without prior prompting. In addition, the student pre-evacuation time was found to depend on their level of engagement in various activities.
Abstract:
Identification, when sought, is not necessarily obtained, and normatively acceptable operational guidance may be necessary for such cases. We proceed to formalize and illustrate modes of exchange of individual identity, and present recovery procedures drawn from specific prescriptions of an ancient body of law for situations in which, for given types of purposes, individuals of some relevant kind had become intermixed and indistinguishable. Rules were devised, in a variety of domains, for coping with situations that occur when the goal of identification is frustrated. We propose or discuss mathematical representations of such recovery procedures.
Abstract:
In 1998, Swissair Flight 111 (SR111) developed an in-flight fire shortly after take-off, which resulted in the loss of the aircraft, a McDonnell Douglas MD-11, and all passengers and crew. The Transportation Safety Board (TSB) of Canada, Fire and Explosion Group launched a four-year investigation into the incident in an attempt to understand the cause and the subsequent mechanisms which led to the rapid spread of the in-flight fire. As part of this investigation, the SMARTFIRE Computational Fluid Dynamics (CFD) software was used to predict the 'possible' development of the fire and associated smoke movement. In this paper the CFD fire simulations are presented and the model predictions are compared with key findings from the investigation. The model predictions are shown to be consistent with a number of the investigation findings associated with the early stages of the fire development. The analysis makes use of simulated pre-fire airflow conditions within the MD-11 cockpit and above-ceiling region presented in an earlier publication (Part 1), published in The Aeronautical Journal in January 2006 (4).
Abstract:
This paper presents a continuum model of the flow of granular material during the filling of a silo, using a viscoplastic constitutive relation based on the Drucker-Prager plasticity yield function. The simulations performed demonstrate the ability of the model to realistically represent complex features of granular flows during filling processes, such as heap formation and the non-zero inclination angle of the bulk material-air interface. In addition, micro-mechanical parametrizations which account for particle size segregation are incorporated into the model. Numerical predictions of segregation phenomena during the filling of a binary granular mixture are found to agree well with experimental results. Further numerical tests indicate the capability of the model to cope successfully with complex operations involving granular mixtures.
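For reference, the standard form of the Drucker-Prager yield function on which such constitutive relations are built (generic notation, assumed):

```latex
% Drucker-Prager yield function (standard form, notation assumed):
%   I_1 = \operatorname{tr}\boldsymbol{\sigma}   (first stress invariant)
%   J_2 = \tfrac{1}{2}\,\mathbf{s}:\mathbf{s}    (second deviatoric invariant)
f(\boldsymbol{\sigma}) = \sqrt{J_2} + \alpha\, I_1 - k = 0
% \alpha and k are material constants, commonly matched to the
% Mohr-Coulomb friction angle \phi and cohesion c.
```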
Abstract:
Problems in the preservation of the quality of granular material products are complex and arise from a series of sources during transport and storage. In either designing a new plant or, more likely, analysing problems that give rise to product quality degradation in existing operations, practical measurement and simulation tools and technologies are required to support the process engineer. These technologies are needed both to identify the source of such problems and then to design them out. As part of a major research programme on quality in particulate manufacturing, computational models have been developed for segregation in silos, degradation in pneumatic conveyors, and the development of caking during storage, which use, where possible, micro-mechanical relationships to characterize the behaviour of granular materials. The objective of the work presented here is to demonstrate the use of these computational models of unit processes in the analysis of large-scale processes involving the handling of granular materials. This paper presents a set of simulations of a complete large-scale granular materials handling operation, involving the discharge of the material from a silo, its transport through a dilute-phase pneumatic conveyor, and its storage in a big bag under varying environmental temperature and humidity conditions. Conclusions are drawn on the capability of the computational models to represent key granular processes, including particle size segregation, degradation, and moisture migration caking.
Abstract:
This paper details a computational methodology for the analysis of the structural behaviour of historic composite structures. The modelling approach is based on finite element analysis and has been developed to enable efficient and inexpensive computational mechanics of complex composite structures. The discussion focuses primarily on the modelling methodology and the analysis of structural designs that comprise structural beam components acting as stiffeners to a wider shell part of the structure. A computational strategy for the analysis of this type of composite structure, which exploits its representation through smeared shell models, is detailed in the paper.
Abstract:
Micro-electronic displays are sensitive devices whose performance is easily affected by external environmental factors. To enable a display to perform in extreme conditions, the device must be structurally strengthened; the effects of this packaging process were investigated. A thermo-mechanical finite element analysis was used to identify potential problems in the packaging process and to improve the overall design of the device. The analysis predicted that the displacement of the borosilicate glass and the Y-direction stress in the adhesive are the main concerns. Using this information, a design which reduced the variation in displacement and kept the stress to a minimum was suggested.
Abstract:
This paper presents modelling results on the performance of flexible substrates when subjected to the higher lead-free reflow temperatures. Both adhesiveless and adhesive types of polyimide substrates were studied. Finite element (FE) models of flex substrates were built, in which two copper tracks located in the centre of the substrate were considered. The thermally induced shear stress in the flex substrate during the lead-free reflow process was studied, and the effects of design changes, including the track thickness, flex thickness, and copper track width, were examined. For both types of flex, one of the most important variables for minimizing damage to the substrate is the height of the copper tracks; the flex thickness and the copper track width show less impact. Besides the geometry effects, an increase in the reflow peak temperature can also result in a significant increase in the interfacial stress between the copper track and the flex. Higher stresses were identified within the adhesive flex due to the large CTE mismatch between the copper and the adhesive/dielectric.
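A first-order, order-of-magnitude view of why the CTE mismatch and the reflow peak temperature drive the interfacial stress (a sketch, not the paper's FE model):

```latex
% Two bonded layers with CTEs \alpha_1, \alpha_2 heated through \Delta T
% develop a mismatch strain (generic bi-material estimate, assumed):
\varepsilon_{\mathrm{mis}} = (\alpha_1 - \alpha_2)\,\Delta T
% which drives an interfacial stress of order
\sigma \sim \frac{E_{\mathrm{eff}}\,(\alpha_1 - \alpha_2)\,\Delta T}{1 - \nu}
% Larger CTE mismatch (copper vs. adhesive/dielectric) and a higher
% reflow peak (larger \Delta T) both raise the stress, consistent with
% the trends reported above.
```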