991 results for Reliability Modelling
Abstract:
The new reactor concepts proposed in the Generation IV International Forum require the development and validation of computational tools able to assess their safety performance. In the first part of this paper, the models of the ESFR design developed by several organisations in the framework of the CP-ESFR project were presented and their reliability validated via a benchmarking exercise. This second part of the paper includes the application of those tools to the analysis of design basis condition (DBC) scenarios of the reference design. Further, this paper also introduces the main features of the core optimisation process carried out within the project with the objective of enhancing the core safety performance through the reduction of the positive coolant density reactivity effect. The influence of this optimised core design on the reactor safety performance during the previously analysed transients is also discussed. The conclusion provides an overview of the work performed by the partners involved in the project towards the development and enhancement of computational tools specifically tailored to the evaluation of the safety performance of Generation IV innovative nuclear reactor designs.
Abstract:
In the present work, a three-dimensional (3D) formulation based on the method of fundamental solutions (MFS) is applied to the study of acoustic horns. The implemented model follows and extends previous works that only considered two-dimensional and axisymmetric horn configurations. The more realistic case of 3D acoustic horns with symmetry regarding two orthogonal planes is addressed. The use of the domain decomposition technique with two interconnected sub-regions along a continuity boundary is proposed, allowing for the computation of the sound pressure generated by an acoustic horn installed on a rigid screen. In order to reduce the model discretization requirements for these cases, Green’s functions derived with the image source methodology are adopted, automatically accounting for the presence of symmetry conditions. A strategy for the calculation of an optimal position of the virtual sources used by the MFS to define the solution is also used, leading to improved reliability and flexibility of the proposed method. The responses obtained by the developed model are compared to reference solutions, computed by well-established models based on the boundary element method. Additionally, numerically calculated acoustic parameters, such as directivity and beamwidth, are compared with those evaluated experimentally.
Abstract:
Consider a network of unreliable links, modelling, for example, a communication network. Estimating the reliability of the network, expressed as the probability that certain nodes in the network are connected, is a computationally difficult task. In this paper we study how the Cross-Entropy method can be used to obtain more efficient network reliability estimation procedures. Three estimation techniques are considered: crude Monte Carlo and the more sophisticated Permutation Monte Carlo and Merge Process. We show that the Cross-Entropy method yields a speed-up over all three techniques.
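The crude Monte Carlo baseline mentioned above can be sketched as follows; the triangle network, edge reliabilities and terminal pair below are illustrative assumptions, not taken from the paper:

```python
import random

def crude_monte_carlo_reliability(nodes, edges, terminals, n_samples, seed=0):
    """Crude Monte Carlo estimate of the probability that all terminal
    nodes are connected.  edges = [(u, v, p_up), ...], where each edge is
    'up' independently with probability p_up."""
    rng = random.Random(seed)

    def find(parent, v):
        # union-find lookup with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    hits = 0
    for _ in range(n_samples):
        parent = {v: v for v in nodes}
        for u, v, p in edges:
            if rng.random() < p:  # edge survives in this sample
                parent[find(parent, u)] = find(parent, v)
        root = find(parent, terminals[0])
        hits += all(find(parent, t) == root for t in terminals[1:])
    return hits / n_samples

# Illustrative triangle network: terminals 0 and 2 are connected if the
# direct edge is up, or if both edges of the 0-1-2 path are up, so the
# exact reliability is 0.9 + 0.1 * 0.81 = 0.981.
estimate = crude_monte_carlo_reliability(
    range(3), [(0, 1, 0.9), (1, 2, 0.9), (0, 2, 0.9)], [0, 2], 100_000)
```

The estimator is unbiased but needs many samples when failure probabilities are small, which is precisely the inefficiency that Permutation Monte Carlo, the Merge Process and the Cross-Entropy method target.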
Abstract:
Over the past forty years the corporate identity literature has developed to a point of maturity where it currently contains many definitions and models of the corporate identity construct at the organisational level. The literature has evolved by developing models of corporate identity or by considering corporate identity in relation to new and developing themes, e.g. corporate social responsibility. It has evolved into a multidisciplinary domain, recently incorporating constructs from other literatures to further its development. However, the literature has a number of limitations. An overarching and universally accepted definition of corporate identity remains elusive, potentially leaving the construct without a clear definition. Only a few corporate identity definitions and models, at the corporate level, have been empirically tested. The corporate identity construct is overwhelmingly defined and theoretically constructed at the corporate level, leaving the literature without a detailed understanding of its influence at the individual stakeholder level. Front-line service employees (FLEs) form a component in a number of corporate identity models developed at the organisational level. FLEs deliver the services of an organisation to its customers, as well as represent the organisation by communicating and transporting its core defining characteristics to customers through continual customer contact and interaction. This person-to-person contact between an FLE and a customer is termed a service encounter; service encounters influence a customer's perception of both the service delivered and the associated level of service quality. Therefore this study for the first time defines, theoretically models and empirically tests corporate identity at the individual FLE level, termed FLE corporate identity.
The study uses the services marketing literature to characterise an FLE's operating environment, arriving at five potential dimensions of the FLE corporate identity construct. These are scrutinised against existing corporate identity definitions and models to arrive at a definition for the construct. In reviewing the corporate identity, services marketing, branding and organisational psychology literature, a theoretical model is developed for FLE corporate identity, which is empirically and quantitatively tested with FLEs in seven stores of a major national retailer. Following rigorous construct reliability and validity testing, the 601 usable responses are used to estimate a confirmatory factor analysis and structural equation model for the study. The results for the individual hypotheses and the structural model are very encouraging, as they fit the data well and support a definition of FLE corporate identity. This study makes contributions to the branding, services marketing and organisational psychology literature, but its principal contribution is to extend the corporate identity literature into a new area of discourse and research, that of FLE corporate identity.
Abstract:
Biotic interactions can have large effects on species distributions, yet their role in shaping species ranges is seldom explored due to historical difficulties in incorporating biotic factors into models without a priori knowledge of interspecific interactions. Improved species distribution models (SDMs), which account for biotic factors and do not require a priori knowledge of species interactions, are needed to fully understand species distributions. Here, we model the influence of abiotic and biotic factors on species distribution patterns and explore the robustness of distributions under future climate change. We fit hierarchical spatial models using Integrated Nested Laplace Approximation (INLA) for lagomorph species throughout Europe and test the predictive ability of models containing only abiotic factors against models containing abiotic and biotic factors. We account for residual spatial autocorrelation using a conditional autoregressive (CAR) model. Model outputs are used to estimate areas in which abiotic and biotic factors determine species' ranges. INLA models containing both abiotic and biotic factors had substantially better predictive ability than models containing abiotic factors only, for all but one of the four species. In models containing abiotic and biotic factors, both appeared equally important as determinants of lagomorph ranges, but the influences were spatially heterogeneous. Parts of widespread lagomorph ranges highly influenced by biotic factors will be less robust to future changes in climate, whereas parts of more localised species ranges highly influenced by the environment may be less robust to future climate. SDMs that do not explicitly include biotic factors are potentially misleading and omit a very important source of variation. For the field of species distribution modelling to advance, biotic factors must be taken into account in order to improve the reliability of predicting species distribution patterns both presently and under future climate change.
Abstract:
Robust joint modelling is an emerging field of research. Through advancements in electronic patient healthcare records, the popularity of joint modelling approaches has grown rapidly in recent years, providing simultaneous analysis of longitudinal and survival data. This research advances previous work through the development of a novel robust joint modelling methodology for one of the most common types of standard joint models, that which links a linear mixed model with a Cox proportional hazards model. Through t-distributional assumptions, longitudinal outliers are accommodated, with their detrimental impact down-weighted, thus providing more efficient and reliable estimates. The robust joint modelling technique and its major benefits are showcased through the analysis of Northern Irish end-stage renal disease patients. With an ageing population and growing prevalence of chronic kidney disease within the United Kingdom, there is a pressing demand to investigate the detrimental relationship between the changing haemoglobin levels of haemodialysis patients and their survival. As outliers within the NI renal data were found to have significantly worse survival, identification of outlying individuals through robust joint modelling may aid nephrologists in improving patient survival. A simulation study was also undertaken to explore the difference between robust and standard joint models in the presence of increasing proportions and extremity of longitudinal outliers. More efficient and reliable estimates were obtained by robust joint models, with increasing contrast between the robust and standard joint models when a greater proportion of more extreme outliers is present. Through illustration of the gains in efficiency and reliability of parameters when outliers exist, the potential of robust joint modelling is evident.
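The t-distributional down-weighting described above can be illustrated with the weight function that arises when a Student-t error model is fitted by EM; this is a generic sketch, not the thesis's implementation, and the `residual`/`dof` parameter names are assumptions:

```python
def t_weight(residual, dof):
    """Weight implied by a Student-t error model for an observation with
    the given standardised residual: (dof + 1) / (dof + residual**2).
    Large residuals receive small weights, limiting an outlier's pull on
    the fit, whereas a Gaussian model weights every observation equally."""
    return (dof + 1) / (dof + residual ** 2)
```

With 4 degrees of freedom, an observation with standardised residual 3 receives weight 5/13 ≈ 0.38, so an extreme longitudinal outlier contributes far less to the parameter estimates than a well-fitting observation.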
The research presented in this thesis highlights the benefits and stresses the need to utilise a more robust approach to joint modelling in the presence of longitudinal outliers.
Abstract:
A new method for the evaluation of the efficiency of parabolic trough collectors, called the Rapid Test Method, is investigated at the Solar Institut Jülich. The basic concept is to carry out measurements under stagnation conditions. This allows a fast and inexpensive process due to the fact that no working fluid is required. With this approach, the temperature reached by the inner wall of the receiver is assumed to be the stagnation temperature and hence the average temperature inside the collector. This leads to a systematic error which can be rectified through the introduction of a correction factor. A model of the collector is simulated with COMSOL Multiphysics to study the size of the correction factor depending on collector geometry and working conditions. The resulting values are compared with experimental data obtained at a test rig at the Solar Institut Jülich. These results do not match the simulated ones; consequently, it was not possible to verify the model. The reliability of both the COMSOL Multiphysics model and the measurements is analysed. The influence of the correction factor on the Rapid Test Method is also studied, as well as the possibility of neglecting it by measuring the receiver's inner wall temperature where it receives the least amount of solar rays. The last two chapters analyse the specific heat capacity as a function of pressure and temperature and present some considerations about the uncertainties in the efficiency curve obtained with the Rapid Test Method.
Abstract:
In deregulated power markets it is necessary to have an appropriate transmission pricing methodology that also takes into account congestion and reliability, in order to ensure economically viable, equitable and congestion-free power transfer capability with high reliability and security. This thesis presents the results of research conducted on the development of a Decision Making Framework (DMF) of concepts, data-analytic and modelling methods for the reliability-benefit-reflective optimal evaluation of transmission cost for composite power systems, using probabilistic methods. The methodology within the DMF devised and reported in this thesis utilises a full AC Newton-Raphson load flow and a Monte Carlo approach to determine reliability indices, which are then used in the proposed Meta-Analytical Probabilistic Approach (MAPA) for the evaluation and calculation of the Reliability-benefit Reflective Optimal Transmission Cost (ROTC) of a transmission system. The DMF includes methods for allocating transmission line embedded costs among transmission transactions, accounting for line capacity use as well as congestion costing, through the application of the Power Transfer Distribution Factor (PTDF) method as well as Bialek's method; together these constitute the series of methods and procedures, explained in detail in the thesis, that make up the proposed MAPA for the ROTC. The MAPA utilises bus data, generator data, line data, reliability data and Customer Damage Function (CDF) data for congestion, transmission and reliability costing studies using the proposed application of PTDF and other established and proven methods, which are then compared, analysed and selected according to the area/state requirements and integrated to develop the ROTC.
Case studies involving standard 7-Bus, IEEE 30-Bus and 146-Bus Indian utility test systems are conducted and reported in the relevant sections of the dissertation. There is close correlation between the results obtained through the proposed application of the PTDF method and those of Bialek's method and different MW-Mile methods. The novel contributions of this research work are: firstly, the application of the PTDF method developed for the determination of transmission and congestion costing, which is further compared with other proven methods; the viability of the developed method is explained in the methodology, discussion and conclusion chapters. Secondly, the development of a comprehensive DMF which helps decision makers to analyse and decide on the selection of a costing approach according to their requirements, since in the DMF all the costing approaches have been integrated to achieve the ROTC. Thirdly, the composite methodology for calculating the ROTC has been formed into suites of algorithms and MATLAB programs for each part of the DMF, which are further described in the methodology section. Finally, the dissertation concludes with suggestions for future work.
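As a minimal illustration of the PTDF idea applied above, the following sketch computes one PTDF column for a DC power flow model; the 3-bus network, susceptance values and function names are hypothetical, not the thesis's MATLAB implementation:

```python
def solve_linear(A, b):
    # Gaussian elimination with partial pivoting (pure Python, dense)
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def ptdf_column(n_bus, lines, slack, inj_bus):
    """DC PTDF column: sensitivity of each line flow to 1 MW injected at
    inj_bus and withdrawn at the slack bus.  lines = [(u, v, susceptance)]."""
    keep = [b for b in range(n_bus) if b != slack]
    idx = {b: i for i, b in enumerate(keep)}
    B = [[0.0] * len(keep) for _ in keep]
    for u, v, b in lines:
        for a in (u, v):                      # diagonal susceptance terms
            if a != slack:
                B[idx[a]][idx[a]] += b
        if u != slack and v != slack:         # off-diagonal terms
            B[idx[u]][idx[v]] -= b
            B[idx[v]][idx[u]] -= b
    rhs = [0.0] * len(keep)
    rhs[idx[inj_bus]] = 1.0
    theta_red = solve_linear(B, rhs)          # bus angles, slack removed
    theta = [0.0] * n_bus
    for bus, i in idx.items():
        theta[bus] = theta_red[i]
    # flow on line (u, v) = susceptance * (theta_u - theta_v)
    return [b * (theta[u] - theta[v]) for u, v, b in lines]
```

For a symmetric 3-bus triangle with equal susceptances, an injection at bus 1 withdrawn at slack bus 0 splits 2/3 : 1/3 between the direct line and the two-line path; such sensitivities are the basis for usage-based transmission and congestion cost allocation.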
Abstract:
Steam turbines play a significant role in global power generation. In particular, research on low-pressure (LP) steam turbine stages is of special importance for steam turbine manufacturers, vendors, power plant owners and the scientific community due to their lower efficiency compared with high-pressure steam turbine stages. Because of condensation, the last stages of an LP turbine experience irreversible thermodynamic losses, aerodynamic losses and erosion of turbine blades. Additionally, an LP steam turbine requires maintenance due to moisture generation, which also affects turbine reliability. Therefore, the design of energy-efficient LP steam turbines requires a comprehensive analysis of condensation phenomena and the corresponding losses occurring in the steam turbine, either by experiments or with numerical simulations. The aim of the present work is to apply computational fluid dynamics (CFD) to enhance the existing knowledge and understanding of condensing steam flows and the loss mechanisms that occur due to the irreversible heat and mass transfer during the condensation process in an LP steam turbine. Throughout this work, two commercial CFD codes were used to model non-equilibrium condensing steam flows. The Eulerian-Eulerian approach was utilised, in which the mixture of vapour and liquid phases was solved by the Reynolds-averaged Navier-Stokes equations. The nucleation process was modelled with the classical nucleation theory, and two different droplet growth models were used to predict the droplet growth rate. The flow turbulence was solved by employing the standard k-ε and the shear stress transport k-ω turbulence models. Further, both models were modified and implemented in the CFD codes. The thermodynamic properties of the vapour and liquid phases were evaluated with real gas models.
In this thesis, various topics, namely the influence of real gas properties, turbulence modelling, unsteadiness and the blade trailing edge shape on wet-steam flows, are studied with different convergent-divergent nozzles, a turbine stator cascade and a 3D turbine stator-rotor stage. The simulated results of this study were evaluated and discussed together with the available experimental data in the literature. The grid independence study revealed that an adequate grid size is required to capture the correct trends of condensation phenomena in LP turbine flows. The study shows that accurate real gas properties are important for the precise modelling of non-equilibrium condensing steam flows. The turbulence modelling study revealed that the flow expansion, and subsequently the rate of formation of liquid droplet nuclei and their growth process, were affected by the turbulence modelling. The losses were rather sensitive to turbulence modelling as well. Based on the presented results, it could be observed that the correct computational prediction of wet-steam flows in the LP turbine requires the turbulence to be modelled accurately. The trailing edge shape of the LP turbine blades influenced the liquid droplet formation, distribution and sizes, and the loss generation. The study shows that the semicircular trailing edge shape predicted the smallest droplet sizes, while the square trailing edge shape estimated greater losses. The analysis of steady and unsteady calculations of wet-steam flow showed that, in unsteady simulations, the interaction of wakes in the rotor blade row affected the flow field. The flow unsteadiness influenced the nucleation and droplet growth processes due to fluctuation in the Wilson point.
Abstract:
It is important to assess young children's perceived Fundamental Movement Skill (FMS) competence in order to examine the role of perceived FMS competence in motivation toward physical activity. Children's perceptions of motor competence may vary according to the culture/country of origin; therefore, it is also important to measure perceptions in different cultural contexts. The purpose was to assess the face validity, internal consistency, test–retest reliability and construct validity of the 12 FMS items in the Pictorial Scale for Perceived Movement Skill Competence for Young Children (PMSC) in a Portuguese sample. Methods: Two hundred and one Portuguese children (girls, n = 112), 5 to 10 years of age (7.6 ± 1.4), participated. All children completed the PMSC once. Ordinal alpha assessed internal consistency. A random subsample (n = 47) was reassessed one week later to determine test–retest reliability with the Bland–Altman method. Children were asked questions after the second administration to determine face validity. Construct validity was assessed on the whole sample with a Bayesian Structural Equation Modelling (BSEM) approach. The hypothesized theoretical model used the 12 items and two hypothesized factors: object control and locomotor skills. Results: The majority of children correctly identified the skills and could understand most of the pictures. Test–retest reliability was good, with an agreement ratio between 0.99 and 1.02. Ordinal alpha values ranged from acceptable (object control 0.73, locomotor 0.68) to good (all FMS 0.81). The hypothesized BSEM model had an adequate fit. Conclusions: The PMSC can be used to investigate perceptions of children's FMS competence. This instrument can also be satisfactorily used among Portuguese children.
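The Bland–Altman test–retest analysis reported above can be sketched as follows; this shows the common bias-and-limits-of-agreement form on raw score differences, and the sample scores are invented for illustration:

```python
from math import sqrt

def bland_altman(test, retest):
    """Bland-Altman agreement between two administrations: returns the
    mean difference (bias) and the 95% limits of agreement, computed as
    bias +/- 1.96 times the SD of the paired differences."""
    diffs = [a - b for a, b in zip(test, retest)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Invented PMSC-style scores for four children, two administrations
bias, lower, upper = bland_altman([10, 12, 11, 13], [9, 12, 12, 13])
```

Narrow limits around a bias near zero indicate good test–retest agreement; the agreement-ratio variant used in the study divides rather than subtracts the paired scores, so perfect agreement sits at 1 instead of 0.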
Abstract:
Objectives: Because there is scientific evidence that an appropriate intake of dietary fibre should be part of a healthy diet, given its importance in promoting health, the present study aimed to develop and validate an instrument to evaluate the knowledge of the general population about dietary fibres. Study design: The present study was a cross-sectional study. Methods: The methodological study of psychometric validation was conducted with 6010 participants residing in ten countries on 3 continents. The instrument is a self-response questionnaire aimed at collecting information on knowledge about dietary fibres. For the exploratory factor analysis (EFA), principal component analysis with varimax orthogonal rotation and eigenvalues greater than 1 was chosen. In the confirmatory factor analysis by structural equation modelling (SEM), the covariance matrix was considered and the Maximum Likelihood Estimation algorithm was adopted for parameter estimation. Results: The exploratory factor analysis retained two factors. The first was called Dietary Fibre and Promotion of Health (DFPH) and included 7 questions that explained 33.94% of the total variance (α = 0.852). The second was named Sources of Dietary Fibre (SDF) and included 4 questions that explained 22.46% of the total variance (α = 0.786). The model was tested by SEM, giving a final solution with four questions in each factor. This model showed a very good fit on practically all the indexes considered, except for the χ²/df ratio. The values of average variance extracted (0.458 and 0.483) demonstrate the existence of convergent validity; the results also prove the existence of discriminant validity of the factors (r² = 0.028), and finally good internal consistency was confirmed by the values of composite reliability (0.854 and 0.787). Conclusions: This study allowed validating the KADF scale, increasing the degree of confidence in the information obtained through this instrument in this and in future studies.
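The composite reliability and average variance extracted figures reported in the Results are simple functions of the standardised factor loadings; the sketch below uses the conventional formulas with invented loadings, not the study's data:

```python
def composite_reliability(loadings):
    """Composite reliability (CR) from standardised loadings, assuming
    uncorrelated measurement errors: (sum L)^2 / ((sum L)^2 + sum(1 - L^2))."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

def average_variance_extracted(loadings):
    """AVE: mean squared standardised loading over a factor's items;
    values above 0.5 are usually read as evidence of convergent validity."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Four illustrative items loading 0.8 each on one factor
cr = composite_reliability([0.8, 0.8, 0.8, 0.8])
ave = average_variance_extracted([0.8, 0.8, 0.8, 0.8])
```

This is why a study can report CR values of 0.854 and 0.787 alongside AVE values of 0.458 and 0.483: CR rewards the sum of loadings, so a factor can clear the usual 0.7 CR threshold while its AVE stays near or below 0.5.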
Abstract:
This thesis aims to understand the behavior of a low-rise unreinforced masonry (URM) building, the typical residential house in the Netherlands, when subjected to low-intensity earthquakes. In the last decades, the Groningen region was hit by several shallow earthquakes caused by the extraction of natural gas. In particular, the focus is on the internal non-structural walls and their interaction with the structural parts of the building. A simple and cost-efficient 2D FEM model is developed, focused on the interfaces representing the mortar layers present between the non-structural walls and the rest of the structure. As a reference for geometries and materials, a prototype built at full scale at the EUCENTRE laboratory of Pavia (Italy) was taken into consideration. Firstly, a quasi-static analysis is performed by gradually applying a prescribed displacement to the roof floor of the structure. Sensitivity analyses are conducted on some key parameters characterizing the mortar; this analysis allows for the calibration of their values and the evaluation of the reliability of the model. Subsequently, a transient analysis is performed to subject the model to a seismic action and hence also evaluate the mechanical response of the building over time. Moreover, it was possible to compare the results of this analysis with the displacements recorded in the experimental tests by creating a model representing the entire considered structure. As a result, some conditions for the model calibration are defined. The reliability of the model is then confirmed both by the reasonable results obtained from the sensitivity analysis and by the compatibility between the top displacement of the roof floor measured in the experimental test and the same value obtained from the structural model.