79 results for implicit dynamic analysis
Abstract:
Traditionally the simulation of the thermodynamic aspects of the internal combustion engine has been undertaken using one-dimensional gas-dynamic models to represent the intake and exhaust systems. CFD analysis of engines has been restricted to modelling of in-cylinder flow structures. With the increasing accessibility of CFD software it is now worth considering its use for complete gas-dynamic engine simulation. This paper appraises the accuracy of various CFD models in comparison to a 1D gas-dynamic simulation. All of the models are compared to experimental data acquired on an apparatus that generates a single gas-dynamic pressure wave. The progress of the wave along a constant-area pipe and its subsequent reflection from the open pipe end are recorded with a number of high-speed pressure transducers. It was found that there was little to choose between the accuracy of the 1D model and the best CFD model. The CFD model did not require experimentally derived loss coefficients to accurately represent the open pipe end; however, it took several hundred times longer to complete its analysis. The best congruency between the CFD models and the experimental data was achieved using the RNG k-ε turbulence model. The open end of the pipe was most effectively represented by surrounding it with a relatively small volume of cells connected to the rest of the environment using a pressure boundary.
Abstract:
In this study, the surface properties of, and work required to remove, 12 commercially available and developmental catheters from a model biological medium (agar), a measure of catheter lubricity, were characterised, and the relationships between these properties were examined using multiple regression and correlation analysis. The work required for removal of catheter sections (7 cm) from a model biological medium (1% w/w agar) was examined using tensile analysis. The water wettability of the catheters was characterised using dynamic contact angle analysis, whereas surface roughness was determined using atomic force microscopy. Significant differences in the ease of removal were observed between the various catheters, with the silicone-based materials generally exhibiting the greatest ease of removal. Similarly, the catheters exhibited a range of advancing and receding contact angles that were dependent on the chemical nature of each catheter. Finally, whilst the microrugosities of the various catheters differed, no specific relationship to the chemical nature of the biomaterial was apparent. Using multiple regression analysis, the relationship between ease of removal, receding contact angle and surface roughness was defined as: Work done (N mm) = 17.18 + 0.055 × Rugosity (nm) - 0.52 × Receding contact angle (degrees) (r = 0.49). Interestingly, whilst the relationship between ease of removal and surface roughness was significant (r = 0.48, p = 0.0005), in which catheter lubricity increased as the surface roughness decreased, this was not the case with the relationship between ease of removal and receding contact angle (r = -0.18, p > 0.05). This study has therefore uniquely defined the contributions of each of these surface properties to catheter lubricity. Accordingly, in the design of urethral catheters, it is recommended that due consideration should be directed towards biomaterial surface roughness to ensure maximal ease of catheter removal. Furthermore, using the method described in this study, differences in the lubricity of the various catheters were observed that may be apparent in their clinical use. (C) 2003 Elsevier Ltd. All rights reserved.
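To make the reported regression concrete, the sketch below evaluates the fitted equation for a single catheter; the coefficients are those quoted in the abstract, while the roughness and contact-angle inputs are hypothetical values chosen purely for illustration.

```python
# Minimal sketch: evaluating the reported regression for catheter removal work.
# Coefficients are taken from the abstract; the example inputs are hypothetical.

def predicted_work_done(rugosity_nm: float, receding_angle_deg: float) -> float:
    """Predicted work of removal (N mm) from the reported multiple regression."""
    return 17.18 + 0.055 * rugosity_nm - 0.52 * receding_angle_deg

# Hypothetical catheter: 120 nm surface roughness, 40 degree receding contact angle.
print(predicted_work_done(120.0, 40.0))  # approximately 2.98 N mm
```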
Abstract:
This paper points out a serious flaw in dynamic multivariate statistical process control (MSPC). The principal component analysis of a linear time series model that is employed to capture auto- and cross-correlation in recorded data may produce a considerable number of variables to be analysed. To give a dynamic representation of the data (based on variable correlation) and circumvent the production of a large time-series structure, a linear state space model is used here instead. The paper demonstrates that incorporating a state space model, the number of variables to be analysed dynamically can be considerably reduced, compared to conventional dynamic MSPC techniques.
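A minimal sketch of why conventional dynamic MSPC inflates the problem size: dynamic PCA augments each observation with lagged copies of every measured variable, so the matrix handed to PCA has m(l+1) columns, whereas a state space description summarises the same dynamics with a small number of state variables. The data dimensions and lag order below are invented for illustration and are not taken from the paper.

```python
import numpy as np

def lagged_matrix(X: np.ndarray, lags: int) -> np.ndarray:
    """Augment each row of X (n_samples x n_vars) with `lags` lagged copies,
    as done in conventional dynamic PCA."""
    n, m = X.shape
    blocks = [X[lags - k : n - k, :] for k in range(lags + 1)]
    return np.hstack(blocks)  # shape: (n - lags, m * (lags + 1))

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))   # 20 measured variables (illustrative only)
Xd = lagged_matrix(X, lags=5)
print(Xd.shape)                      # (495, 120) -> 120 columns to analyse
# A state space model of, say, order 6 would instead monitor 6 state variables.
```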
Abstract:
This paper investigates the two-stage stepwise identification for a class of nonlinear dynamic systems that can be described by linear-in-the-parameters models, and the model has to be built from a very large pool of basis functions or model terms. The main objective is to improve the compactness of the model that is obtained by the forward stepwise methods, while retaining the computational efficiency. The proposed algorithm first generates an initial model using a forward stepwise procedure. The significance of each selected term is then reviewed at the second stage and all insignificant ones are replaced, resulting in an optimised compact model with significantly improved performance. The main contribution of this paper is that these two stages are performed within a well-defined regression context, leading to significantly reduced computational complexity. The efficiency of the algorithm is confirmed by the computational complexity analysis, and its effectiveness is demonstrated by the simulation results.
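The sketch below illustrates the two-stage idea in its plainest form, assuming ordinary least squares and the residual sum of squares as the selection criterion: a forward stepwise pass builds an initial model, then each selected term is reviewed and replaced whenever a candidate from the pool does better. It is a naive rendering for illustration only, not the paper's optimised regression-context algorithm, which avoids refitting from scratch at every step.

```python
import numpy as np

def rss(Phi: np.ndarray, y: np.ndarray, idx: list) -> float:
    """Residual sum of squares of a least-squares fit using columns `idx` of Phi."""
    theta, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
    r = y - Phi[:, idx] @ theta
    return float(r @ r)

def two_stage_select(Phi: np.ndarray, y: np.ndarray, n_terms: int) -> list:
    # Stage 1: forward stepwise selection of n_terms basis functions.
    selected, remaining = [], list(range(Phi.shape[1]))
    for _ in range(n_terms):
        best = min(remaining, key=lambda j: rss(Phi, y, selected + [j]))
        selected.append(best)
        remaining.remove(best)
    # Stage 2: review each selected term and swap it for a better candidate.
    improved = True
    while improved:
        improved = False
        for i, term in enumerate(selected):
            others = selected[:i] + selected[i + 1:]
            best = min([term] + remaining, key=lambda j: rss(Phi, y, others + [j]))
            if best != term:
                remaining.append(term)
                remaining.remove(best)
                selected[i] = best
                improved = True
    return selected
```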
Abstract:
A comparative molecular field analysis (CoMFA) of alkanoic acid 3-oxo-cyclohex-1-enyl ester and 2-acylcyclohexane-1,3-dione derivatives of 4-hydroxyphenylpyruvate dioxygenase inhibitors has been performed to determine the factors required for the activity of these compounds. The substrate's conformation, abstracted from dynamic modeling of the enzyme-substrate complex, was used to build the initial structures of the inhibitors. Satisfactory results were obtained after an all-space searching procedure, performing a leave-one-out (LOO) cross-validation study with cross-validated q² and conventional r² values of 0.779 and 0.989, respectively. The results provide the tools for predicting the affinity of related compounds, and for guiding the design and synthesis of new HPPD ligands with predetermined affinities.
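CoMFA itself fits a partial least squares model to molecular field descriptors; purely to illustrate the leave-one-out statistic quoted above, the sketch below computes a LOO cross-validated q² for a generic descriptor matrix, using ordinary linear regression as a stand-in. The scikit-learn dependency, the linear model and the synthetic data are assumptions of this illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

def loo_q2(X: np.ndarray, y: np.ndarray) -> float:
    """Leave-one-out cross-validated q^2 = 1 - PRESS / SS_total."""
    press = 0.0
    for train, test in LeaveOneOut().split(X):
        model = LinearRegression().fit(X[train], y[train])
        press += (y[test][0] - model.predict(X[test])[0]) ** 2
    return 1.0 - press / float(np.sum((y - y.mean()) ** 2))

# Synthetic example: 30 compounds with 4 descriptors (invented data).
rng = np.random.default_rng(0)
X = rng.standard_normal((30, 4))
y = X @ np.array([1.0, -0.5, 0.3, 0.0]) + 0.1 * rng.standard_normal(30)
print(loo_q2(X, y))   # close to 1 for this nearly noise-free data
```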
Abstract:
Treasure et al. (2004) recently proposed a new subspace-monitoring technique, based on the N4SID algorithm, within the multivariate statistical process control framework. This dynamic-monitoring method requires considerably fewer variables to be analysed when compared with dynamic principal component analysis (PCA). The contribution charts and variable reconstruction, traditionally employed for static PCA, are analysed here in a dynamic context. Both may be affected by the ratio of the number of retained components to the total number of analysed variables; particular problems arise if this ratio is large, and a new reconstruction chart is introduced to overcome them. The utility of such a dynamic contribution chart and variable reconstruction is shown in a simulation and by application to industrial data from a distillation unit.
Abstract:
A time-of-flight (ToF) mass spectrometer suitable, in terms of sensitivity, detector response and time resolution, for application in fast-transient Temporal Analysis of Products (TAP) kinetic catalyst characterization is reported. Technical difficulties associated with such an application, as well as the solutions implemented in terms of adaptations of the ToF apparatus, are discussed. The performance of the ToF was validated, and the full linearity of the detector over its entire dynamic range was explored in order to ensure its applicability to TAP experiments. The reported TAP-ToF setup is the first system that achieves the level of sensitivity required to monitor the full 0-200 AMU range simultaneously with sub-millisecond time resolution. In this new setup, the high sensitivity allows the use of low-intensity pulses, ensuring that transport through the reactor occurs in the Knudsen diffusion regime and that the data can, therefore, be fully analysed using the reported theoretical TAP models and data processing.
Abstract:
The cellular response to radiation damage is mediated by a complex network of pathways and feedback loops whose spatiotemporal organization is still unclear despite its decisive role in determining the fate of the damaged cell. The single-cell approach and the high spatial resolution offered by microbeams provide the perfect tool to study and quantify the dynamic processes associated with the induction and repair of DNA damage. The soft X-ray microbeam has been used to follow the development of radiation-induced foci in live cells by monitoring their size and intensity as a function of dose and time using yellow fluorescent protein (YFP) tagging techniques. Preliminary data indicate a delayed, linear rise of the intensity signal, indicating slow kinetics for the accumulation of the DNA repair protein 53BP1. A slow and limited foci diffusion has also been observed. Further investigations are required to assess whether such diffusion is consistent with a random walk pattern or whether it is the result of a more structured lesion-processing phenomenon. In conclusion, our data indicate that the use of microbeams coupled to live-cell microscopy represents a sophisticated approach for visualizing and quantifying the dynamic changes of DNA repair proteins at damaged sites.
Abstract:
This paper describes the use of the Euler equations for the generation and testing of tabular aerodynamic models for flight dynamics analysis. Maneuvers for the AGARD Standard Dynamics Model sharp leading-edge wind-tunnel geometry are considered as a test case. Wind-tunnel data is first used to validate the prediction of static and dynamic coefficients at both low and high angles, featuring complex vortical flow, with good agreement obtained at low to moderate angles of attack. Then the generation of aerodynamic tables is described based on a data fusion approach. Time-optimal maneuvers are generated based on these tables, including level flight trim, pull-ups at constant and varying incidence, and level and 90-degree turns. The maneuver definition includes the aircraft states and also the control deflections required to achieve the motion. The main point of the paper is then to assess the validity of the aerodynamic tables which were used to define the maneuvers. This is done by replaying them, including the control surface motions, through the time-accurate computational fluid dynamics code. The resulting forces and moments are compared with the tabular values to assess the presence of inadequately modeled dynamic or unsteady effects. The agreement between the tables and the replay is demonstrated for slow maneuvers. Higher-rate maneuvers show discrepancies, which are ascribed to vortical flow hysteresis at the higher rate motions. The framework is suitable for application to more complex viscous flow models, and is powerful for the assessment of the validity of aerodynamic models of the type currently used for studies of flight dynamics.
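As a minimal illustration of how a tabular aerodynamic model of this kind is evaluated during a maneuver replay, the sketch below interpolates a lift-coefficient table in angle of attack and Mach number; the grid, the coefficient values and the two-variable dependence are invented and far simpler than the fused CFD/wind-tunnel tables described in the paper.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Illustrative tabular aerodynamic model: lift coefficient CL on an
# (angle of attack, Mach) grid. Grid points and values are invented.
alpha_deg = np.array([-5.0, 0.0, 5.0, 10.0, 15.0, 20.0])
mach = np.array([0.3, 0.6, 0.9])
CL_table = 0.08 * alpha_deg[:, None] * (1.0 + 0.1 * mach[None, :])  # placeholder data

CL = RegularGridInterpolator((alpha_deg, mach), CL_table)

# During a maneuver replay the table is queried at the instantaneous state:
print(CL([[7.5, 0.45]]))   # interpolated CL at alpha = 7.5 deg, Mach = 0.45
```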
Abstract:
Voice over IP (VoIP) has experienced tremendous growth over the last few years and is now widely used among the population and for business purposes. The security of such VoIP systems is often assumed, creating a false sense of privacy. This paper investigates in detail the leakage of information from Skype, a widely used and protected VoIP application. Experiments have shown that isolated phonemes can be classified and given sentences identified. By using the dynamic time warping (DTW) algorithm, frequently used in speech processing, an accuracy of 60% can be reached. The results can be further improved by choosing specific training data, reaching an accuracy of 83% under specific conditions. Because the initial results are speaker dependent, an approach involving the Kalman filter is proposed to extract the kernel of all training signals.
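For reference, the sketch below implements the standard dynamic time warping distance by dynamic programming; it operates on generic one-dimensional feature sequences and does not reproduce the paper's Skype traffic features, training data or Kalman filter extension.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic time warping distance between two 1-D feature sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])
```

A phoneme classification of the kind described would then assign an observed trace to the training sequence with the smallest warping distance.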
Abstract:
Title. A concept analysis of renal supportive care: the changing world of nephrology
Aim. This paper is a report of a concept analysis of renal supportive care.
Background. Approximately 1.5 million people worldwide are kept alive by renal dialysis. As services are required to support patients who decide not to start or to withdraw from dialysis, the term renal supportive care is emerging. Because it is similar to the terms palliative care, end-of-life care, terminal care and conservative management, there is a need for conceptual clarity.
Method. Rodgers' evolutionary method was used as the organizing framework for this concept analysis. Data were collected from a review of CINAHL, Medline, PsycINFO, British Nursing Index, International Bibliography of the Social Sciences and ASSIA (1806-2006) using 'renal' and 'supportive care' as keywords. All articles with an abstract were considered. The World Wide Web was also searched in English using the phrase 'renal supportive care'.
Results. Five attributes of renal supportive care were identified: availability from diagnosis to death, with an emphasis on honesty regarding prognosis and impact of disease; an interdisciplinary approach to care; restorative care; family and carer support; and effective, lucid communication to ensure informed choice and clear lines of decision-making.
Conclusion. Renal supportive care is a dynamic and emerging concept relevant to, but not limited to, the end phase of life. It suggests a central philosophy underpinning renal service development that allows patients, carers and the multidisciplinary team time to work together to realize complex goals. It has relevance for the renal community and is likely to be integrated increasingly into everyday nephrology practice.
Abstract:
This paper introduces a novel channel inversion (CI) precoding scheme for the downlink of phase shift keying (PSK)-based multiple input multiple output (MIMO) systems. In contrast to common practice where knowledge of the interference is used to eliminate it, the main idea proposed here is to use this knowledge to glean benefit from the interference. It will be shown that the system performance can be enhanced by exploiting some of the existent inter-channel interference (ICI). This is achieved by applying partial channel inversion such that the constructive part of ICI is preserved and exploited while the destructive part is eliminated by means of CI precoding. By doing so, the effective signal to interference-plus-noise ratio (SINR) delivered to the mobile unit (MU) receivers is enhanced without the need to invest additional transmitted signal power at the MIMO base station (BS). It is shown that the trade-off to this benefit is a minor increase in the complexity of the BS processing. The presented theoretical analysis and simulations demonstrate that due to the SINR enhancement, significant performance and throughput gains are offered by the proposed MIMO precoding technique compared to its conventional counterparts.
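The paper's contribution is a partial inversion that retains the constructive part of the inter-channel interference; the sketch below shows only the conventional full channel inversion (zero-forcing) baseline it departs from, for a small invented MIMO channel with QPSK symbols, so that the noiseless received signal is an interference-free scaled copy of the transmitted symbol vector.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 4                                   # users / transmit antennas (illustrative)
H = (rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))) / np.sqrt(2)

# QPSK symbols intended for the K single-antenna users (QPSK chosen for illustration).
bits = rng.integers(0, 2, size=(K, 2))
s = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)

# Conventional channel inversion precoding: pre-multiply the symbol vector by the
# channel (pseudo-)inverse so each user receives only its own symbol.
W = np.linalg.pinv(H)
x_unscaled = W @ s
beta = np.sqrt(K / np.linalg.norm(x_unscaled) ** 2)   # enforce total transmit power K
x = beta * x_unscaled

y = H @ x                               # noiseless received signals
print(np.allclose(y / beta, s))         # True: interference fully removed
```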
Absorbing new knowledge in small and medium-sized enterprises: A multiple case analysis of Six Sigma
Abstract:
The primary aim of this article is to critically analyse the development of Six Sigma theory and practice within small and medium-sized enterprises (SMEs) using a multiple case study approach. The article also explores the subsequent development of Lean Six Sigma as a means of addressing the perceived limitations of the efficacy of Six Sigma in this context. The overarching theoretical framework is that of absorptive capacity, where Six Sigma is conceptualized as new knowledge to be absorbed by smaller firms. The findings from a multiple case study involving repeat interviews and focus groups informed the development of an analytical model demonstrating the dynamic underlying routines for the absorptive capacity process and the development of a number of summative propositions relating the characteristics of SMEs to Six Sigma and Lean Six Sigma implementation.
Abstract:
The control and coordination of a network of geographically and culturally dispersed subsidiaries is one of the most prominent challenges in international management. However, many empirical findings on the effectiveness of various control mechanisms and combinations thereof are still counterintuitive. This study uses longitudinal case studies and cross-sectional interview data to extend control theory by examining why, how, and in what sequence large multinational firms (MNCs) implement controls in their networks of foreign subsidiaries. Our analysis draws from literature on institutional theory, embeddedness, and organizational power to demonstrate that MNC headquarters need to overcome institutional duality when implementing their controls abroad. We find that headquarters do so by using social controls, primarily as a way of legitimizing and institutionalizing their process and output controls that are implemented subsequently.
Abstract:
Designing satellite structures poses an ongoing challenge, as the interaction between the analysis, experimental testing, and manufacturing phases is underdeveloped. Finite Element Analysis for Satellite Structures: Applications to Their Design, Manufacture and Testing explains the theoretical and practical knowledge needed to design satellite structures. By layering detailed practical discussions with fully developed examples, Finite Element Analysis for Satellite Structures: Applications to Their Design, Manufacture and Testing provides the missing link between theory and implementation.
Computational examples cover all the major aspects of advanced analysis, including modal analysis, harmonic analysis, and mechanical and thermal fatigue analysis using the finite element method. Test cases are included to support the explanations, and a range of manufacturing simulation techniques are described, from riveting to shot peening to material cutting. Mechanical design of satellite structures is covered in three steps: analysis under design loads, experimental testing to verify the design, and manufacturing.
Stress engineers, lecturers, researchers and students will find Finite Element Analysis for Satellite Structures: Applications to Their Design, Manufacture and Testing a key guide, with practical instruction on applying manufacturing simulations to improve a design and reduce project cost, on preparing static and dynamic test specifications, and on using the finite element method to investigate in more detail any component that may fail during testing.
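The book's examples are developed with finite element tools; purely as an illustration of the modal analysis step listed above, the sketch below solves the generalized eigenvalue problem K·φ = ω²·M·φ for a toy two-degree-of-freedom spring-mass model with invented stiffness and mass values.

```python
import numpy as np
from scipy.linalg import eigh

# Toy 2-DOF spring-mass model (values invented): two masses in series,
# spring k1 to ground and spring k2 between the masses.
k1, k2 = 5.0e4, 2.0e4        # stiffness, N/m
m1, m2 = 10.0, 5.0           # mass, kg
K = np.array([[k1 + k2, -k2],
              [-k2,      k2]])
M = np.diag([m1, m2])

# Modal analysis: solve K phi = omega^2 M phi.
eigvals, modes = eigh(K, M)
freqs_hz = np.sqrt(eigvals) / (2.0 * np.pi)
print(freqs_hz)              # natural frequencies of the toy model, Hz
```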