963 results for Reliability level
Abstract:
A three-level common-mode-voltage-eliminated inverter with a single dc supply, using a flying capacitor inverter and a cascaded H-bridge, is proposed in this paper. The three-phase space vector polygon formed by this configuration, and the polygon formed by the common-mode-eliminated states, are discussed. The entire system is simulated in Simulink and the results are experimentally verified. This system has the advantage that if one of the devices in the H-bridge fails, it can still be operated as a normal three-level inverter at full power. The inverter offers further advantages such as the use of a single dc supply, which makes a back-to-back grid-tied converter application possible, and improved reliability.
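The common-mode-eliminated switching states mentioned in the abstract can be enumerated directly. The sketch below is an illustration, not the paper's own code; it assumes per-unit pole-voltage levels -1, 0, +1 standing for -Vdc/2, 0, +Vdc/2, and counts the three-level switching states whose common-mode voltage is zero:

```python
from itertools import product

# Per-unit pole voltages of one three-level leg (assumption for this
# sketch: levels -1, 0, +1 stand for -Vdc/2, 0, +Vdc/2).
LEVELS = (-1, 0, 1)

def common_mode(state):
    """Common-mode voltage of a switching state (va, vb, vc)."""
    return sum(state) / 3.0

all_states = list(product(LEVELS, repeat=3))
zero_cm_states = [s for s in all_states if common_mode(s) == 0.0]
# 27 switching states in total; the common-mode-eliminated polygon is
# built only from those whose three phase levels sum to zero.
```

For a three-level leg this leaves seven usable states (the six permutations of (+1, -1, 0) plus the all-zero state), which is why the common-mode-eliminated polygon is a reduced sub-hexagon of the full space vector diagram.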
Abstract:
This study addresses the problem of obtaining reliable velocities and displacements from accelerograms, a concern which often arises in earthquake engineering. A closed-form acceleration expression with random parameters is developed to test any strong-motion accelerogram processing method. Integration of this analytical time history yields the exact velocities, displacements and Fourier spectra. Noise and truncation can also be added. A two-step testing procedure is proposed and the original Volume II routine is used as an illustration. The main sources of error are identified and discussed. Although these errors may be reduced, it is impossible to extract the true time histories from an analog or digital accelerogram because of the uncertain noise level and missing data. Based on these uncertainties, a probabilistic approach is proposed as a new accelerogram processing method. A most probable record is presented as well as a reliability interval which reflects the level of error-uncertainty introduced by the recording and digitization process. The data is processed in the frequency domain, under assumptions governing either the initial value or the temporal mean of the time histories. This new processing approach is tested on synthetic records. It induces little error and the digitization noise is adequately bounded. Filtering is intended to be kept to a minimum and two optimal error-reduction methods are proposed. The "noise filters" reduce the noise level at each harmonic of the spectrum as a function of the signal-to-noise ratio. However, the correction at low frequencies is not sufficient to significantly reduce the drifts in the integrated time histories. The "spectral substitution method" uses optimization techniques to fit spectral models of near-field, far-field or structural motions to the amplitude spectrum of the measured data. 
The extremes of the spectrum of the recorded data where noise and error prevail are then partly altered, but not removed, and statistical criteria provide the choice of the appropriate cutoff frequencies. This correction method has been applied to existing strong-motion far-field, near-field and structural data with promising results. Since this correction method maintains the whole frequency range of the record, it should prove to be very useful in studying the long-period dynamics of local geology and structures.
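The drift problem this abstract describes is easy to reproduce numerically. The sketch below is illustrative only: the sinusoidal "accelerogram", its amplitude, and the bias value are assumptions, not the paper's closed-form expression. It integrates a record whose exact velocity is known, then shows how a small constant baseline offset of the kind left by digitization noise grows into a drift after integration:

```python
import numpy as np

def cumtrapz(y, dt):
    """Cumulative trapezoidal integration with zero initial condition."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * dt)
    return out

# Analytical test record (assumed for illustration): a(t) = A*sin(w*t),
# whose exact integral is v(t) = (A/w)*(1 - cos(w*t)) for v(0) = 0.
A, w, dt = 1.0, 2.0 * np.pi, 0.001
t = np.arange(0.0, 10.0, dt)
accel = A * np.sin(w * t)
v_exact = (A / w) * (1.0 - np.cos(w * t))

# Clean record: numerical integration tracks the exact velocity closely.
v_num = cumtrapz(accel, dt)
err_clean = float(np.max(np.abs(v_num - v_exact)))

# Digitization noise modelled as a tiny constant baseline offset: after
# integration it becomes a drift that grows linearly with time.
bias = 1e-3
v_biased = cumtrapz(accel + bias, dt)
drift = float(np.max(np.abs(v_biased - v_exact)))
```

Even a 0.1% baseline offset dominates the integration error after a few seconds, which is the motivation for the probabilistic treatment of noise and for the low-frequency corrections discussed above.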
Abstract:
Current design codes for floating offshore structures are based on measures of short-term reliability. That is, a design storm is selected via an extreme value analysis of the environmental conditions and the reliability of the vessel in that design storm is computed. Although this approach yields valuable information on the vessel motions, it does not produce a statistically rigorous assessment of the lifetime probability of failure. An alternative approach is to perform a long-term reliability analysis in which consideration is taken of all sea states potentially encountered by the vessel during the design life. Although permitted as a design approach in current design codes, the associated computational expense generally prevents its use in practice. A new efficient approach to long-term reliability analysis is presented here, the results of which are compared with a traditional short-term analysis for the surge motion of a representative moored FPSO in head seas. This serves to illustrate the failure probabilities actually embedded within current design code methods, and the way in which design methods might be adapted to achieve a specified target safety level.
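The distinction between the short-term and long-term views can be sketched with the total probability theorem. In the toy calculation below, the sea-state probabilities and conditional failure probabilities are made-up illustrative numbers, not values from the study:

```python
# Hypothetical discretized wave scatter: significant wave height Hs (m),
# the fraction of the design life spent in that sea state, and an assumed
# short-term probability of the surge response exceeding its limit there.
# All numbers are illustrative, not from the study.
sea_states = [
    (2.0, 0.60, 1e-9),
    (5.0, 0.30, 1e-7),
    (8.0, 0.09, 1e-5),
    (12.0, 0.01, 1e-3),
]

# Long-term failure probability via the total probability theorem:
#   Pf = sum_i P(fail | state_i) * P(state_i)
p_long_term = sum(p_state * p_fail for _, p_state, p_fail in sea_states)

# A short-term check inspects only the design storm (the severest state),
# which is not the same quantity as the lifetime failure probability.
p_design_storm = sea_states[-1][2]
```

The point of the comparison is that the design-storm conditional probability is not the lifetime probability of failure; the long-term figure weights every sea state by its chance of being encountered.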
Abstract:
OBJECTIVE: Strict lifelong compliance to a gluten-free diet (GFD) minimizes the long-term risk of mortality, especially from lymphoma, in adult celiac disease (CD). Although serum IgA antitransglutaminase (IgA-tTG-ab), like antiendomysium (IgA-EMA) antibodies, are sensitive and specific screening tests for untreated CD, their reliability as predictors of strict compliance to and dietary transgressions from a GFD is not precisely known. We aimed to address this question in consecutively treated adult celiacs. METHODS: In a cross-sectional study, 95 non-IgA deficient adult (median age: 41 yr) celiacs on a GFD for at least 1 yr (median: 6 yr) were subjected to 1) a dietician-administered inquiry to pinpoint and quantify the number and levels of transgressions (classified as moderate or large, using as a cutoff value the median gluten amount ingested in the overall noncompliant patients of the series) over the previous 2 months, 2) a search for IgA-tTG-ab and -EMA, and 3) perendoscopic duodenal biopsies. The ability of both antibodies to discriminate celiacs with and without detected transgressions was described using receiver operating characteristic curves and quantified as to sensitivity and specificity, according to the level of transgressions. RESULTS: Forty (42%) patients strictly adhered to a GFD, 55 (58%) had committed transgressions, classified as moderate (< or = 18 g of gluten/2 months; median number 6) in 27 and large (>18 g; median number 69) in 28. IgA-tTG-ab and -EMA specificity (proportion of correct recognition of strictly compliant celiacs) was 0.97 and 0.98, respectively, and sensitivity (proportion of correct recognition of overall, moderate, and large levels of transgressions) was 0.52, 0.31, and 0.77, and 0.62, 0.37, and 0.86, respectively. 
IgA-tTG-ab and -EMA titers were correlated (p < 0.001) to transgression levels (r = 0.560 and r = 0.631, respectively) and one to another (p < 0.001) in the whole patient population (r = 0.834, n = 84) as in the noncompliant (r = 0.915, n = 48) group. Specificity and sensitivity of IgA-tTG-ab and IgA-EMA for recognition of total villous atrophy in patients under a GFD were 0.90 and 0.91, and 0.60 and 0.73, respectively. CONCLUSIONS: In adult CD patients on a GFD, IgA-tTG-ab are poor predictors of dietary transgressions. Their negativity is a falsely secure marker of strict diet compliance.
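The sensitivity and specificity figures quoted above reduce to simple counts over the 2x2 classification table. A minimal sketch, with a hypothetical function name and toy data:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP).
    Here 1 = 'committed transgressions' / 'antibody positive' and
    0 = 'strictly compliant' / 'antibody negative' (hypothetical coding)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# Toy data: 4 patients with transgressions, 4 strictly compliant.
sens, spec = sensitivity_specificity([1, 1, 1, 1, 0, 0, 0, 0],
                                     [1, 1, 1, 0, 0, 0, 0, 1])
```

Sweeping the antibody-titer cutoff and recomputing this pair of numbers at each threshold is what traces out the receiver operating characteristic curves the study uses.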
Abstract:
This work describes an investigation of the effects of the solder reflow process on the reliability of anisotropic conductive film (ACF) interconnections for flip-chip on flex (FCOF) applications. Experiments as well as computer modeling methods have been used. The results show that the contact resistance of ACF interconnections increases after reflow and that the magnitude of the increase is strongly correlated with the peak reflow temperature. In fact, nearly 40 percent of the joints are open when the peak reflow temperature is 260°C, while none are open when the peak temperature is 210°C. It is believed that the coefficient of thermal expansion (CTE) mismatch between the polymer particle and the adhesive matrix is the main cause of this contact degradation. To understand this phenomenon better, a three-dimensional (3-D) finite element (FE) model of an ACF joint has been analyzed in order to predict the stress distribution in the conductive particles, adhesive matrix and metal pads during the reflow process. The stress level at the interface between the particle and its surrounding materials is significant, and it is highest at the interface between the particle and the adhesive matrix.
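The CTE-mismatch mechanism invoked above can be illustrated with a back-of-the-envelope mismatch-strain estimate; the material CTE values below are typical handbook-style numbers assumed for the sketch, not figures from the paper:

```python
# Back-of-the-envelope mismatch strain between the polymer particle and
# the adhesive matrix during reflow. CTE values are typical assumed
# numbers for such materials, not data from the paper.
alpha_particle = 60e-6    # polymer sphere CTE, 1/K (assumed)
alpha_adhesive = 45e-6    # adhesive matrix CTE, 1/K (assumed)
dT = 260.0 - 25.0         # peak reflow minus room temperature, K

# The free thermal strain the particle/matrix interface must accommodate;
# a higher peak reflow temperature gives a proportionally larger mismatch.
mismatch_strain = (alpha_particle - alpha_adhesive) * dT
```

This scaling with peak temperature is consistent with the observation that joints survive a 210°C reflow but open up at 260°C.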
Abstract:
Design for manufacture of system-in-package (SiP) structures depends on a number of physical processes that affect the final quality of the package in terms of its performance and reliability. Solder joints are key structures in a SiP and their behavior can be the critical factor in terms of reliability. This paper discusses the results from a research programme on design for manufacture of SiP technologies. The focus of the paper is on thermo-mechanical modelling of solder joints. This includes the behavior of the joints during testing, plus some important insights into the reflow process and how physical phenomena taking place at the assembly stage can affect solder joint behavior. Finite element analysis of a numerical model of a SiP structure with various design parameters is discussed. The goal of this analysis is to identify the most promising combination of design parameters that guarantees a longer lifetime of the solder joints and hence of the SiP component. The parameters studied are the size of the package (i.e. the number of solder joints per row), the presence of the underfill and/or the reinforcement, and the thickness of the passive die. Phenomena that take place during the reflow process, where the solder joints are formed, are also discussed; in particular, the formation of intermetallics at the solder-pad interfaces is examined.
Abstract:
Light has the greatest information carrying potential of all the perceivable interconnect mediums; consequently, optical fiber interconnects rapidly replaced copper in telecommunications networks, providing bandwidth capacity far in excess of its predecessors. As a result the modern telecommunications infrastructure has evolved into a global mesh of optical networks, with VCSELs (Vertical Cavity Surface Emitting Lasers) dominating the short-link markets, predominantly due to their low cost. This cost benefit of VCSELs has allowed optical interconnects to again replace bandwidth-limited copper as bottlenecks appear on VSR (Very Short Reach) interconnects between co-located equipment inside the CO (Central Office). Spurred by the successful deployment in the VSR domain, and in response to both intra-board backplane applications and inter-board requirements to extend the bandwidth between ICs (Integrated Circuits), current research is migrating optical links toward board-level USR (Ultra Short Reach) interconnects. Whilst reconfigurable Free Space Optical Interconnects (FSOI) are an option, they are complicated by precise line-of-sight alignment conditions; hence benefits exist in developing guided-wave technologies, which have been classified into three generations. First- and second-generation technologies are based upon optical fibers and are both capable of providing a suitable platform for intra-board applications. However, to allow component assembly, an integral requirement for inter-board applications, third-generation Opto-Electrical Circuit Boards (OECBs) containing embedded waveguides are desirable. Currently, the greatest challenge preventing the deployment of OECBs is achieving the out-of-plane coupling to SMT devices. With the most suitable low-cost platform being to integrate the optics into the OECB manufacturing process, several research avenues are being explored, although none to date has demonstrated sufficient coupling performance.
Once in place, OECB assemblies will generate new reliability issues, such as assembly configurations, manufacturing tolerances, and hermetic requirements, that will also require development before total off-chip photonic interconnection can truly be achieved.
Abstract:
This paper discusses a reliability-based optimisation modelling approach, demonstrated for the design of a SiP structure integrated by stacking dies one upon the other. In this investigation the focus is on the strategy for handling the uncertainties in the package design inputs and their implementation into the design optimisation modelling framework. The analysis of thermo-mechanical behaviour of the package is utilised to predict the fatigue lifetime of the lead-free board-level solder interconnects and the warpage of the package under thermal cycling. The SiP characterisation is obtained through the exploitation of Reduced Order Models (ROM) constructed using high-fidelity analysis and Design of Experiments (DoE) methods. The design task is to identify the optimal SiP design specification by varying several package input parameters so that a specified target reliability of the solder joints is achieved and, at the same time, the design requirements and package performance criteria are met.
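The ROM-plus-DoE strategy described above can be sketched in miniature: fit a cheap response surface to a handful of "high-fidelity" results, then run the design search on the surrogate instead of on the expensive model. The thickness and lifetime numbers below are invented for illustration and are not from the paper:

```python
import numpy as np

# Hypothetical DoE results: passive-die thickness (mm) against simulated
# solder-joint fatigue life (cycles). Numbers are invented for illustration.
thickness = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
life = np.array([1800.0, 2100.0, 2250.0, 2200.0, 2000.0])

# Reduced Order Model: a quadratic response surface fitted to the DoE
# points stands in for the expensive finite element model.
rom = np.poly1d(np.polyfit(thickness, life, 2))

# The design search is then run on the cheap surrogate rather than on
# the high-fidelity model.
grid = np.linspace(0.1, 0.5, 401)
best_thickness = float(grid[np.argmax(rom(grid))])
```

In practice the ROM spans several input parameters at once (package size, underfill, reinforcement, die thickness), but the workflow is the same: DoE sampling, surrogate fit, then optimisation against the target reliability.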
Abstract:
Objective: To evaluate the psychometric performance of the Child Health Questionnaire (CHQ) in children with cerebral palsy (CP).
Method: 818 parents of children with CP, aged 8–12, from nine regions of Europe completed the CHQ (parent form, 50 items). Functional abilities were classified using the five-level Gross Motor Function Classification System (Levels I–III as ambulant; Levels IV–V as nonambulant CP).
Results: Ceiling effects were observed for a number of subscales and summary scores across all Gross Motor Function Classification System levels, whilst floor effects occurred only in the physical functioning scale (Level V CP). Reliability was satisfactory overall. Confirmatory factor analysis (CFA) revealed a seven-factor structure for the total sample of children with CP but with different factor structures for ambulant and nonambulant children.
Conclusion: The CHQ has limited applicability in children with CP, although, with judicious use of certain domains for ambulant and nonambulant children, it can provide useful and comparable data about child health status for descriptive purposes.
Abstract:
The purpose of this experiment was to assess the test-retest reliability of input-output parameters of the cortico-spinal pathway derived from transcranial magnetic (TMS) and electrical (TES) stimulation at rest and during muscle contraction. Motor evoked potentials (MEPs) were recorded from the first dorsal interosseous muscle of eight individuals on three separate days. The intensity of TMS at rest was varied from 5% below threshold to the maximal output of the stimulator. During trials in which the muscle was active, TMS and TES intensities were selected that elicited MEPs of between 150 and 300 μV at rest. MEPs were evoked while the participants exerted torques up to 50% of their maximum capacity. The relationship between MEP size and stimulus intensity at rest was sigmoidal (R² = 0.97). Intra-class correlation coefficients (ICC) ranged between 0.47 and 0.81 for the parameters of the sigmoid function. For the active trials, the slope and intercept of regression equations of MEP size on level of background contraction were obtained more reliably for TES (ICC = 0.63 and 0.78, respectively) than for TMS (ICC = 0.50 and 0.53, respectively). These results suggest that input-output parameters of the cortico-spinal pathway may be reliably obtained via transcranial stimulation during longitudinal investigations of cortico-spinal plasticity. © 2001 Elsevier Science B.V. All rights reserved.
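The test-retest statistic used above, the intra-class correlation coefficient, can be computed from a two-way ANOVA decomposition. The sketch below implements ICC(3,1) (two-way mixed effects, consistency, single measurement) as one common choice for test-retest designs; the abstract does not state which ICC form the authors used:

```python
def icc_consistency(data):
    """ICC(3,1): two-way mixed effects, consistency, single measurement.
    `data` holds one row per participant and one column per test session.
    (One common choice for test-retest designs; the abstract does not
    specify which ICC form was used.)"""
    n, k = len(data), len(data[0])                   # subjects, sessions
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(row[j] for row in data) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ms_rows = ss_rows / (n - 1)                      # between-subjects
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# A perfectly consistent retest (a constant shift) gives ICC = 1; added
# day-to-day scatter pulls the coefficient below 1.
icc_perfect = icc_consistency([[1.0, 1.5], [2.0, 2.5], [3.0, 3.5], [4.0, 4.5]])
icc_noisy = icc_consistency([[1.0, 1.4], [2.0, 2.6], [3.0, 2.9], [4.0, 4.3]])
```

Because the consistency form ignores a systematic session offset, a uniform day-two improvement leaves the coefficient at 1; only subject-by-session scatter lowers it, which is the behaviour the 0.47-0.81 range above quantifies.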
Abstract:
Institutional and economic development has recently returned to the forefront of economic analysis. The use of case studies (both historical and contemporary) has been important in this revival. Likewise, it has been argued recently by economic methodologists that historical context provides a kind of "laboratory" for the researcher interested in real world economic phenomena. Counterterrorism economics, in contrast with much of the rest of the literature on terrorism, has all too rarely drawn upon detailed contextual case studies. This article seeks to help remedy this problem. Archival evidence, including previously unpublished material on the DeLorean case, is an important feature of this article. The article examines how an inter-related strategy, which traded off economic, security, and political considerations, operated during the Troubles. Economic repercussions of this strategy are discussed. An economic analysis of technical and organizational change within paramilitarism is also presented. A number of institutional lessons are discussed including: the optimal balance between carrot versus stick, centralization relative to decentralization, the economics of intelligence operations, and tit-for-tat violence. While existing economic models are arguably correct in identifying benefits from politico-economic decentralization, they downplay the element highlighted by institutional analysis.
Abstract:
This paper considers a non-cooperative network formation game in which identity is introduced as a single dimension to capture the characteristics of a player in the network. Players access benefits from links through direct and indirect connections. We consider the case where the cost of link formation is paid by the initiator. Each player is allowed to choose a commitment level to their identity. The cost of link formation decreases when the players forming the link share the same identity and have higher commitment levels. We then introduce link imperfections into the model. We characterize the Nash networks and find that they are either singletons with no links formed, separated blocks, components with mixed blocks, or connected networks.
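The payoff structure described above can be sketched in code, with heavy caveats: the abstract gives no explicit functional forms, so the decayed benefit (`DELTA ** distance`) and the identity/commitment cost discount below are standard connections-model-style assumptions, not the paper's own specification:

```python
from collections import deque

# Illustrative sketch only: benefit and cost forms are assumed, not the
# paper's specification.
DELTA = 0.7        # decay of benefit with network distance (assumed)
BASE_COST = 0.5    # link cost before any identity discount (assumed)

def link_cost(i, j, identity, commitment):
    """Cost the initiator i pays for link (i, j): cheaper when both
    players share an identity and choose high commitment levels."""
    if identity[i] == identity[j]:
        return BASE_COST * (1.0 - commitment[i] * commitment[j])
    return BASE_COST

def payoff(i, links, identity, commitment, n):
    """Decayed benefits from every player reachable through direct and
    indirect connections, minus the cost of the links i initiated."""
    adj = {v: set() for v in range(n)}
    for a, b in links:               # benefits flow both ways on a link
        adj[a].add(b)
        adj[b].add(a)
    dist, queue = {i: 0}, deque([i])
    while queue:                     # breadth-first search for distances
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    benefit = sum(DELTA ** d for v, d in dist.items() if v != i)
    cost = sum(link_cost(a, b, identity, commitment)
               for a, b in links if a == i)
    return benefit - cost

# Three players; player 0 initiates links to 1 (same identity) and 2.
value = payoff(0, [(0, 1), (0, 2)], [0, 0, 1], [0.8, 0.9, 0.5], 3)
```

A Nash network is then a link profile in which no player can raise this payoff by unilaterally adding or deleting links they initiate.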
Abstract:
The Arc-Length Method is a solution procedure that enables a generic non-linear problem to pass limit points. Some examples are provided of solutions to mode-jumping problems using a commercial finite element package, and other investigations are carried out on a simple structure whose numerical solution can be compared with an analytical one. It is shown that the Arc-Length Method is not reliable when bifurcations are present in the primary equilibrium path; the presence of very sharp snap-backs or special boundary conditions may also cause convergence difficulty at limit points. An improvement to the predictor used in the incremental procedure is suggested, together with a reliable criterion for selecting either solution of the quadratic arc-length constraint. The gap that is sometimes observed between the experimental load level of mode-jumping and its arc-length prediction is explained through an example.
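The ability of arc-length continuation to pass limit points can be demonstrated on a one-degree-of-freedom equilibrium path lam = u - u^3/3, which has a limit point at u = 1 where pure load control (Newton in u at fixed lam) breaks down. The pseudo-arc-length predictor-corrector below is a generic textbook-style sketch, not the paper's implementation:

```python
import math

def f(u):   # equilibrium path lam = f(u); limit points where f'(u) = 0
    return u - u ** 3 / 3.0

def fp(u):  # d(lam)/du, vanishing at the limit point u = 1
    return 1.0 - u ** 2

def continue_path(u0=0.0, step=0.05, nsteps=60):
    """Pseudo-arc-length continuation of g(u, lam) = lam - f(u) = 0.
    Parameterizing by arc length keeps the augmented Jacobian regular
    at the fold, so the trace passes the limit point at u = 1."""
    u, lam = u0, f(u0)
    path = [(u, lam)]
    for _ in range(nsteps):
        norm = math.hypot(1.0, fp(u))
        tu, tl = 1.0 / norm, fp(u) / norm          # unit tangent
        up, lp = u + step * tu, lam + step * tl    # predictor
        for _ in range(25):                        # Newton corrector on
            g1 = lp - f(up)                        # the augmented system
            g2 = tu * (up - u - step * tu) + tl * (lp - lam - step * tl)
            if abs(g1) + abs(g2) < 1e-12:
                break
            a, b, c, d = -fp(up), 1.0, tu, tl      # 2x2 Jacobian
            det = a * d - b * c
            up += (-g1 * d + g2 * b) / det         # Cramer's rule
            lp += (-a * g2 + g1 * c) / det
        u, lam = up, lp
        path.append((u, lam))
    return path

path = continue_path()
max_u = max(u for u, _ in path)
max_lam = max(lam for _, lam in path)
```

The trace climbs to the limit load lam = 2/3 at u = 1 and keeps going down the descending branch, which is exactly what load-controlled Newton iteration cannot do. The root-selection question the abstract raises arises in the multi-degree-of-freedom version, where the spherical constraint gives a quadratic with two candidate load increments per step.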
Abstract:
The aims of this study were to 1) determine the relationship between performance on the court-based TIVRE-Basket® test and peak aerobic power determined from a criterion lab-based incremental treadmill test and 2) examine the test-retest reliability of the TIVRE-Basket® test in elite male basketball players. To address aim 1, 36 elite male basketball players (age 25.2 ± 4.7 years, weight 94.1 ± 11.4 kg, height 195.83 ± 9.6 cm) completed a graded treadmill exercise test and the TIVRE-Basket® test within 72 hours. Mean distance recorded during the TIVRE-Basket® test was 4001.8 ± 176.4 m, mean VO2 peak was 54.7 ± 2.8 ml.kg-1.min-1, and the correlation between the two parameters was r = 0.824 (P < 0.001). Linear regression analysis identified TIVRE-Basket® distance (m) as the only unique predictor of VO2 peak in a single-variable-plus-constant model: VO2 peak = 2.595 + (0.013 × TIVRE-Basket® distance (m)). Performance on the TIVRE-Basket® test accounted for 67.8% of the variance in VO2 peak (t = 8.466, P < 0.001, 95% CI 0.01–0.016, SEE 1.61). To address aim 2, 20 male basketball players (age 26.7 ± 4.2; height 1.94 ± 0.92; weight 94.0 ± 9.1) performed the TIVRE-Basket® test on two occasions. There was no significant difference in total distance covered between Trial 1 (4138.8 ± 677.3 m) and Trial 2 (4188.0 ± 648.8 m; t = 0.5798, P = 0.5688). Mean difference between trials was 49.2 ± 399.5 m, with an ICC of 0.85 suggesting a moderate level of reliability. Standardised TEM was 0.88%, representing a moderate degree of trial-to-trial error, and the CV was 6.3%. The TIVRE-Basket® test therefore represents a valid and moderately reliable court-based sport-specific test of aerobic power for use with individuals and teams of elite-level male basketball players. Future research is required to ascertain its validity and reliability in other basketball populations, e.g. across age groups, at different levels of competition, in females, and in different forms of the game such as wheelchair basketball.
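The reported single-predictor model can be sanity-checked against the group means. In the sketch below the slope is taken as 0.013, the value implied by the stated 95% CI (0.01 to 0.016) and by the group means; the function name is a hypothetical helper, not from the study:

```python
def predict_vo2_peak(distance_m):
    """Hypothetical helper applying the reported single-predictor model.
    The slope is taken as 0.013, the value implied by the stated 95% CI
    (0.01 to 0.016) and by the reported group means."""
    return 2.595 + 0.013 * distance_m

# Plugging in the mean TIVRE-Basket distance should recover (roughly)
# the mean VO2 peak reported for the validity sample.
vo2_at_mean_distance = predict_vo2_peak(4001.8)
```

The result lands within about 0.1 ml.kg-1.min-1 of the reported mean VO2 peak of 54.7, which is the kind of internal-consistency check worth running on any published regression equation before applying it to new athletes.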