999 results for Design decay


Relevance:

100.00%

Publisher:

Abstract:

Software is in constant evolution, requiring continuous maintenance and development. It undergoes changes throughout its life, whether through the addition of new features or the correction of bugs in the code. As software evolves, its architecture tends to decay over time and becomes less adaptable to new user requirements; it becomes more complex and harder to maintain. In some cases, developers prefer to redesign these architectures from scratch rather than extend their lifetime, which leads to a significant increase in development and maintenance costs. Developers therefore need to understand the factors that lead to architectural decay, so that they can take proactive measures that ease future changes and slow the decay. Architectural decay occurs when developers who do not understand the original design make changes to the software. On the one hand, making changes without understanding their impact can introduce bugs and lead to the premature retirement of the software. On the other hand, developers who lack the knowledge and/or experience needed to solve a design problem may introduce design defects, which make the software harder to maintain and evolve. Developers therefore need mechanisms to understand the impact of a change on the rest of the software, and tools to detect design defects so that they can be corrected. In this thesis, we propose three main contributions. The first contribution concerns the assessment of software architecture decay. This assessment uses a diagram-matching technique, applied to diagrams such as class diagrams, to identify structural changes between several versions of a software architecture. Because this requires identifying class renamings, the first step of our approach identifies class renamings during the evolution of the architecture. The second step matches several versions of an architecture to identify its stable and decaying parts; we propose bit-vector and clustering algorithms to analyse the correspondence between versions. The third step measures the decay of the architecture during the evolution of the software; we propose a set of metrics over the stable parts of the software to assess this decay. The second contribution is related to change impact analysis. In this context, we present a new metaphor inspired by seismology to identify the impact of changes. Our approach treats a change to a class as an earthquake that propagates through the software along a long chain of intermediate classes. It combines the analysis of structural dependencies between classes with the analysis of their history (co-change relations) to measure the extent of change propagation in the software, i.e., how a change propagates from the modified class to other classes. The third contribution concerns the detection of design defects. We propose a metaphor inspired by the natural immune system: like any living creature, a system's design is exposed to diseases, which are design defects, and detection approaches act as defence mechanisms for the design. A natural immune system can detect similar pathogens with good precision; this has inspired a family of classification algorithms, called artificial immune systems (AIS), which we use to detect design defects. The contributions were evaluated on open-source object-oriented software, and the results allow us to draw the following conclusions:
• The Tunnel Triplets Metric (TTM) and the Common Triplets Metric (CTM) give developers good indications of architectural decay. A decrease in TTM indicates that the original design of the architecture has decayed, while a stable TTM indicates that the original design is stable, meaning that the system accommodates new user requirements.
• Seismology is a useful metaphor for change impact analysis: changes propagate through systems like earthquakes, and the impact of a change is greatest around the changed class and gradually decreases with distance from that class. Our approach helps developers identify the impact of a change.
• The immune system is a useful metaphor for design defect detection: the experimental results show that the precision and recall of our approach are comparable to or better than those of existing approaches.
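The change-propagation idea described above can be illustrated with a short sketch. The following Python example is hypothetical (it is not the thesis implementation): it combines a structural dependency graph with co-change counts so that the impact score attenuates with structural distance from the modified class, echoing the earthquake metaphor. The graph, co-change counts and decay factor are made up for illustration.

```python
from collections import deque

def impact_scores(dependencies, cochange, changed, decay=0.5):
    """Rank classes by a seismology-style impact score.

    dependencies: dict mapping a class to the classes it depends on
    cochange:     dict mapping frozenset({a, b}) to how often a and b changed together
    changed:      the class where the "earthquake" originates
    decay:        attenuation per structural hop (illustrative value)
    """
    # Breadth-first search gives the structural distance from the changed class.
    distance = {changed: 0}
    queue = deque([changed])
    while queue:
        cls = queue.popleft()
        for neighbour in dependencies.get(cls, ()):
            if neighbour not in distance:
                distance[neighbour] = distance[cls] + 1
                queue.append(neighbour)

    scores = {}
    for cls, d in distance.items():
        if cls == changed:
            continue
        # Historical coupling: how often this class changed together with the origin.
        history = cochange.get(frozenset((changed, cls)), 0)
        # Impact attenuates with structural distance, amplified by co-change history.
        scores[cls] = (decay ** d) * (1 + history)
    return sorted(scores.items(), key=lambda item: -item[1])

# Toy dependency graph and co-change history.
deps = {"Order": ["Invoice", "Customer"], "Invoice": ["Tax"], "Customer": [], "Tax": []}
hist = {frozenset(("Order", "Invoice")): 7, frozenset(("Order", "Tax")): 1}
print(impact_scores(deps, hist, "Order"))
```

In this toy example, Invoice ranks highest because it is both structurally close to Order and frequently co-changed with it.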

Relevance:

40.00%

Publisher:

Abstract:

Nowadays, one of the most important concerns for many companies is to keep their systems operating without sudden equipment breakdowns. Because of this, new techniques for fault detection and location in mechanical systems subject to dynamic loads have been developed. This paper studies the influence of the decay rate on the design of state observers using LMIs for fault detection in mechanical systems. This influence is analyzed using the performance index proposed by Huh and Stein for the conditioning of a state observer. An example is presented to illustrate the methodology discussed.
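For context, imposing a decay rate on an observer is commonly expressed as a Lyapunov condition in LMI form. The formulation below is a textbook sketch (notation assumed here), not necessarily the exact LMI used in the paper:

```latex
% Observer error dynamics: \dot{e} = (A - LC)\,e.
% Requiring the error to decay at least as fast as e^{-\alpha t} amounts to finding
% P = P^{T} \succ 0 and Y (= PL) such that
A^{T}P + PA - C^{T}Y^{T} - YC + 2\alpha P \prec 0,
\qquad L = P^{-1}Y .
```

Larger values of α force faster error decay but tend to produce larger observer gains, which is where a conditioning index such as Huh and Stein's becomes relevant.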

Relevance:

30.00%

Publisher:

Abstract:

The fire resistance rating of light gauge steel frame (LSF) wall systems is obtained from fire tests based on the standard fire time-temperature curve. However, fire severity has increased in modern buildings due to higher fuel loads resulting from modern furniture and lightweight construction that make use of thermoplastic materials, synthetic foams and fabrics. Some of these materials have high calorific values and increase both the rate of fire growth and the heat release rate, thus raising the fire severity beyond that of the standard fire curve. Further, the standard fire curve does not include the decay phase that is present in natural fires. Despite the increasing use of LSF walls, their behaviour in real building fires is not fully understood. This paper presents the details of a research study aimed at developing realistic design fire curves for use in fire tests of LSF walls. It includes a review of the characteristics of building fires, previously developed fire time-temperature curves, computer models and available parametric equations. The paper highlights that real building fire time-temperature curves depend on the fuel load representing the combustible building contents, the ventilation openings and the thermal properties of wall lining materials, and provides suitable values for many of the required parameters, including fuel loads in residential buildings. Finally, realistic design fire time-temperature curves simulating the fire conditions in modern residential buildings are proposed for the testing of LSF walls.
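Among the parametric equations such studies review, the Eurocode EN 1991-1-2 parametric temperature-time curve is a common reference point. The sketch below implements only its heating phase as an illustration; the decay (cooling) branch, whose slope depends on when the maximum temperature is reached, is omitted, and the compartment parameters in the example are assumed values rather than figures from the paper.

```python
import numpy as np

def parametric_fire_heating(t_hours, opening_factor, thermal_inertia):
    """Gas temperature (deg C) during the heating phase of the EN 1991-1-2
    parametric fire curve.
      t_hours         time array in hours
      opening_factor  O = A_v * sqrt(h_eq) / A_t   [m^0.5]
      thermal_inertia b = sqrt(rho * c * lambda) of the lining [J/m^2 s^0.5 K]
    """
    gamma = ((opening_factor / thermal_inertia) / (0.04 / 1160.0)) ** 2
    t_star = t_hours * gamma  # expanded (fictitious) time
    return 20.0 + 1325.0 * (1.0
                            - 0.324 * np.exp(-0.2 * t_star)
                            - 0.204 * np.exp(-1.7 * t_star)
                            - 0.472 * np.exp(-19.0 * t_star))

# Example: assumed compartment with O = 0.04 m^0.5 and b = 1160 J/m^2 s^0.5 K
# (these reference values give Gamma = 1, i.e. a curve close to the standard fire).
t = np.linspace(0.0, 1.0, 61)   # first hour of the fire
print(parametric_fire_heating(t, 0.04, 1160.0)[-1])
```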

Relevance:

30.00%

Publisher:

Abstract:

One of the most important factors affecting the pointing of precision payloads and devices on space platforms is the vibration generated by the static and dynamic unbalanced forces of rotating equipment placed in the neighborhood of the payload. Generally, such disturbances have low amplitudes and frequencies below 1 kHz, and are termed 'micro-vibrations'. Due to the low damping in the space structure, these vibrations have long decay times and degrade the performance of the payload. This paper addresses the design, modeling and analysis of a low-frequency space frame platform for passive and active attenuation of micro-vibrations. This flexible platform has been designed to act as a mount for devices such as reaction wheels, and consists of four folded continuous beams arranged in three dimensions. Frequency and response analyses have been carried out by varying the number of folds and the thickness of the vertical beams. Results show that lower frequencies can be achieved by increasing the number of folds and by decreasing the thickness of the blades. In addition, active vibration control is studied by incorporating piezoelectric actuators and sensors in the dynamic model. Simulations show that an optimal control strategy is effective for vibration suppression under a wide variety of loading conditions.
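The closing sentence refers to optimal control; a minimal single-mode sketch of that idea is given below. It is a linear-quadratic regulator (LQR) applied to an illustrative one-degree-of-freedom model, not the paper's model of the folded-beam platform or its piezoelectric actuators; the mass, damping, stiffness and weighting values are assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Single-mode model of a lightly damped mount: m*x'' + c*x' + k*x = u.
m, c, k = 1.0, 0.02, (2 * np.pi * 10.0) ** 2   # ~10 Hz mode (illustrative values)
A = np.array([[0.0, 1.0], [-k / m, -c / m]])
B = np.array([[0.0], [1.0 / m]])

# LQR weights: penalise displacement/velocity against actuator effort.
Q = np.diag([1e4, 1.0])
R = np.array([[1e-2]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # optimal state-feedback gain, u = -K x

closed_loop = A - B @ K
print("open-loop eigenvalues:  ", np.linalg.eigvals(A))
print("closed-loop eigenvalues:", np.linalg.eigvals(closed_loop))
```

The closed-loop eigenvalues move further into the left half-plane, i.e. the controller shortens the long decay times that passive damping alone would leave.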

Relevance:

30.00%

Publisher:

Abstract:

This thesis describes the design, construction and performance of a high-pressure xenon gas time projection chamber (TPC) for the study of double beta decay in ^(136)Xe. When operating at 5 atm, the TPC can accommodate 28 moles of 60% enriched ^(136)Xe. The TPC has operated as a detector at Caltech since 1986. It is capable of reconstructing a charged-particle trajectory and can easily distinguish between different kinds of charged particles. A gas purification and xenon recovery system was developed, along with the electronics for the 338 readout channels and a data acquisition system. Currently, the detector is being prepared at the University of Neuchatel for installation in the low-background laboratory situated in the St. Gotthard tunnel, Switzerland. In one year of runtime the detector should be sensitive to a 0ν lifetime of the order of 10^(24) y, which corresponds to a neutrino mass in the range 0.3 to 3.3 eV.

Relevance:

30.00%

Publisher:

Abstract:

Model tests for global design verification of deepwater floating structures cannot be made at reasonable scales. An overview of recent research efforts to tackle this challenge is given first, introducing the concept of line truncation techniques. In such a method the upper sections of each line are modelled in detail, capturing the wave action zone and all coupling effects with the vessel. These terminate in an approximate analytical model that aims to simulate the remainder of the line. The rationale for this is that in deep water the transverse elastic waves of a line are likely to decay before they are reflected at the seabed. The focus of this paper is the verification of this rationale and the ongoing work on ways of producing a truncation model. The transverse dynamics of a mooring line are modelled using the equations of motion of an inextensible taut string submerged in still water, with one end fixed at the bottom and the other assumed to follow the vessel response, which can be harmonic or random. Nonlinear hydrodynamic damping is included; bending and VIV effects are neglected. A dimensional analysis, supported by exact benchmark numerical solutions, has shown that it is possible to produce a universal curve for the decay of transverse vibrations along the line, suitable for any kind of line with any top motion. This has a significant engineering benefit, allowing a rapid assessment of line dynamics; it is very useful in deciding whether a truncated line model is appropriate and, if so, at which point truncation might be applied. Initial efforts in developing a truncated model show that a linearized numerical solution in the frequency domain matches the exact benchmark very closely. Copyright © 2011 by ASME.
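A minimal statement of the string model described above, under the stated assumptions (inextensible taut string in still water, quadratic Morison-type drag, bending and VIV neglected; the symbols below are assumed here rather than taken from the paper), is:

```latex
% y(s,t): transverse displacement along the arc length s; m: structural mass per
% unit length; m_a: added mass; T: tension; \rho_w, C_D, D: water density, drag
% coefficient and line diameter.
(m + m_a)\,\frac{\partial^{2} y}{\partial t^{2}}
  + \tfrac{1}{2}\,\rho_w C_D D \left|\frac{\partial y}{\partial t}\right|\frac{\partial y}{\partial t}
  - T\,\frac{\partial^{2} y}{\partial s^{2}} = 0,
\qquad y(0,t) = 0,\quad y(L,t) = y_{\mathrm{top}}(t).
```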

Relevance:

30.00%

Publisher:

Abstract:

This paper investigates the effects of design parameters, such as cladding and coolant material choices, and operational phenomena, such as creep and fission product decay heat, on the tolerance of Accelerator Driven Subcritical Reactor (ADSR) fuel pin cladding to beam interruptions. This work aims to provide a greater understanding of the integration between accelerator and nuclear reactor technologies in ADSRs. The results show that an upper limit on cladding operating temperature of 550 °C is appropriate, as higher values of temperature tend to accelerate creep, leading to cladding failure much sooner than anticipated. The effect of fission product decay heat is to reduce significantly the maximum stress developed in the cladding during a beam-trip-induced transient. The potential impact of irradiation damage and the effects of the liquid metal coolant environment on the cladding are discussed. © 2013 Elsevier Ltd. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

In this work, we evaluated the decay heat power of advanced, fast-spectrum, lead- and molten-salt-cooled reactors with flexible conversion ratios. The decay heat power was calculated using the BGCore computer code, which explicitly tracks over 1700 isotopes in the fuel throughout its burnup and subsequent decay. In the first stage, the capability of the BGCore code to accurately predict the decay heat power was verified by benchmarking a calculation for typical UO2 fuel in a Pressurized Water Reactor environment against the ANSI/ANS-5.1-2005 standard, "Decay Heat Power in Light Water Reactors". Very good agreement (within 5%) between the two methods was obtained. Once the BGCore calculation capabilities were verified, we calculated the decay power for fast reactors with different coolants and conversion ratios, for which no standard procedure is currently available. Notable differences were observed in the decay power of the advanced reactors as compared with the conventional UO2 LWR. The importance of these differences was demonstrated by simulating a Station Blackout transient with the RELAP5 computer code for a lead-cooled fast reactor. The simulation was performed twice: using the code-default ANS-79 decay heat curve and using the curve calculated specifically for the studied core by the BGCore code. The differences in decay heat power resulted in the maximum cladding temperature limit being exceeded by ∼100 °C in the latter case, while in the transient simulation with the ANS-79 decay heat curve all safety limits were satisfied. The results of this study show that the design of new reactor safety systems must be based on decay power curves specific to each individual case in order to assure the desired performance of these systems. © 2009 Elsevier B.V. All rights reserved.
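For orientation only, a classical rule of thumb for decay heat after shutdown is the Way-Wigner approximation below. It is a rough fit for LWR-like fuel, quoted here just to illustrate the quantity being calculated; it is not the ANS-79/ANS-5.1 curve or the isotope-by-isotope BGCore result discussed in the abstract.

```latex
% Fraction of the operating power P_0 released as decay heat at time t (seconds)
% after shutdown, for a reactor operated at constant power for T seconds:
\frac{P(t)}{P_0} \;\approx\; 0.066\left[\,t^{-0.2} - (t + T)^{-0.2}\,\right].
```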

Relevance:

30.00%

Publisher:

Abstract:

The growing interest in innovative reactors and advanced fuel cycle designs requires more accurate prediction of various transuranic actinide concentrations during irradiation or following discharge because of their effect on reactivity or spent-fuel emissions, such as gamma and neutron activity and decay heat. In this respect, many of the important actinides originate from the 241Am(n,γ) reaction, which leads to either the ground or the metastable state of 242Am. The branching ratio for this reaction depends on the incident neutron energy and has very large uncertainty in the current evaluated nuclear data files. This study examines the effect of accounting for the energy dependence of the 241Am(n,γ) reaction branching ratio calculated from different evaluated data files for different reactor and fuel types on the reactivity and concentrations of some important actinides. The results of the study confirm that the uncertainty in knowing the 241Am(n,γ) reaction branching ratio has a negligible effect on the characteristics of conventional light water reactor fuel. However, in advanced reactors with large loadings of actinides in general, and 241Am in particular, the branching ratio data calculated from the different data files may lead to significant differences in the prediction of the fuel criticality and isotopic composition. Moreover, it was found that neutron energy spectrum weighting of the branching ratio in each analyzed case is particularly important and may result in up to a factor of 2 difference in the branching ratio value. Currently, most of the neutronic codes have a single branching ratio value in their data libraries, which is sometimes difficult or impossible to update in accordance with the neutron spectrum shape for the analyzed system.
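The spectrum weighting discussed here can be stated compactly. The expression below is the standard reaction-rate-weighted average (the notation is assumed for this note): the effective branching ratio to the metastable state depends on both the energy-dependent branching ratio and the product of the capture cross-section and the neutron flux spectrum, which is why thermal and fast systems can differ by a large factor.

```latex
% BR_m(E): energy-dependent branching ratio to ^{242m}Am; \sigma_{\gamma}(E): capture
% cross-section of ^{241}Am; \phi(E): neutron flux spectrum of the analysed system.
\overline{BR}_{m} \;=\;
\frac{\displaystyle\int_{0}^{\infty} BR_{m}(E)\,\sigma_{\gamma}(E)\,\phi(E)\,\mathrm{d}E}
     {\displaystyle\int_{0}^{\infty} \sigma_{\gamma}(E)\,\phi(E)\,\mathrm{d}E}.
```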

Relevance:

30.00%

Publisher:

Abstract:

A 7 Tesla superconducting magnet with a clear warm bore of 156 mm in diameter has been developed for the Lanzhou Penning Trap at the Institute of Modern Physics for high-precision mass measurement. The magnet comprises 9 solenoid coils and operates in persistent mode with a total stored energy of 2.3 MJ. Due to the considerable amount of energy stored during persistent-mode operation, the quench protection system is very important when designing and operating the magnet. A passive protection system based on a subdivided scheme is adopted to protect the superconducting magnet from damage caused by quenching. Cold diodes and resistors are placed across the subdivisions to reduce both the voltages and the temperature hot spots. Computational simulations have been carried out in Opera-quench. The designed quench protection circuit and the finite element model for the quench simulations are described; the time evolution of the temperature, voltage and current decay during the quench process is also analysed.
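As a rough illustration of the current decay being simulated, the sketch below uses a single lumped L/R discharge. The inductance, operating current and effective resistance are assumed values chosen only so that the stored energy matches the quoted 2.3 MJ; the real magnet distributes the current among nine subdivided coil sections with cold diodes, which this single-time-constant model ignores.

```python
import numpy as np

# Illustrative lumped model: the magnet inductance L discharges into a
# subdivision resistance R after a quench (diode drops neglected).
L = 47.0     # inductance [H]   (assumed)
I0 = 313.0   # operating current [A]   (assumed)
R = 0.5      # effective subdivision resistance [ohm]   (assumed)

tau = L / R                        # decay time constant
t = np.linspace(0.0, 5.0 * tau, 500)
current = I0 * np.exp(-t / tau)    # I(t) = I0 * exp(-t / tau)

energy = 0.5 * L * I0**2           # stored energy, ~2.3 MJ for these values
print(f"tau = {tau:.1f} s, stored energy = {energy / 1e6:.2f} MJ")
```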

Relevance:

30.00%

Publisher:

Abstract:

This paper details the implementation and operational performance of a minimum-power 2.45-GHz pulse receiver and a companion on-off keyed transmitter for use in a semi-active duplex RF biomedical transponder. A 50-Ohm microstrip stub-matched zero-bias diode detector forms the heart of a body-worn receiver that has a CMOS baseband amplifier consuming 20 microamps from +3 V and achieves a tangential sensitivity of -53 dBm. The base transmitter generates 0.5 W of peak RF output power into 50 Ohms. Both linear and right-hand circularly polarized Tx-Rx antenna sets were employed in system reliability trials carried out in a hospital Coronary Care Unit. For transmitting antenna heights between 0.3 and 2.2 m above floor level, transponder interrogations were 95% reliable within the 67 m² area of the ward, falling to an average of 46% in the surrounding rooms and corridors. Overall, the circularly polarized antenna set gave the higher reliability and the lower propagation power decay index.
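The propagation power decay index reported in the trials is the exponent n of the usual log-distance path-loss model (free space gives n = 2; cluttered indoor environments typically give larger values). With a reference distance d_0 this reads:

```latex
P_r(d)\,[\mathrm{dBm}] \;=\; P_r(d_0)\,[\mathrm{dBm}] \;-\; 10\,n\,\log_{10}\!\left(\frac{d}{d_0}\right).
```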

Relevance:

30.00%

Publisher:

Abstract:

This thesis is entitled "Phenylethynylarene-Based Donor-Acceptor Systems: Design, Synthesis and Photophysical Studies". A strategy for the design of donor-acceptor dyads, wherein decay of the charge-separated (CS) state to low-lying local triplet levels could possibly be prevented, is proposed. In order to examine this strategy, a linked donor-acceptor dyad BPEP-PT, with bis(phenylethynyl)pyrene (BPEP) as the light absorber and acceptor and phenothiazine (PT) as the donor, was designed and photoinduced electron transfer in the dyad was investigated. The absorption spectrum of the dyad can be obtained by adding the contributions of the BPEP and PT moieties, indicating that the constituents do not interact in the ground state. Fluorescence of the BPEP moiety was efficiently quenched by the PT donor, and this was attributed to electron transfer from PT to BPEP. Picosecond transient absorption studies suggested formation of a charge-separated state directly from the singlet excited state of BPEP. Nanosecond flash photolysis experiments gave long-lived transient absorptions assignable to the PT radical cation and the BPEP radical anion. These assignments were confirmed by oxygen quenching studies and secondary electron transfer experiments. Based on the available data, an energy level diagram for BPEP-PT was constructed. The long lifetime of the charge-separated state was attributed to inverted-region effects. The CS state did not decay to the low-lying BPEP triplet, indicating the success of our strategy.
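Energy level diagrams of this kind are typically constructed from electrochemical and spectroscopic data using the Rehm-Weller estimate of the charge-separation free energy. The relation below is quoted for context in its standard textbook form, with symbols assumed here; it is not taken from the thesis.

```latex
% E_{ox}(D), E_{red}(A): donor oxidation and acceptor reduction potentials;
% E_{00}: singlet excitation energy of the absorber; d_{DA}: donor-acceptor distance;
% \varepsilon_s: solvent dielectric constant.
\Delta G_{\mathrm{CS}} \;=\; e\,\bigl[E_{\mathrm{ox}}(\mathrm{D}) - E_{\mathrm{red}}(\mathrm{A})\bigr]
\;-\; E_{00} \;-\; \frac{e^{2}}{4\pi\varepsilon_{0}\varepsilon_{s}\,d_{\mathrm{DA}}}.
```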

Relevance:

30.00%

Publisher:

Abstract:

1. Wildlife managers often require estimates of abundance. Direct methods of estimation are often impractical, especially in closed-forest environments, so indirect methods such as dung or nest surveys are increasingly popular.
2. Dung and nest surveys typically have three elements: surveys to estimate the abundance of the dung or nests; experiments to estimate the production (defecation or nest construction) rate; and experiments to estimate the decay or disappearance rate. The last of these is usually the most problematic, and was the subject of this study.
3. The design of experiments to allow robust estimation of mean time to decay was addressed. In most studies to date, dung or nests have been monitored until they disappear. Instead, we advocate that fresh dung or nests are located, with a single follow-up visit to establish whether the dung or nest is still present or has decayed.
4. Logistic regression was used to estimate the probability of decay as a function of time, and possibly of other covariates. Mean time to decay was estimated from this function.
5. Synthesis and applications. Effective management of mammal populations usually requires reliable abundance estimates. The difficulty in estimating the abundance of mammals in forest environments has increasingly led to the use of indirect survey methods, in which the abundance of sign, usually dung (e.g. deer, antelope and elephants) or nests (e.g. apes), is estimated. Given estimated rates of sign production and decay, sign abundance estimates can be converted to estimates of animal abundance. Decay rates typically vary according to season, weather, habitat, diet and many other factors, making reliable estimation of the mean time to decay of signs present at the time of the survey problematic. We emphasize the need for retrospective rather than prospective rates, propose a strategy for survey design, and provide analysis methods for estimating retrospective rates.
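Point 4 above can be illustrated with a short sketch. The example below is hypothetical (made-up revisit data, scikit-learn assumed, and only sign age as a covariate): it fits a logistic model for the probability that a sign has decayed by the follow-up visit, then integrates the fitted probability of persistence over time, which is one natural way of obtaining the mean time to decay from such a model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical revisit data: age of each dung pile (days) at the single follow-up
# visit, and whether it had decayed (1) or was still present (0) by then.
age_days = np.array([5, 12, 20, 28, 35, 41, 50, 58, 66, 75, 84, 92], dtype=float)
decayed = np.array([0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1])

# Probability of decay as a function of time (other covariates could be added).
model = LogisticRegression().fit(age_days.reshape(-1, 1), decayed)

# Mean time to decay = integral over time of the probability of still being present.
t = np.linspace(0.0, 365.0, 3651)
p_present = model.predict_proba(t.reshape(-1, 1))[:, 0]   # column 0 = "not decayed"
mean_time_to_decay = np.sum(p_present) * (t[1] - t[0])     # rectangle-rule integration
print(f"estimated mean time to decay: {mean_time_to_decay:.1f} days")
```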

Relevance:

30.00%

Publisher:

Abstract:

As laid out in its Convention, ECMWF has eight different objectives. One of the major objectives is the regular preparation of the data necessary for medium-range weather forecasts. The interpretation of this item is that the Centre will make forecasts once a day for a prediction period of up to 10 days. It is also evident that the Centre should not carry out any real weather forecasting but merely disseminate to the Member Countries the basic forecasting parameters with an appropriate resolution in space and time. It follows that the forecasting system at the Centre must, from the operational point of view, be functionally integrated with the Weather Services of the Member Countries, and the operational interface between ECMWF and the Member Countries must be properly specified in order to obtain reasonable flexibility for both systems. The problem of making numerical atmospheric predictions for periods beyond 4-5 days differs substantially from 2-3 day forecasting. From a physical point of view, a medium-range forecast can be defined as a forecast for which the initial disturbances have lost their individual structure. However, we are still interested in predicting the atmosphere in a similar way as in short-range forecasting, which means that the model must be able to predict the dissipation and decay of the initial phenomena and the creation of new ones. With this definition, medium-range forecasting is indeed very difficult and generally regarded as more difficult than extended forecasting, where usually only time and space mean values are predicted. The predictability of atmospheric flow has been extensively studied in recent years in theoretical investigations and by numerical experiments. As has been discussed elsewhere in this publication (see pp 338 and 431), a 10-day forecast is apparently on the fringe of predictability.