986 results for Equivalent-circuit model


Relevance:

30.00%

Publisher:

Abstract:

J/psi photoproduction is studied in the framework of the analytic S-matrix theory. The differential and integrated elastic cross sections for J/psi photoproduction are calculated from a dual amplitude with Mandelstam analyticity. It is argued that, at low energies, the background, which is the low-energy equivalent of the high-energy diffraction, replaces the Pomeron exchange. The onset of the high-energy Pomeron dominance is estimated from the fits to the data.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Walk-in centres may improve access to healthcare for some patients, due to their convenient location and extensive opening hours, with no need for an appointment. Herein, we describe and assess a new model of walk-in centre, characterised by care provided by residents under the supervision of experienced family doctors. The main aim of the study was to assess patients' satisfaction with the care they received from residents and with the supervision provided by family doctors. The secondary aim was to describe walk-in patients' demographic characteristics and to identify potential associations with satisfaction. METHODS: The study was conducted in the walk-in centre of Lausanne. Patients who consulted between 11th and 31st April were automatically included and received a questionnaire in French. We used a five-point Likert scale, ranging from "not at all satisfied" to "very satisfied", scored from 1 to 5. We focused on satisfaction with residents' care and with the supervision by a family doctor. The former was divided into three categories: "Skills", "Treatment" and "Behaviour". A mean satisfaction score was calculated for each category, and a multivariable logistic model was applied in order to identify associations with patients' demographics. RESULTS: The overall response rate was 47% (184/395). Walk-in patients were more likely to be women (62%), young (median age 31), and highly educated (40% with a university degree or equivalent). Patients were "very satisfied" with residents' care, with a median satisfaction score between 4.5 and 5 for each category. Over 90% of patients were "satisfied" or "very satisfied" that a family doctor was involved in the consultation. Age showed the greatest association with satisfaction. CONCLUSION: Patients were highly satisfied with the care provided by residents and with the involvement of a family doctor in the consultation. Older age showed the greatest positive association with satisfaction. The high level of satisfaction reported by walk-in patients supports this new model of walk-in centre.
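The analysis described above (Likert answers scored 1 to 5, category means, and a multivariable logistic model for demographic associations) can be illustrated with a short sketch; the synthetic data, variable coding and dichotomization threshold below are hypothetical and not taken from the study.

```python
# Illustrative sketch only: synthetic Likert data, mean score per patient,
# and a logistic regression of satisfaction on demographic covariates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 184                                          # number of respondents reported above
age = rng.integers(18, 80, n)                    # hypothetical ages
female = rng.integers(0, 2, n)                   # hypothetical sex indicator
likert = rng.integers(1, 6, size=(n, 3))         # "Skills", "Treatment", "Behaviour" answers, 1-5

score = likert.mean(axis=1)                      # mean satisfaction score per patient
satisfied = (score >= 4).astype(int)             # "satisfied"/"very satisfied" as a binary outcome

X = np.column_stack([age, female])
model = LogisticRegression().fit(X, satisfied)
print(model.coef_)                               # association of age and sex with satisfaction
```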

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE: In the radiopharmaceutical therapy approach to the fight against cancer, in particular when it comes to translating laboratory results to the clinical setting, modeling has served as an invaluable tool for guidance and for understanding the processes operating at the cellular level and how these relate to macroscopic observables. Tumor control probability (TCP) is the dosimetric end point quantity of choice which relates to experimental and clinical data: it requires knowledge of individual cellular absorbed doses since it depends on the assessment of the treatment's ability to kill each and every cell. Macroscopic tumors, seen in both clinical and experimental studies, contain too many cells to be modeled individually in Monte Carlo simulation; yet, in particular for low ratios of decays to cells, a cell-based model that does not smooth away statistical considerations associated with low activity is a necessity. The authors present here an adaptation of the simple sphere-based model from which cellular level dosimetry for macroscopic tumors and their end point quantities, such as TCP, may be extrapolated more reliably. METHODS: Ten homogeneous spheres representing tumors of different sizes were constructed in GEANT4. The radionuclide 131I was randomly allowed to decay for each model size and for seven different ratios of number of decays to number of cells, N(r): 1000, 500, 200, 100, 50, 20, and 10 decays per cell. The deposited energy was collected in radial bins and divided by the bin mass to obtain the average bin absorbed dose. To simulate a cellular model, the number of cells present in each bin was calculated and an absorbed dose attributed to each cell equal to the bin average absorbed dose with a randomly determined adjustment based on a Gaussian probability distribution with a width equal to the statistical uncertainty consistent with the ratio of decays to cells, i.e., equal to N(r)^(-1/2). From dose volume histograms the surviving fraction of cells, equivalent uniform dose (EUD), and TCP for the different scenarios were calculated. Comparably sized spherical models containing individual spherical cells (15 microm diameter) in hexagonal lattices were constructed, and Monte Carlo simulations were executed for all the same previous scenarios. The dosimetric quantities were calculated and compared to the adjusted simple sphere model results. The model was then applied to the Bortezomib-induced enzyme-targeted radiotherapy (BETR) strategy of targeting Epstein-Barr virus (EBV)-expressing cancers. RESULTS: The TCP values were comparable to within 2% between the adjusted simple sphere and full cellular models. Additionally, models were generated for a nonuniform distribution of activity, and the results of the adjusted spherical and cellular models showed similar comparability. The TCP values for the macroscopic tumor models were consistent with the experimental observations for BETR-treated 1 g EBV-expressing lymphoma tumors in mice. CONCLUSIONS: The adjusted spherical model presented here provides more accurate TCP values than simple spheres, on par with full cellular Monte Carlo simulations, while maintaining the simplicity of the simple sphere model. This model provides a basis for complementing and understanding laboratory and clinical results pertaining to radiopharmaceutical therapy.
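The per-cell dose adjustment described in the methods (bin-average dose plus a Gaussian fluctuation of relative width N(r)^(-1/2)) can be sketched as follows; this is an illustration of the stated sampling rule, not the authors' GEANT4-based code, and clipping the dose at zero is an added assumption.

```python
# Sketch of the statistical per-cell dose adjustment for one radial bin:
# each cell receives the bin-average dose plus a Gaussian fluctuation whose
# relative width is N_r**(-1/2), N_r being the ratio of decays to cells.
import numpy as np

def cell_doses(bin_mean_dose, n_cells_in_bin, decays_per_cell, rng=None):
    """Draw per-cell absorbed doses for one radial bin."""
    if rng is None:
        rng = np.random.default_rng()
    relative_sigma = decays_per_cell ** -0.5            # statistical uncertainty
    doses = rng.normal(bin_mean_dose,
                       relative_sigma * bin_mean_dose,
                       size=n_cells_in_bin)
    return np.clip(doses, 0.0, None)                    # absorbed dose cannot be negative

# Example: a bin with a 2 Gy mean dose, 1000 cells, 10 decays per cell.
print(cell_doses(2.0, 1000, 10).mean())
```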

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: Standard cardiopulmonary bypass (CPB) circuits, with their large surface area and volume, contribute to postoperative systemic inflammatory reaction and hemodilution. In order to minimize these problems, a new approach has been developed, resulting in a single disposable, compact arterio-venous loop with integral kinetic-assist pumping, oxygenation, air removal, and gross filtration capabilities (CardioVention Inc., Santa Clara, CA, USA). The impact of this system on gas exchange capacity, blood elements and hemolysis is compared to that of a conventional circuit in a model of prolonged perfusion. METHODS: Twelve calves (mean body weight: 72.2+/-3.7 kg) were placed on cardiopulmonary bypass for 6 h with a flow of 5 l/min, and randomly assigned to the CardioVention system (n=6) or a standard CPB circuit (n=6). A standard battery of blood samples was taken before and throughout bypass. Analysis of variance was used for comparison. RESULTS: The hematocrit remained stable throughout the experiment in the CardioVention group, whereas it dropped in the standard group in the early phase of perfusion. When normalized for prebypass values, the two profiles differed significantly (P<0.01). Both O2 and CO2 transfers were significantly improved in the CardioVention group (P=0.04 and P<0.001, respectively). There was a slightly higher pressure drop in the CardioVention group, but no single value exceeded 112 mmHg. No hemolysis could be detected in either group, with all free plasma Hb values below 15 mg/l. The thrombocyte count, when corrected by hematocrit and normalized by prebypass values, exhibited a larger drop in the standard group (P=0.03). CONCLUSION: The CardioVention system, with its limited priming volume and exposed foreign surface area, improves gas exchange, probably because of the absence of detectable hemodilution, and appears to limit the decrease in the thrombocyte count, which may be ascribed to the reduced surface area. Despite the volume and surface constraints, no hemolysis could be detected throughout the 6 h full-flow perfusion period.

Relevance:

30.00%

Publisher:

Abstract:

There is increasing evidence to suggest that the presence of mesoscopic heterogeneities constitutes an important seismic attenuation mechanism in porous rocks. As a consequence, centimetre-scale perturbations of the rock physical properties should be taken into account for seismic modelling whenever detailed and accurate responses of specific target structures are desired, which is, however, computationally prohibitive. A convenient way to circumvent this problem is to use an upscaling procedure to replace each of the heterogeneous porous media composing the geological model by a corresponding equivalent visco-elastic solid and to solve the visco-elastic equations of motion for the inferred equivalent model. While the overall qualitative validity of this procedure is well established, there are as yet no quantitative analyses regarding the equivalence of the seismograms resulting from the original poro-elastic and the corresponding upscaled visco-elastic models. To address this issue, we compare poro-elastic and visco-elastic solutions for a range of marine-type models of increasing complexity. We find that despite the identical dispersion and attenuation behaviour of the heterogeneous poro-elastic and the equivalent visco-elastic media, the seismograms may differ substantially due to differing boundary conditions, since additional options exist for the poro-elastic case. In particular, we observe that at the fluid/porous-solid interface, the poro- and visco-elastic seismograms agree for closed-pore boundary conditions, but differ significantly for open-pore boundary conditions. This is an important result with potentially far-reaching implications for wave-equation-based algorithms in exploration geophysics involving fluid/porous-solid interfaces, such as, for example, wavefield decomposition.

Relevance:

30.00%

Publisher:

Abstract:

Continuous positive airway pressure, aimed at preventing pulmonary atelectasis, has been used for decades to reduce lung injury in critically ill patients. In neonatal practice, it is increasingly used worldwide as a primary form of respiratory support due to its low cost and because it reduces the need for endotracheal intubation and conventional mechanical ventilation. We studied the anesthetized in vivo rat and determined the optimal circuit design for delivery of continuous positive airway pressure. We investigated the effects of continuous positive airway pressure following lipopolysaccharide administration in the anesthetized rat. Whereas neither continuous positive airway pressure nor lipopolysaccharide alone caused lung injury, continuous positive airway pressure applied following intravenous lipopolysaccharide resulted in increased microvascular permeability, elevated cytokine protein and mRNA production, and impaired static compliance. A dose-response relationship was demonstrated whereby higher levels of continuous positive airway pressure (up to 6 cmH(2)O) caused greater lung injury. Lung injury was attenuated by pretreatment with dexamethasone. These data demonstrate that despite optimal circuit design, continuous positive airway pressure causes significant lung injury (proportional to the airway pressure) in the setting of circulating lipopolysaccharide. Although we would currently avoid direct extrapolation of these findings to clinical practice, we believe that in the context of increasing clinical use, these data are grounds for concern and warrant further investigation.

Relevance:

30.00%

Publisher:

Abstract:

The present study focuses on two effects of the presence of a noncondensable gas on the thermal-hydraulic behavior of the coolant in the primary circuit of a nuclear reactor of VVER-440 geometry in abnormal situations. First, steam condensation in the presence of air was studied in the horizontal tubes of the steam generator (SG) of the PACTEL test facility. The French thermal-hydraulic CATHARE code was used to study the heat transfer between the primary and secondary sides in conditions derived from preliminary experiments performed by VTT using PACTEL. In natural circulation and single-phase vapor conditions, the injection at the entrance of the hot collector of a volume of air equivalent to the total volume of the primary side of the SG did not stop the heat transfer from the primary to the secondary side. The calculated results indicate that air accumulates in the second half-length of the tubes (from the mid-length of the tubes to the cold collector) in all the tubes of the steam generator. The hot collector remained full of steam during the transient. Secondly, the potential release of the nitrogen gas dissolved in the water of the accumulators of the emergency core coolant system of the Loviisa nuclear power plant (NPP) was investigated. The author implemented a model of the dissolution and release of nitrogen gas in the CATHARE code; the model itself was created by the CATHARE developers. In collaboration with VTT, an analytical experiment was performed with some components of PACTEL to determine, in particular, the value of the release time constant of the nitrogen gas in depressurization conditions representative of the small and intermediate break transients postulated for the Loviisa NPP. Such transients, with simplified operating procedures, were calculated using the modified CATHARE code for various values of the release time constant used in the dissolution and release model. For the small breaks, nitrogen gas is trapped in the collectors of the SGs in rather large proportions; there, the levels oscillate until the actuation of the low-pressure injection system (LPIS) pumps that refill the primary circuit. In the case of the intermediate breaks, most of the nitrogen gas is expelled at the break and almost none is trapped in the SGs. In comparison with the cases calculated without taking the release of nitrogen gas into account, the start of the LPIS is delayed by between 1 and 1.75 h. Applicability of the obtained results to real safety conditions must take into account the real operating procedures used in the nuclear power plant.
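The role of the release time constant can be illustrated with a minimal first-order release model; the sketch below only mimics the general idea of a dissolved-gas inventory relaxing toward equilibrium with a time constant tau, and is not the CATHARE implementation.

```python
# Minimal sketch (not the CATHARE model): first-order release of dissolved
# nitrogen toward the equilibrium dissolved mass at the current pressure,
# governed by a single release time constant tau.
import numpy as np

def nitrogen_release(n_dissolved0, n_equilibrium, tau, t_end, dt=0.1):
    """Integrate dn/dt = -(n - n_eq(t)) / tau with explicit Euler steps."""
    t = np.arange(0.0, t_end, dt)
    n = np.empty_like(t)
    n[0] = n_dissolved0
    for k in range(1, len(t)):
        n_eq = n_equilibrium(t[k])                   # equilibrium dissolved mass at time t
        n[k] = n[k - 1] - dt * (n[k - 1] - n_eq) / tau
    return t, n

# Example: a depressurization that halves the equilibrium dissolved mass after 100 s.
t, n = nitrogen_release(1.0, lambda t: 1.0 if t < 100.0 else 0.5, tau=50.0, t_end=600.0)
print(n[-1])
```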

Relevance:

30.00%

Publisher:

Abstract:

In order to take into account the radius-dependent, non-uniform structure of the teeth and the other electrical and magnetic parts of the machine, the calculation of an axial flux permanent magnet machine is conventionally done by means of 3D FEM methods. This calculation procedure, however, requires a lot of time and computer resources. This study shows that analytical methods can also be applied to perform the calculation successfully. The analytical calculation can be summarized in the following steps: first the magnet is divided into slices, then the calculation is carried out for each section individually, and finally the sectional results are combined into the final result. It is obvious that this method can save a lot of design and calculation time. The calculation program is designed to model the magnetic and electrical circuits of surface-mounted axial flux permanent magnet synchronous machines in such a way that possible magnetic saturation of the iron parts is taken into account. The result of the calculation is the torque of the motor, including its vibrations. The motor geometry, the materials, and either the torque or the pole angle are defined, and the motor can be fed with three-phase currents of arbitrary shape and amplitude. There are no limits on the size or the number of pole pairs, nor on many other factors. The calculation steps and the number of magnet sections are selectable, but the calculation time depends strongly on them. The results are compared with measurements of real prototypes. The permanent magnet creates part of the flux in the magnetic circuit. The shape and amplitude of the flux density in the air gap depend on the geometry and material of the magnetic circuit, on the length of the air gap, and on the remanence flux density of the magnet. Slotting is taken into account by using the Carter factor in the slot opening area. The calculation is simple and fast if the magnet is square-shaped and has no skew relative to the stator slots. With a more complicated magnet shape the calculation has to be done in several sections; clearly, the larger the number of sections, the more accurate the result. In a radial flux motor all sections of the magnets create force at the same radius. In an axial flux motor, each radial section creates force at a different radius, and the torque is the sum of these contributions. The magnetic circuit of the motor, consisting of the stator iron, rotor iron, air gap, magnet and slot, is modelled with a reluctance network that accounts for the saturation of the iron. This means that several iterations, in which the permeability is updated, have to be performed to obtain the final results. The motor torque is calculated using the instantaneous flux linkage and the stator currents. The flux linkage is the part of the flux, created by the permanent magnets and the stator currents, that passes through the coils in the stator teeth. The angle between this flux and the phase currents defines the torque created by the magnetic circuit. Because of the winding structure of the stator, and in order to limit the leakage flux, the slot openings of the stator are normally not made of ferromagnetic material, although in some cases semimagnetic slot wedges are used. At the slot opening faces the flux enters the iron almost normally (tangentially with respect to the rotor flux), creating tangential forces on the rotor. This phenomenon is called cogging.

The flux in the slot opening area differs between the two sides of the opening and between different slot openings, so these forces do not compensate each other. In the calculation it is assumed that the flux entering the left side of the opening is the component lying to the left of the geometrical centre of the slot. This torque component, together with the torque component calculated from the Lorentz force, makes up the total torque of the motor. It is easy to see that if all the magnet edges, where the derivative of the magnet flux density is at its highest, enter the slot openings at the same time, the result is a considerable cogging torque. To reduce the cogging torque the magnet edges can be shaped so that they are not parallel to the stator slots, which is the common way to solve the problem. In doing so, the edge may be spread along the whole slot pitch, and thus the high-derivative component is also spread evenly over the rotation. Besides shaping the magnets, they may also be placed somewhat asymmetrically on the rotor surface. The asymmetric distribution can be realized in many different ways: all the magnets may have a different deflection from the symmetrical centre point, or they can, for example, be shifted in pairs. Some factors limit the deflection. The first is that the magnets cannot overlap; the magnet shape and its relative width compared with the pole define the deflection in this case. The other factor is that shifting the poles limits the maximum torque of the motor: if the edges of adjacent magnets are very close to each other, the leakage flux from one pole to the other increases, thereby reducing the air-gap magnetization. The asymmetric model needs some assumptions and simplifications in order to limit the size of the model and the calculation time. The reluctance network is built for a symmetric distribution. If the magnets are distributed asymmetrically, the flux in the different pole pairs will not be exactly the same. Therefore, the assumption that the flux flows from the edges of the model to the next pole pairs (in the calculation model from one edge to the other) is not correct. If this were to be taken into account in multi-pole-pair machines, all the poles, in other words the whole machine, would have to be modelled with the reluctance network. The error resulting from this assumption is, nevertheless, negligible.
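The sectional torque summation for an axial flux machine, where each radial section contributes force at its own radius, can be sketched as follows; the uniform tangential stress and the geometry are hypothetical, and the actual thesis program uses a saturable reluctance network rather than a prescribed stress.

```python
# Minimal sketch of summing torque over radial sections of an axial flux
# machine: each annular section contributes force at its own mean radius,
# T = sum_i sigma_i * A_i * r_i (illustrative only).
import numpy as np

def axial_flux_torque(r_inner, r_outer, n_sections, tangential_stress):
    """Approximate the torque by summing over annular radial sections."""
    edges = np.linspace(r_inner, r_outer, n_sections + 1)
    torque = 0.0
    for r0, r1 in zip(edges[:-1], edges[1:]):
        r_mid = 0.5 * (r0 + r1)                 # mean radius of the section
        area = np.pi * (r1**2 - r0**2)          # annular area of the section
        torque += tangential_stress(r_mid) * area * r_mid
    return torque

# Example: a hypothetical uniform tangential stress of 20 kN/m^2 on an
# active annulus between 50 mm and 100 mm radius.
print(axial_flux_torque(0.05, 0.10, n_sections=10, tangential_stress=lambda r: 20e3))
```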

Relevance:

30.00%

Publisher:

Abstract:

Short-term synaptic depression (STD) is a form of synaptic plasticity that has a large impact on network computations. Experimental results suggest that STD is modulated by cortical activity, decreasing with activity in the network and increasing during silent states. Here, we explored different activity-modulation protocols in a biophysical network model, which displayed less STD when the network was active than when it was silent, in agreement with experimental results. Furthermore, we studied how trains of synaptic potentials decayed less during periods of activity (UP states) than during silent periods (DOWN states), providing new experimental predictions. We next tackled the inverse question of how modifying STD parameters affects the emergent activity of the network, a question that is difficult to answer experimentally. We found that synaptic depression of cortical connections played a critical role in determining the regime of rhythmic cortical activity. While low STD resulted in an emergent rhythmic activity with short UP states and long DOWN states, increasing STD resulted in longer and more frequent UP states interleaved with short silent periods. A still higher synaptic depression set the network into a non-oscillatory firing regime where DOWN states no longer occurred. The speed of propagation of UP states along the network was not found to be modulated by STD during the oscillatory regime; it remained relatively stable over a range of STD values. Overall, we found that the mutual interactions between synaptic depression and ongoing network activity are critical in determining the mechanisms that modulate cortical emergent patterns.
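A minimal sketch of a resource-based depressing synapse of the standard Tsodyks-Markram type illustrates the kind of STD dynamics discussed above; the parameters are illustrative and this is not necessarily the exact formulation used in the biophysical network model.

```python
# Sketch of a depressing synapse: each presynaptic spike releases a fraction U
# of the available resource x, which then recovers with time constant tau_rec.
import numpy as np

def depressing_synapse(spike_times, U=0.5, tau_rec=0.4):
    """Return the relative amplitude of the synaptic response to each spike."""
    x = 1.0                       # fraction of available synaptic resources
    last_spike = None
    amplitudes = []
    for t in spike_times:
        if last_spike is not None:
            dt = t - last_spike
            x = 1.0 - (1.0 - x) * np.exp(-dt / tau_rec)   # recovery between spikes
        amplitudes.append(U * x)  # response proportional to the released resource
        x -= U * x                # depression: part of the resource is consumed
        last_spike = t
    return amplitudes

# Example: a 20 Hz train produces progressively smaller responses (depression).
print(depressing_synapse(np.arange(0.0, 0.5, 0.05)))
```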

Relevance:

30.00%

Publisher:

Abstract:

Snow cover is an important control in mountain environments, and a shift of the snow-free period triggered by climate warming can strongly impact ecosystem dynamics. Changing snow patterns can have severe effects on alpine plant distribution and diversity. It thus becomes urgent to provide spatially explicit assessments of snow cover changes that can be incorporated into correlative or empirical species distribution models (SDMs). Here, we provide for the first time a comparison of two physically based snow distribution models (PREVAH and SnowModel) used to produce snow cover maps (SCMs) at a fine spatial resolution in a mountain landscape in Austria. SCMs were evaluated with SPOT-HRVIR images, and predictions of snow water equivalent from the two models were evaluated with ground measurements. Finally, SCMs of the two models were compared under a climate warming scenario for the end of the century. The predictive performances of PREVAH and SnowModel were similar when validated with the SPOT images. However, the tendency to overestimate snow cover was slightly lower with SnowModel during the accumulation period, whereas it was lower with PREVAH during the melting period. The rate of true positives during the melting period was on average two times higher with SnowModel, with a lower overestimation of snow water equivalent. Our results allow us to recommend the use of SnowModel in SDMs because it better captures persisting snow patches at the end of the snow season, which is important when modelling the response of species to long-lasting snow cover and evaluating whether they might survive under climate change.
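The true-positive rate used to compare the snow cover maps with the satellite reference can be computed as in the following sketch; the grids shown are hypothetical and this is not the validation code used in the study.

```python
# Illustrative sketch: true-positive rate of a binary snow cover map against
# a satellite-derived reference map on the same grid.
import numpy as np

def true_positive_rate(predicted, observed):
    """Fraction of observed snow pixels that the model also maps as snow."""
    predicted = np.asarray(predicted, dtype=bool)
    observed = np.asarray(observed, dtype=bool)
    true_pos = np.logical_and(predicted, observed).sum()
    return true_pos / observed.sum()

# Example with two small hypothetical 3x3 snow masks.
pred = [[1, 1, 0], [0, 1, 0], [0, 0, 0]]
obs  = [[1, 1, 1], [0, 1, 0], [0, 0, 0]]
print(true_positive_rate(pred, obs))   # 0.75
```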

Relevance:

30.00%

Publisher:

Abstract:

As the development of integrated circuit technology continues to follow Moore’s law, the complexity of circuits increases exponentially. Traditional hardware description languages such as VHDL and Verilog are no longer powerful enough to cope with this level of complexity and do not provide facilities for hardware/software codesign. Languages such as SystemC are intended to solve these problems by combining the expressive power of high-level programming languages with the hardware-oriented facilities of hardware description languages. To fully replace older languages in the design flow of digital systems, SystemC should also be synthesizable. The devices required by modern high-speed networks often share the tight constraints of embedded systems, e.g. on size, power consumption and price, but also have very demanding real-time and quality-of-service requirements that are difficult to satisfy with general-purpose processors. Dedicated hardware blocks of an application-specific instruction set processor are one way to combine fast processing speed, energy efficiency, flexibility and relatively low time-to-market. Common features can be identified in the network processing domain, making it possible to develop specialized but configurable processor architectures. One such architecture is TACO, which is based on the transport triggered architecture. The architecture offers a high degree of parallelism and modularity and greatly simplified instruction decoding. For this M.Sc.(Tech) thesis, a simulation environment for the TACO architecture was developed with SystemC 2.2, using an older version written in SystemC 1.0 as a starting point. The environment enables rapid design space exploration by providing facilities for hw/sw codesign and simulation and an extendable library of automatically configured reusable hardware blocks. Other topics covered are the differences between SystemC 1.0 and 2.2 from the viewpoint of hardware modeling, and the compilation of a SystemC model into synthesizable VHDL with the Celoxica Agility SystemC Compiler. A simulation model of a processor for TCP/IP packet validation was designed and tested as a test case for the environment.

Relevance:

30.00%

Publisher:

Abstract:

The fact that individuals learn can change the relationship between genotype and phenotype in the population, and thus affect the evolutionary response to selection. Here we ask how male ability to learn from female response affects the evolution of a novel male behavioral courtship trait under pre-existing female preference (sensory drive). We assume a courtship trait which has both a genetic and a learned component, and a two-level female response to males. With individual-based simulations we show that, under this scenario, learning generally increases the strength of selection on the genetic component of the courtship trait, at least when the population genetic mean is still low. As a consequence, learning not only accelerates the evolution of the courtship trait, but also enables it when the trait is costly, which in the absence of learning results in an adaptive valley. Furthermore, learning can enable the evolution of the novel trait in the face of gene flow mediated by immigration of males that show superior attractiveness to females based on another, non-heritable trait. However, rather than increasing monotonically with the speed of learning, the effect of learning on evolution is maximized at intermediate learning rates. This model shows that, at least under some scenarios, the ability to learn can drive the evolution of mating behaviors through a process equivalent to Waddington's genetic assimilation.
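A toy individual-based simulation in the spirit of the model described above is sketched below; the population size, mutation rate, learning gain, trait cost, and fitness rule are all hypothetical illustrations rather than the study's actual model.

```python
# Toy individual-based simulation: a courtship trait with a genetic component g
# and a learned component; learning boosts the expressed trait, selection acts
# on attractiveness minus a cost on the genetic component (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
N, GENERATIONS = 500, 200
MUT_SD, LEARN_GAIN, TRAIT_COST = 0.02, 0.5, 0.1

g = np.zeros(N)                                   # genetic component of the courtship trait
for _ in range(GENERATIONS):
    learned = LEARN_GAIN * rng.random(N)          # learned component from female feedback
    display = g + learned                         # expressed courtship trait
    fitness = np.clip(display - TRAIT_COST * g, 1e-9, None)   # attractiveness minus trait cost
    parents = rng.choice(N, size=N, p=fitness / fitness.sum())
    g = np.clip(g[parents] + rng.normal(0.0, MUT_SD, N), 0.0, 1.0)

print(g.mean())    # genetic mean of the trait after selection
```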

Relevance:

30.00%

Publisher:

Abstract:

A multibody simulation model of a roller test rig is presented in this work. The roller test rig consists of a paper machine’s tube roll supported by a hard-bearing-type balancing machine. The simulation model includes non-idealities that were measured from the physical structure. These non-idealities are the shell thickness variation of the roll and the roundness errors of the shafts of the roll. Such non-idealities are harmful since they can cause subharmonic resonances of the rotor system. In this case, a natural vibration mode of the rotor is excited when the rotation speed is a fraction of the natural frequency of the system. With the simulation model, the half-critical resonance is studied in detail, and a sensitivity analysis is performed by running several simulations with slightly different input parameters. The model is verified by comparing the simulation results with those obtained by measuring the real structure. The comparison shows that good accuracy is achieved, since the responses agree within the error limits of the input parameters.
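The half-critical (and higher-order subharmonic) resonance speeds mentioned above follow from simple arithmetic: a geometry error with k waves per revolution excites the natural frequency when k times the rotation speed equals that frequency. The sketch below only performs this arithmetic; the natural frequency in the example is hypothetical.

```python
# Rotation speeds at which a geometry error with k waves per revolution
# excites the rotor's natural frequency, i.e. k * n_rot = f_nat.
# The half-critical resonance corresponds to k = 2.
def subcritical_speeds(f_nat_hz, max_order=4):
    """Return {excitation order k: rotation speed in rpm} for k = 1..max_order."""
    return {k: 60.0 * f_nat_hz / k for k in range(1, max_order + 1)}

# Example: a hypothetical tube roll with a 30 Hz first bending natural frequency.
print(subcritical_speeds(30.0))   # {1: 1800.0, 2: 900.0, 3: 600.0, 4: 450.0}
```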

Relevance:

30.00%

Publisher:

Abstract:

Rolling element bearings are essential components of rotating machinery. The spherical roller bearing (SRB) is one variant seeing increasing use, because it is self-aligning and can support high loads. It is becoming increasingly important to understand how the SRB responds dynamically under a variety of conditions. This doctoral dissertation introduces a computationally efficient, three-degree-of-freedom SRB model that was developed to predict the transient dynamic behavior of a rotor-SRB system. In the model, bearing forces and deflections were calculated as a function of contact deformation and bearing geometry parameters according to nonlinear Hertzian contact theory. The results reveal how some of the more important parameters, such as diametral clearance, the number of rollers, and the osculation number, influence ultimate bearing performance. Distributed defects, such as waviness of the inner and outer rings, and localized defects, such as inner and outer ring defects, are taken into consideration in the proposed model. Simulation results were verified against results obtained from the formula for spherical roller bearing radial deflection and from commercial bearing analysis software. Following model verification, a numerical simulation was carried out successfully for a full rotor-bearing system to demonstrate the application of this newly developed SRB model in a typical real-world analysis. The accuracy of the model was verified by comparing measured and predicted behavior for equivalent systems.
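The nonlinear Hertzian load-deflection relation underlying such bearing force models can be sketched as follows; the contact stiffness value is hypothetical, and the full SRB model assembles many such contacts over the roller set, which is not shown here.

```python
# Sketch of a nonlinear Hertzian load-deflection relation of the kind used in
# bearing contact models (illustrative only).
def hertz_contact_force(deflection, k_contact, exponent=10.0 / 9.0):
    """F = K * delta**n for delta > 0, zero when the contact is open.
    n = 3/2 is the classical point-contact exponent; n = 10/9 is commonly
    used for line (roller) contacts."""
    return k_contact * deflection**exponent if deflection > 0.0 else 0.0

# Example with a hypothetical contact stiffness and a 10 micrometre deflection.
print(hertz_contact_force(10e-6, k_contact=1.0e9))
```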

Relevance:

30.00%

Publisher:

Abstract:

Permanent magnet synchronous machines (PMSMs) have become widely used because of their high efficiency compared with synchronous machines with an excitation winding or with induction motors. This feature of the PMSM is achieved by using permanent magnets (PMs) as the main excitation source. The magnetic properties of the PM have a significant influence on all the PMSM characteristics. Recent observations of PM material properties in rotating machines have revealed that in all PMSMs the magnets do not necessarily operate in the second quadrant of the demagnetization curve, which makes the magnets prone to hysteresis losses. Moreover, no good analytical approach has yet been derived for the magnetic flux density distribution along the PM during different short-circuit faults. The main task of this thesis is to derive a simple analytical tool which can predict the magnetic flux density distribution along a rotor-surface-mounted PM in two cases: during normal operating mode and at the worst moment, from the PM’s point of view, of a three-phase symmetrical short circuit. Surface-mounted PMSMs were selected because of their prevalence and relatively simple construction. The proposed model is based on the combination of two theories: magnetic circuit theory and space vector theory. For the normal operating mode, comparison of the results obtained from finite element software with those calculated with the proposed model shows good accuracy in the parts of the PM that are most prone to hysteresis losses. For the three-phase symmetrical short circuit, the comparison revealed significant inaccuracy of the proposed model relative to the finite element results. The reasons for this inaccuracy were analyzed, including the impact of the Carter factor theory and of the assumption that air has the same permeability as the PM. Propositions for further development of the model are presented.
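The magnetic-circuit part of such an analytical model can be illustrated with the textbook series circuit of a surface magnet and an air gap (iron assumed infinitely permeable, equal magnet and gap areas); this is a generic sketch, not the thesis tool.

```python
# Series magnetic circuit of a surface permanent magnet and an air gap:
# with infinitely permeable iron and equal areas, the air-gap flux density is
# B_g = B_r / (1 + mu_r * l_gap / l_magnet).
def airgap_flux_density(b_rem, mu_r_pm, l_magnet, l_gap):
    """Return the air-gap flux density in tesla for the simple series circuit."""
    return b_rem / (1.0 + mu_r_pm * l_gap / l_magnet)

# Example: an NdFeB-like magnet, B_r = 1.2 T, mu_r = 1.05, 5 mm magnet, 1 mm gap.
print(airgap_flux_density(1.2, 1.05, 5e-3, 1e-3))   # ~0.99 T
```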