57 results for clean and large throughput differential pumping system


Relevance: 100.00%

Abstract:

A comparison between an unconstrained and a partially constrained system for in vitro biomechanical testing of the L5-S1 spinal unit was conducted. The objective was to compare the compliance and the coupling of the L5-S1 unit measured with an unconstrained and a partially constrained test for the three major physiological motions of the human spine. Very few studies have compared unconstrained and partially constrained testing systems using the same cadaveric functional spinal units (FSUs). Seven human L5-S1 units were therefore tested on both a pneumatic, unconstrained system and a servohydraulic, partially constrained system. Each FSU was tested along three motions: flexion-extension (FE), lateral bending (LB) and axial rotation (AR). The kinematics obtained on the two systems were not equivalent, except in FE, where the motions were similar. The directions of coupled motions were similar for both tests, but their magnitudes were smaller in the partially constrained configuration. Using a partially constrained system to characterize LB and AR of the lumbosacral FSU significantly decreased the measured stiffness of the segment. The unconstrained system is today's "gold standard" for the characterization of FSUs. The selected partially constrained method also appears to be an appropriate way to characterize FSUs for specific applications, but care should be taken with it when coupled motions are significant.
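
As a rough illustration of the quantity being compared here, segment stiffness is commonly taken as the slope of the moment-rotation curve, with compliance as its inverse. The sketch below fits that slope for one hypothetical FSU test; the data arrays and the linear fit are illustrative assumptions, not values or methods from the study.

```python
import numpy as np

# Hypothetical moment-rotation data for one FSU in flexion-extension:
# applied moment (N·m) and measured rotation (degrees). Illustrative only.
moment = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
rotation = np.array([0.0, 1.1, 2.3, 3.2, 4.4, 5.3])

# Stiffness = slope of a linear least-squares fit (N·m/deg);
# compliance is its inverse (deg per N·m).
stiffness, _intercept = np.polyfit(rotation, moment, 1)
compliance = 1.0 / stiffness

print(f"stiffness ≈ {stiffness:.2f} N·m/deg, compliance ≈ {compliance:.2f} deg/(N·m)")
```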

Relevance: 100.00%

Abstract:

This paper examines the impact of disastrous and ‘ordinary’ floods on human societies in what is now Austria. The focus is on urban areas and their neighbourhoods. Institutional sources such as the accounts of the bridge masters, charters, statutes and official petitions show that city communities were well acquainted with this permanent risk: in fact, an office was established for the restoration of bridges and the maintenance of water defences, and large depots of timber and water pipes ensured that the reconstruction of bridges and of the water supply system could start immediately after the floods had subsided. Carpenters and similar trades gained 10 to 20 per cent of their income from the repair of bridges and other flood damage. The construction of houses in endangered zones was adapted so as to survive the worst floods yet experienced. Thus, we may describe the communities living along the central European rivers as ‘cultures of flood management’. This specialised knowledge vanished, however, from the mid-nineteenth century onwards, when river regulation gave people a false sense of security.

Relevance: 100.00%

Abstract:

Traditionally, desertification research has focused on degradation assessment, whereas prevention and mitigation strategies have not been sufficiently emphasised, even though the concept of sustainable land management (SLM) is increasingly being acknowledged. SLM strategies are interventions at the local to regional scale that aim to increase productivity, protect the natural resource base and improve livelihoods. The global WOCAT initiative and its partners have developed harmonized frameworks to compile, evaluate and analyse the impact of SLM practices around the globe. Recent studies within the EU research project DESIRE developed a methodological framework that combines a collective learning and decision-making approach with the use of best practices from the WOCAT database. In-depth assessment of 30 technologies and 8 approaches from 17 desertification sites enabled an evaluation of how SLM addresses prevalent dryland threats such as water scarcity, soil and vegetation degradation, low production, climate change, resource-use conflicts and migration. Among the impacts attributed to the documented technologies, those mentioned most were diversified and enhanced production and better management of water and soil degradation, whether through water harvesting, improving soil moisture or reducing runoff. Water harvesting offers under-exploited opportunities for the drylands and the predominantly rainfed farming systems of the developing world. Recently compiled guidelines introduce the concepts behind water harvesting and propose a harmonised classification system, followed by an assessment of the suitability, adoption and up-scaling of practices. Case studies range from large-scale floodwater spreading that makes alluvial plains cultivable, to systems that boost cereal production on small farms, to practices that collect and store water from household compounds. Once contextualized and set in appropriate institutional frameworks, these practices can form part of an overall adaptation strategy for land users. More field research is needed to reinforce expert assessments of SLM impacts and provide the necessary evidence-based rationale for investing in SLM. This includes developing methods to quantify and value ecosystem services, both on-site and off-site, and to assess the resilience of SLM practices, as currently pursued within the new EU CASCADE project.

Relevance: 100.00%

Abstract:

Changes in Greenland accumulation and the stability of the relationship between accumulation variability and large-scale circulation are assessed by performing time-slice simulations for the present day, the preindustrial era, the early Holocene, and the Last Glacial Maximum (LGM) with a comprehensive climate model. The stability issue is an important prerequisite for reconstructions of Northern Hemisphere atmospheric circulation variability based on accumulation or precipitation proxy records from Greenland ice cores. The analysis reveals that the relationship between accumulation variability and large-scale circulation undergoes a significant seasonal cycle. As the contributions of the individual seasons to the annual signal change, annual mean accumulation variability is not necessarily related to the same atmospheric circulation patterns during the different climate states. Interestingly, within a season, local Greenland accumulation variability is indeed linked to a consistent circulation pattern, which is observed for all studied climate periods, even for the LGM. Hence, it would be possible to deduce a reliable reconstruction of seasonal atmospheric variability (e.g., for North Atlantic winters) if an accumulation or precipitation proxy were available that resolves single seasons. We further show that the simulated impacts of orbital forcing and changes in the ice sheet topography on Greenland accumulation exhibit strong spatial differences, emphasizing that accumulation records from different ice core sites cannot be expected to look alike, in terms of either interannual or long-term (centennial to millennial) variability, since each includes a distinct local signature. The only uniform response to external forcing is the strong decrease in Greenland accumulation under glacial (LGM) conditions and an increase associated with the recent rise in greenhouse gas concentrations.
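
Because the key point is that the accumulation-circulation link is consistent within a season but not in the annual mean, the kind of diagnostic involved can be sketched as a season-by-season correlation. The series below are synthetic stand-ins; the model output and the circulation indices actually used in the study are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years = 100  # length of each synthetic time-slice series

# Synthetic stand-ins: one accumulation value and one circulation-index
# value (e.g. an NAO-like winter index) per season and year.
seasons = ["DJF", "MAM", "JJA", "SON"]
accumulation = {s: rng.normal(size=n_years) for s in seasons}
circ_index = {s: rng.normal(size=n_years) for s in seasons}

# Correlate season by season; per the abstract, the annual mean mixes
# seasons whose circulation patterns differ, blurring the relationship.
for s in seasons:
    r = np.corrcoef(accumulation[s], circ_index[s])[0, 1]
    print(f"{s}: r = {r:+.2f}")
```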

Relevance: 100.00%

Abstract:

Cloud computing services have emerged as an essential component of the enterprise IT infrastructure. Migration towards full-range, large-scale convergence of Cloud and network services has become the current trend for addressing the requirements of the Cloud environment. Our approach takes the infrastructure-as-a-service paradigm to build converged virtual infrastructures, which allow tailored performance to be offered and enable multi-tenancy over a common physical infrastructure. Thanks to virtualization, new exploitation activities of the physical infrastructures may arise for both transport network and Data Centre services. This approach brings the network and Data Centre resources dedicated to Cloud computing to the same flexible and scalable level. The work presented here is based on the automation of the virtual infrastructure provisioning service. On top of the virtual infrastructures, a coordinated operation and control of the different resources is performed with the objective of automatically tailoring connectivity services to the Cloud service dynamics. Furthermore, in order to support elasticity of the Cloud services through the optical network, dynamic re-planning features have been added to the virtual infrastructure service, allowing existing virtual infrastructures to be scaled up or down to optimize resource utilisation and dynamically adapt to users' demands. The dynamic re-planning of the service thus becomes a key component for coordinating Cloud and optical network resources optimally in terms of resource utilisation. The presented work is complemented with a use case of the virtual infrastructure service adopted in a distributed Enterprise Information System that scales up and down as a function of the application requests.
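
The dynamic re-planning described above amounts to an elasticity loop that grows or shrinks a virtual infrastructure with demand. The sketch below is a schematic threshold rule under assumed utilisation bounds; it is not the project's actual provisioning interface.

```python
# Schematic elasticity rule for a virtual infrastructure (VI).
# The thresholds and the single-"unit" resource model are assumptions
# for illustration, not the paper's provisioning API.

SCALE_UP = 0.80    # grow the VI when average utilisation exceeds 80%
SCALE_DOWN = 0.30  # shrink it when utilisation falls below 30%

def replan(units: int, utilisation: float) -> int:
    """Return the new number of resource units for the VI."""
    if utilisation > SCALE_UP:
        return units + 1                   # scale up: add capacity
    if utilisation < SCALE_DOWN and units > 1:
        return units - 1                   # scale down: release capacity
    return units                           # keep the current plan

# Example: a sequence of utilisation samples drives the VI size.
units = 4
for u in [0.85, 0.90, 0.60, 0.25, 0.20]:
    units = replan(units, u)
    print(f"utilisation={u:.2f} -> {units} units")
```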

Relevance: 100.00%

Abstract:

Growth codes are a subclass of Rateless codes that have found interesting applications in data dissemination problems. Compared to other Rateless and conventional channel codes, Growth codes show improved intermediate performance, which is particularly useful in applications where partial data has some utility. In this paper, we investigate the asymptotic performance of Growth codes using the Wormald method, which was proposed for studying the Peeling Decoder of LDPC and LDGM codes. In contrast to previous works, the Wormald differential equations are formulated from the nodes' perspective, which enables a numerical solution for the expected asymptotic decoding performance of Growth codes. Our framework is appropriate for any class of Rateless codes that does not include a precoding step. We further study the performance of Growth codes with moderate and large codeblocks through simulations, and we use the generalized logistic function to model the decoding probability. We then exploit the decoding-probability model in an illustrative application of Growth codes to error-resilient video transmission. The video transmission problem is cast as a joint source and channel rate allocation problem that is shown to be convex with respect to the channel rate. This illustrative application highlights the main advantage of Growth codes, namely improved performance in the intermediate loss region.
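
As a sketch of the modelling step mentioned above, the generalized logistic function below maps the fraction of received symbols to a decoding probability. The functional form follows the standard generalized logistic; the parameter values are illustrative assumptions, not the fitted values from the paper.

```python
import numpy as np

def generalized_logistic(x, A=0.0, K=1.0, B=20.0, Q=1.0, nu=1.0, M=0.8):
    """Generalized logistic curve for decoding probability vs. x,
    the ratio of received to source symbols. A and K are the lower
    and upper asymptotes, B the growth rate, M the location of maximum
    growth; all values here are illustrative assumptions."""
    return A + (K - A) / (1.0 + Q * np.exp(-B * (x - M))) ** (1.0 / nu)

# The intermediate region (x below ~1) is where Growth codes are
# claimed to outperform other Rateless codes.
for x in [0.6, 0.8, 1.0, 1.2]:
    print(f"received ratio {x:.1f}: P(decode) ≈ {generalized_logistic(x):.3f}")
```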

Relevance: 100.00%

Abstract:

A large body of empirical research shows that psychosocial risk factors (PSRFs) such as low socio-economic status, social isolation, stress, type-D personality, depression and anxiety increase the risk of incident coronary heart disease (CHD) and also contribute to poorer health-related quality of life (HRQoL) and prognosis in patients with established CHD. PSRFs may also act as barriers to lifestyle changes and treatment adherence, and may moderate the effects of cardiac rehabilitation (CR). Furthermore, there appears to be a bidirectional interaction between PSRFs and the cardiovascular system. Stress, anxiety and depression affect the cardiovascular system through immune, neuroendocrine and behavioural pathways. In turn, CHD and its associated treatments may lead to distress in patients, including anxiety and depression. In clinical practice, PSRFs can be assessed with single-item screening questions, standardised questionnaires, or structured clinical interviews. Psychotherapy and medication can be considered to alleviate any PSRF-related symptoms and to enhance HRQoL, but the evidence for a definite beneficial effect on cardiac endpoints is inconclusive. A multimodal behavioural intervention, integrating counselling on PSRFs and on coping with illness, should be included within comprehensive CR. Patients with clinically significant symptoms of distress should be referred for psychological counselling or psychologically focused interventions and/or psychopharmacological treatment. In conclusion, the success of CR may critically depend on the interdependence of body and mind, and this interaction needs to be reflected in the assessment and management of PSRFs, carried out in line with robust scientific evidence by trained staff integrated within the core CR team.

Relevance: 100.00%

Abstract:

OBJECTIVES To conduct a survey across European cardiac centres to evaluate the methods used for cerebral protection during aortic surgery involving the aortic arch. METHODS All European centres were contacted and surgeons were requested to fill out a short, comprehensive questionnaire on an internet-based platform. One-third of the more than 400 contacted centres completed the survey correctly. RESULTS The preferred site for arterial cannulation is the subclavian-axillary artery, in both acute and chronic presentations. The femoral artery is still frequently used in the acute setting, while the ascending aorta is a frequent second choice in chronic presentations. Bilateral antegrade brain perfusion is chosen by the majority of centres (two-thirds of cases), while retrograde perfusion or circulatory arrest is very seldom used, and almost exclusively in acute clinical presentations. The same pumping system as that of the cardiopulmonary bypass is used for selective cerebral perfusion most of the time, and the perfusate temperature is usually maintained between 22 and 26°C; one-third of the centres use lower temperatures. Perfusate flow and pressure are fairly consistent among centres, in the range of 10-15 ml/kg and 60 mmHg, respectively. In 60% of cases, barbiturates are added for cerebral protection, while visceral perfusion still receives little attention. Regarding cerebral monitoring, there is a general tendency to use near-infrared spectroscopy combined with bilateral radial pressure measurement. CONCLUSIONS These data represent a snapshot of the strategies used for cerebral protection during major aortic surgery in current practice, and may serve as a reference for the standardization and refinement of the different approaches.

Relevance: 100.00%

Abstract:

During time-resolved optical stimulation (TR-OSL) experiments, one uses short light pulses to separate the stimulation and emission of luminescence in time. Experimental TR-OSL results show that the luminescence lifetime in quartz of sedimentary origin is independent of annealing temperature below 500 °C, but decreases monotonically thereafter. These results have previously been interpreted empirically on the basis of the existence of two separate luminescence centers, LH and LL, in quartz, each with its own distinct luminescence lifetime. Additional experimental evidence also supports the presence of a non-luminescent hole reservoir R, which plays a critical role in the predose effect in this material. This paper extends a recently published analytical model for thermal quenching in quartz to include the two luminescence centers LH and LL, as well as the hole reservoir R. The new extended model involves localized electronic transitions between energy states within the two luminescence centers, and is described by a system of differential equations based on the Mott–Seitz mechanism of thermal quenching. It is shown that, using simplifying physical assumptions, one can obtain analytical solutions for the intensity of the light during a TR-OSL experiment carried out with previously annealed samples. These analytical expressions are found to be in good agreement with the numerical solutions of the equations. The results from the model are shown to be in quantitative agreement with published experimental data for commercially available quartz samples. Specifically, the model describes the variation of the luminescence lifetimes (a) with annealing temperatures between room temperature and 900 °C, and (b) with stimulation temperatures between 20 and 200 °C. This paper also reports new radioluminescence (RL) measurements carried out using the same commercially available quartz samples. Gaussian deconvolution of the RL emission spectra was carried out using a total of seven emission bands between 1.5 and 4.5 eV, and the behavior of these bands was examined as a function of the annealing temperature. An emission band at ∼3.44 eV (360 nm) was found to be strongly enhanced when the annealing temperature was increased to 500 °C, and this band underwent a significant reduction in intensity with further increase in temperature. Furthermore, a new emission band at ∼3.73 eV (330 nm) became apparent for annealing temperatures in the range 600–700 °C. These new experimental results are discussed within the context of the model presented in this paper.
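
For orientation, the Mott–Seitz mechanism invoked above treats the measured lifetime as the outcome of competition between a radiative transition and a thermally activated non-radiative escape; in one standard form (the notation here is assumed, not taken from the paper),

\[
\tau(T) = \frac{1}{1/\tau_r + \nu \, e^{-W/(k_B T)}},
\]

where τ_r is the radiative lifetime, ν a frequency factor and W the activation energy of the non-radiative channel. The lifetime thus shortens monotonically as the stimulation temperature rises, consistent with the thermal quenching behaviour described above.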

Relevance: 100.00%

Abstract:

The Qing emperors, who ruled over China from 1644 to 1911, managed to bring large parts of Inner Asia under their control and extended the territory of China to an unprecedented degree. This paper maintains that the political technique of patronage, with its formalized language, its emphasis on gift exchange and its expressions of courtesy, is a useful concept for explaining the integration of Inner Asian confederations into the empire. By re-interpreting the obligations of gift exchange, the Qing transformed the network of personal relationships, which had to be permanently reinforced and consolidated, into a system with clearly defined rules. In this process of formalization, the Lifanyuan, the Court for the Administration of the Outer Regions, played a key role. While in the early years of the dynasty it was responsible for collecting and disseminating information concerning the various patronage relationships with Inner Asian leaders, over the course of the 17th and 18th centuries its efforts were directed at standardizing and streamlining contacts between ethnic minorities and the state. Through the Lifanyuan, the rules and principles of patronage were maintained in a modified form even in the later part of the dynasty, when the Qing exercised control over the outer regions more directly. The paper provides an explanation for the longevity and cohesiveness of the multi-ethnic Qing empire. Based on recently published Manchu- and Mongolian-language archival material and the Maussian concept of gift exchange, the study sheds new light on the changing self-conception of the Qing emperors.

Relevance: 100.00%

Abstract:

Ab initio calculations of Afρ are presented using Mie scattering theory and a Direct Simulation Monte Carlo (DSMC) dust outflow model in support of the Rosetta mission and its target 67P/Churyumov-Gerasimenko (CG). These calculations are performed for particle sizes ranging from 0.010 μm to 1.0 cm. The present status of our knowledge of various differential particle size distributions is reviewed, and a variety of particle size distributions is used to explore their effect on Afρ and on the dust mass production rate ṁ. A new simple two-parameter particle size distribution that curtails the effect of particles below 1 μm is developed. The contributions of all particle sizes are summed to get a resulting overall Afρ. The resultant Afρ could not easily be predicted a priori and turned out to be considerably more constraining regarding the mass loss rate than expected. It is found that a proper calculation of Afρ combined with a good Afρ measurement can constrain the dust/gas ratio in the coma of comets as well as other methods presently available. Phase curves of Afρ versus scattering angle are calculated and produce good agreement with observational data. The major conclusions of our calculations are:

– The original definition of A in Afρ is problematical, and Afρ should be qsca(n, λ) × p(g) × f × ρ. Nevertheless, we keep the present nomenclature of Afρ as a measured quantity for an ensemble of coma particles.
– The ratio between Afρ and the dust mass loss rate ṁ is dominated by the particle size distribution.
– For most particle size distributions presently in use, small particles in the range from 0.10 to 1.0 μm contribute a large fraction to Afρ.
– Simplifying the calculation of Afρ by considering only large particles and approximating qsca does not represent a realistic model. Mie scattering theory or, if necessary, more complex scattering calculations must be used.
– For the commonly used particle size distributions, dn/da ∼ a^(−3.5) to a^(−4), there is a natural cut-off in the Afρ contribution for both small and large particles.
– The scattering phase function must be taken into account for each particle size; otherwise the contribution of large particles can be over-estimated by a factor of 10.
– Using an imaginary index of refraction of i = 0.10 does not produce sufficient backscattering to match observational data.
– A mixture of dark particles with i ⩾ 0.10 and brighter silicate particles with i ⩽ 0.04 matches the observed phase curves quite well.
– Using current observational constraints, we find the dust/gas mass-production ratio of CG at 1.3 AU is confined to a range of 0.03–0.5, with a reasonably likely value around 0.1.
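
A crude sketch of the summation just described: each particle size contributes in proportion to its scattering efficiency times its cross-section times its abundance. The step-like qsca below is a deliberately rough stand-in (Rayleigh-like rise below the wavelength, geometric limit above), not the Mie calculation the paper insists on, so only the qualitative cut-off behaviour is meaningful.

```python
import numpy as np

wavelength_um = 0.5                 # assumed visible-light wavelength
a = np.logspace(-2, 4, 400)         # radius in μm: 0.010 μm to 1.0 cm

def qsca_rough(a_um, lam_um=wavelength_um):
    """Crude scattering efficiency: ~x**4 (Rayleigh-like) for size
    parameter x < 1, constant 2 (geometric limit) above. This is NOT
    Mie theory; it only mimics the small-particle cut-off."""
    x = 2.0 * np.pi * a_um / lam_um
    return np.where(x < 1.0, x**4, 2.0)

dn_da = a ** -3.5                   # commonly used power-law distribution
contrib = qsca_rough(a) * np.pi * a**2 * dn_da  # per-size Afρ weighting

# The weighting falls off at both ends: small grains scatter too weakly,
# large grains are too rare, giving the "natural cut-off" noted above.
print(f"contribution peaks near a ≈ {a[np.argmax(contrib)]:.2f} μm")
```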

Relevance: 100.00%

Abstract:

We present a power-scalable approach for yellow laser-light generation based on standard Ytterbium (Yb) doped fibers. To force the cavity to lase at 1154 nm, far above the gain maximum, measures must be taken to fulfill the lasing condition and to suppress competing amplified spontaneous emission (ASE) in the high-gain region. To prove the principle, we built a fiber-laser cavity and a fiber amplifier, both at 1154 nm. Between the cavity and the amplifier we suppressed the ASE by 70 dB using a fiber Bragg grating (FBG) based filter. Finally, we demonstrated efficient single-pass frequency doubling to 577 nm with a periodically poled lithium niobate (PPLN) crystal. With our linearly polarized 1154 nm master oscillator power fiber amplifier (MOFA) system we achieved slope efficiencies of more than 15% inside the cavity and 24% with the fiber amplifier. The frequency doubling followed the optimal efficiency predicted for a PPLN crystal. So far we have generated 1.5 W at 1154 nm and 90 mW at 577 nm. Our MOFA approach to generating 1154 nm laser radiation is power-scalable through multi-stage amplifiers and large mode-area fibers, and is therefore very promising for building a high-power yellow laser-light source of several tens of watts.
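
As a consistency check on the reported operating point, single-pass frequency doubling in the low-depletion regime scales quadratically with pump power, P_shg ≈ η_norm · P_pump². The sketch below derives η_norm from the abstract's own 1.5 W / 90 mW figures; the quadratic extrapolation is an assumption that holds only while pump depletion stays small.

```python
# Low-depletion SHG scaling: P_shg ≈ eta_norm * P_pump**2.
# Operating point from the abstract: 1.5 W at 1154 nm -> 90 mW at 577 nm.
p_pump = 1.5    # W at 1154 nm
p_shg = 0.090   # W at 577 nm

eta_norm = p_shg / p_pump**2  # normalized conversion efficiency (1/W)
print(f"eta_norm ≈ {eta_norm:.3f} /W ({eta_norm * 100:.0f} %/W)")

# Quadratic extrapolation, valid only while depletion remains small:
for p in [3.0, 5.0]:
    print(f"P_pump = {p:.1f} W -> P_shg ≈ {eta_norm * p**2:.2f} W (model)")
```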