818 results for Comprehensive


Relevance: 20.00%

Abstract:

Background The growing awareness of transfusion-associated morbidity and mortality necessitates investigations into the underlying mechanisms. Small animals have been the dominant transfusion model but have associated limitations. This study aimed to develop a comprehensive large animal (ovine) model of transfusion encompassing blood collection, processing and storage, and compatibility testing through to post-transfusion outcomes. Materials and methods Two units of blood were collected from each of 12 adult male Merino sheep and processed into 24 ovine packed red blood cell (PRBC) units. Baseline haematological parameters of ovine blood and PRBCs were analysed. Biochemical changes in ovine PRBCs were characterized during the 42-day storage period. Immunological compatibility of the blood was confirmed with sera from potential recipient sheep, using a saline and albumin agglutination cross-match. Following confirmation of compatibility, each recipient sheep (n = 12) was transfused with two units of ovine PRBC. Results Procedures for collecting, processing, cross-matching and transfusing ovine blood were established. Although ovine red blood cells are smaller and more numerous, their mean cell haemoglobin concentration is similar to that of human red blood cells. Ovine PRBC showed improved storage properties in saline–adenine–glucose–mannitol (SAG-M) compared with previous human PRBC studies. Seventy-six compatibility tests were performed and 17.1% were incompatible. Only cross-match-compatible ovine PRBC were transfused and no adverse reactions were observed. Conclusion These findings demonstrate the utility of the ovine model for future blood transfusion studies and highlight the importance of compatibility testing in animal models involving homologous transfusions.

Relevance: 20.00%

Abstract:

Data-flow analysis is an integral part of any aggressive optimizing compiler. We propose a framework for improving the precision of data-flow analysis in the presence of complex control flow. We first perform data-flow analysis to determine those control-flow merges which cause the loss in data-flow analysis precision. The control-flow graph of the program is then restructured such that performing data-flow analysis on the restructured graph gives more precise results. The proposed framework is both simple, involving the familiar notion of product automata, and general, since it is applicable to any forward data-flow analysis. Apart from proving that our restructuring process is correct, we also show that restructuring is effective in that it necessarily leads to more optimization opportunities. Furthermore, the framework handles the trade-off between the increase in data-flow precision and the code-size increase inherent in the restructuring. We show that determining an optimal restructuring is NP-hard, and propose and evaluate a greedy strategy. The framework has been implemented in the Scale research compiler and instantiated for the specific problem of Constant Propagation. On the SPECINT 2000 benchmark suite we observe an average speedup of 4% in running times over the Wegman-Zadeck conditional constant propagation algorithm and 2% over a purely path-profile-guided approach.
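To make the precision issue concrete, here is a minimal sketch (a toy single-variable constant propagation, not the Scale implementation): at a control-flow merge the facts x = 1 and x = 2 combine to "not a constant", while duplicating the merge node per incoming path, which is the effect of the product-automaton restructuring, preserves both constants.

NAC = "not-a-constant"   # result of merging two different constants

def meet(a, b):
    # None means "no information yet" (edge not yet reached)
    if a is None:
        return b
    if b is None:
        return a
    return a if a == b else NAC

def analyse(cfg, assigns):
    """Tiny forward constant propagation for one variable 'x'.
    cfg: node -> list of successors; assigns: node -> constant assigned to x."""
    preds = {n: [] for n in cfg}
    for n, succs in cfg.items():
        for s in succs:
            preds[s].append(n)
    out = {n: None for n in cfg}
    changed = True
    while changed:
        changed = False
        for n in cfg:
            incoming = None
            for p in preds[n]:
                incoming = meet(incoming, out[p])
            new = assigns.get(n, incoming)
            if new != out[n]:
                out[n] = new
                changed = True
    return out

# Original CFG: both branches assign x, then meet at 'use'.
cfg = {"entry": ["a", "b"], "a": ["use"], "b": ["use"], "use": []}
assigns = {"a": 1, "b": 2}
print(analyse(cfg, assigns)["use"])        # not-a-constant: the merge loses both facts

# Restructured CFG: 'use' is duplicated per incoming path (product with a
# two-state "which branch was taken" automaton); each copy keeps its constant.
cfg2 = {"entry": ["a", "b"], "a": ["use_a"], "b": ["use_b"], "use_a": [], "use_b": []}
res = analyse(cfg2, assigns)
print(res["use_a"], res["use_b"])          # 1 2

In the actual framework the duplication is applied selectively, since indiscriminate restructuring inflates code size; hence the NP-hardness result and the greedy strategy mentioned above.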

Relevance: 20.00%

Abstract:

It is crucial to advance understanding of the concept of successful aging at work to guide rigorous future research and effective practice. Drawing on the gerontology and life-span developmental literatures, I recently proposed a definition and theoretical framework of successful aging at work that revolve around employees increasingly deviating from average developmental trajectories across the working life span. Based on sustainability, person–job fit, and proactivity theories, Kooij suggested an alternative perspective that emphasizes the active role of employees for successful aging at work. In this article, I compare the 2 approaches and attempt a partial integration. I highlight the importance of a precise definition, comprehensive model, and critical discussion of successful aging at work. Furthermore, I suggest that person–environment fit variables other than person–job fit (e.g., person–organization fit) and adapting to person–environment misfit may also contribute to successful aging at work. Finally, I argue that proactive behaviors must have age-differential effects on work outcomes to be considered personal resources for successful aging at work.

Relevance: 20.00%

Abstract:

Comprehensive two-dimensional gas chromatography (GC×GC) offers enhanced separation efficiency, reliability in qualitative and quantitative analysis, the capability to detect low quantities, and information on the whole sample and its components. These features are essential in the analysis of complex samples, in which the number of compounds may be large or the analytes of interest are present at trace level. This study involved the development of instrumentation, data analysis programs and methodologies for GC×GC and their application in studies on qualitative and quantitative aspects of GC×GC analysis. Environmental samples were used as model samples.

Instrumental development comprised the construction of three versions of a semi-rotating cryogenic modulator in which modulation was based on two-step cryogenic trapping with continuously flowing carbon dioxide as coolant. Two-step trapping was achieved by rotating the nozzle spraying the carbon dioxide with a motor. The fastest rotation and highest modulation frequency were achieved with a permanent magnet motor, and modulation was most accurate when the motor was controlled with a microcontroller containing a quartz crystal. Heated wire resistors were unnecessary for the desorption step when liquid carbon dioxide was used as coolant. With the modulators developed in this study, the narrowest peaks were 75 ms at base.

Three data analysis programs were developed, allowing basic, comparison and identification operations. The basic operations enabled the visualisation of two-dimensional plots and the determination of retention times, peak heights and volumes. The overlaying feature in the comparison program allowed easy comparison of 2D plots. An automated identification procedure based on mass spectra and retention parameters allowed the qualitative analysis of data obtained by GC×GC and time-of-flight mass spectrometry.

In the methodological development, sample preparation (extraction and clean-up) and GC×GC methods were developed for the analysis of atmospheric aerosol and sediment samples. Dynamic sonication-assisted extraction was well suited for atmospheric aerosols collected on a filter. A clean-up procedure utilising normal-phase liquid chromatography with ultraviolet detection worked well in the removal of aliphatic hydrocarbons from a sediment extract. GC×GC with flame ionisation detection or quadrupole mass spectrometry provided good reliability in the qualitative analysis of target analytes. However, GC×GC with time-of-flight mass spectrometry was needed in the analysis of unknowns. The automated identification procedure that was developed was efficient in the analysis of large data files, but manual searching and analyst knowledge remain invaluable as well.

Quantitative analysis was examined in terms of calibration procedures and the effect of matrix compounds on GC×GC separation. In addition to calibration in GC×GC with summed peak areas or peak volumes, simplified area calibration based on the normal GC signal can be used to quantify compounds in samples analysed by GC×GC, so long as certain qualitative and quantitative prerequisites are met. In a study of the effect of matrix compounds on GC×GC separation, it was shown that the quality of the PAH separation is not significantly disturbed by the amount of matrix, and that quantitativeness suffers only slightly in the presence of matrix when the amount of target compounds is low.

The benefits of GC×GC in the analysis of complex samples easily overcome some minor drawbacks of the technique. The developed instrumentation and methodologies performed well for environmental samples, but they could also be applied to other complex samples.
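As a rough illustration of the summed-peak calibration idea, the sketch below (hypothetical numbers, not the thesis software) sums the modulated sub-peaks belonging to one analyte and applies a linear calibration curve fitted to standards.

# Hedged illustration: quantifying one GCxGC analyte by summing the areas of its
# modulated sub-peaks and applying a linear calibration line. All values are invented.
import numpy as np

def summed_area(sub_peak_areas):
    """One GCxGC analyte is sliced into several modulated peaks; sum their areas."""
    return float(np.sum(sub_peak_areas))

# Calibration standards: concentration (ng/uL) vs summed area (arbitrary units).
conc_std = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
area_std = np.array([1.1e4, 2.0e4, 4.2e4, 10.5e4, 20.8e4])
slope, intercept = np.polyfit(conc_std, area_std, 1)   # least-squares line

# Unknown sample: areas of the modulated slices assigned to the same analyte.
sample_area = summed_area([3.1e4, 2.4e4, 0.6e4])
sample_conc = (sample_area - intercept) / slope
print(f"estimated concentration: {sample_conc:.2f} ng/uL")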

Relevance: 20.00%

Abstract:

Pin joints in structures are designed for interference, push or clearance fits. That is, the diameter of the pin is made greater than, equal to or less than the hole diameter. Consider an interference fit pin in a plate subjected to a continuously increasing in-plane load.

Relevance: 20.00%

Abstract:

Abstract is not available.

Relevance: 20.00%

Abstract:

The theory of erosive burning has been constructed from first principles using turbulent boundary layer concepts. It is shown that the problem reduces to solving the flame propagation equation for turbulent flow. The final approximate solution for the case of single-step overall kinetics reveals the combined effects of fluid mechanics and chemical kinetics. The results obtained from this theory are compared with earlier experimental results. The dependence of erosive burning characteristics on various parameters has been elucidated.

Relevance: 20.00%

Abstract:

This paper is concerned with the dynamic analysis of flexible, non-linear multi-body beam systems. The focus is on problems where the strains within each elastic body (beam) remain small. Based on geometrically non-linear elasticity theory, the non-linear 3-D beam problem splits into either a linear or non-linear 2-D analysis of the beam cross-section and a non-linear 1-D analysis along the beam reference line. The splitting of the three-dimensional beam problem into two- and one-dimensional parts, called dimensional reduction, results in a tremendous saving of computational effort relative to the cost of three-dimensional finite element analysis, the only alternative for realistic beams. The analysis of beam-like structures made of laminated composite materials requires a much more complicated methodology. Hence, the analysis procedure based on the Variational Asymptotic Method (VAM), a tool to carry out the dimensional reduction, is used here. The analysis methodology can be viewed as a 3-step procedure. First, the sectional properties of beams made of composite materials are determined either based on an asymptotic procedure that involves a 2-D finite element non-linear analysis of the beam cross-section to capture the trapeze effect, or using strip-like beam analysis starting from Classical Laminated Shell Theory (CLST). Second, the dynamic response of non-linear, flexible multi-body beam systems is simulated within the framework of energy-preserving and energy-decaying time integration schemes that provide unconditional stability for non-linear beam systems. Finally, local 3-D responses in the beams are recovered, based on the 1-D responses predicted in the second step. Numerical examples are presented and results from this analysis are compared with those available in the literature.
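As a minimal illustration of what an energy-preserving, unconditionally stable time integrator does, the sketch below applies the implicit midpoint rule to a single linear oscillator. This is only a model problem, not the multi-body beam formulation of the paper, but for this quadratic energy the midpoint rule conserves the total energy exactly (to roundoff) even at step sizes where explicit schemes diverge.

# Hedged sketch: implicit midpoint rule on a linear mass-spring system.
import numpy as np

m, k = 1.0, 50.0          # mass and stiffness (made-up values)
dt, steps = 0.5, 200      # deliberately large step: omega*dt > 2, explicit schemes blow up
x, v = 1.0, 0.0

def energy(x, v):
    return 0.5 * m * v**2 + 0.5 * k * x**2

E0 = energy(x, v)
for _ in range(steps):
    # Solve the 2x2 linear system of the implicit midpoint update:
    #   x1 = x + dt*(v + v1)/2
    #   v1 = v - dt*(k/m)*(x + x1)/2
    A = np.array([[1.0, -dt / 2.0],
                  [dt * k / (2.0 * m), 1.0]])
    b = np.array([x + dt * v / 2.0,
                  v - dt * k * x / (2.0 * m)])
    x, v = np.linalg.solve(A, b)

print(f"relative energy drift after {steps} steps: {abs(energy(x, v) - E0) / E0:.2e}")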

Relevance: 20.00%

Abstract:

An updated catalog of earthquakes has been prepared for the Andaman-Nicobar and adjoining regions. The catalog was homogenized to a unified magnitude scale, and declustering was performed to remove aftershocks and foreshocks. Eleven regional source zones were identified in the study area to account for local variability in seismicity characteristics. The seismicity parameters were estimated for each of these source zones, and the seismic hazard evaluation of the Andaman-Nicobar region has been performed using different source models and attenuation relations. Probabilistic seismic hazard analysis has been performed with the currently available data and their best possible scientific interpretation, using an appropriate instrument such as a logic tree to explicitly account for epistemic uncertainty by considering alternative models (source models, maximum magnitude, and attenuation relationships). Hazard maps for different periods have been produced for horizontal ground motion at bedrock level.
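For the seismicity parameters mentioned above, a common route is the Gutenberg-Richter relation log10 N(M) = a - b M fitted per source zone. The sketch below (invented magnitudes and an assumed completeness magnitude) uses Aki's maximum-likelihood b-value with Utsu's half-bin correction; this is one standard estimator, not necessarily the one used in the study.

# Hedged sketch: Gutenberg-Richter (a, b) for one source zone from a declustered catalogue.
import numpy as np

mags = np.array([4.1, 4.3, 4.0, 4.6, 4.2, 5.1, 4.4, 4.0, 4.8, 4.2, 5.6, 4.1])  # invented
mc = 4.0          # assumed magnitude of completeness
years = 10.0      # assumed catalogue duration for this zone
dm = 0.1          # magnitude binning width

m = mags[mags >= mc]
# Aki (1965) maximum-likelihood b-value, with Utsu's half-bin correction:
b = np.log10(np.e) / (m.mean() - (mc - dm / 2.0))
# Annual rate of events with M >= mc, then the a-value of log10 N(M) = a - b*M:
rate_mc = len(m) / years
a = np.log10(rate_mc) + b * mc
print(f"b = {b:.2f}, a = {a:.2f}, N(M>=5.0) per year = {10**(a - b*5.0):.2f}")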

Relevance: 20.00%

Abstract:

A comprehensive study of D-Na···A (D = H/F) complexes has been done using advanced ab initio and atoms-in-molecules (AIM) theoretical analyses. The correlation between electron density at the bond critical point and binding energy gives a distinguishing feature for hydrogen bonding, different from the 'electrostatic complexes' formed by LiD and NaD. Moreover, the LiD/NaD dimers have both linear and anti-parallel minima, as expected for electrostatic dipole-dipole interactions. The HF dimer has a quasi-linear minimum, and the anti-parallel structure is a saddle point. Clearly, characterizing hydrogen bonding as 'nothing but electrostatic interaction between two dipoles' is grossly in error.
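The dipole-dipole argument can be made explicit with classical point dipoles: for two equal dipoles, both the head-to-tail (linear) and the side-by-side anti-parallel arrangements are attractive, which is why a purely electrostatic model predicts both minima. The sketch below evaluates the standard point-dipole interaction energy for the two orientations with illustrative values, not the paper's ab initio results.

# Hedged sketch: classical point-dipole interaction energy for two orientations.
import numpy as np

def dipole_dipole_energy(mu1, mu2, r_vec):
    """E proportional to [mu1.mu2 - 3 (mu1.r_hat)(mu2.r_hat)] / r^3 (prefactor dropped,
    so the numbers are in arbitrary units)."""
    r = np.linalg.norm(r_vec)
    r_hat = r_vec / r
    return (mu1 @ mu2 - 3.0 * (mu1 @ r_hat) * (mu2 @ r_hat)) / r**3

mu = 1.0   # magnitude of each dipole (arbitrary units)
r = 3.0    # separation (arbitrary units)

# Linear, head-to-tail: both dipoles along the separation axis.
linear = dipole_dipole_energy(np.array([mu, 0, 0]), np.array([mu, 0, 0]), np.array([r, 0, 0]))
# Anti-parallel, side-by-side: dipoles perpendicular to the separation axis.
antiparallel = dipole_dipole_energy(np.array([0, mu, 0]), np.array([0, -mu, 0]), np.array([r, 0, 0]))
print(f"linear: {linear:.3f}, anti-parallel: {antiparallel:.3f}")  # both negative (attractive)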

Relevance: 20.00%

Abstract:

Northeast India is one of the most seismically active regions in the world, with more than seven earthquakes of magnitude 5.0 and above per year on average. Reliable seismic hazard assessment could provide the necessary design inputs for earthquake-resistant design of structures in this region. In this study, deterministic as well as probabilistic methods have been attempted for seismic hazard assessment of the Tripura and Mizoram states at bedrock level. An updated earthquake catalogue was compiled from various national and international seismological agencies for the period from 1731 to 2011. Homogenization, declustering and data completeness analysis of events were carried out before hazard evaluation. Seismicity parameters were estimated using the Gutenberg-Richter (G-R) relationship for each source zone. Based on the seismicity, tectonic features and fault rupture mechanisms, the region was divided into six major subzones. Region-specific correlations were used for magnitude conversion to homogenize earthquake size. Ground motion equations (Atkinson and Boore 2003; Gupta 2010) were validated against observed PGA (peak ground acceleration) values before use in the hazard evaluation. In this study, the hazard is estimated using linear sources identified in and around the study area. Results are presented in the form of PGA using both DSHA (deterministic seismic hazard analysis) and PSHA (probabilistic seismic hazard analysis) with 2% and 10% probability of exceedance in 50 years, and spectral acceleration (T = 0.2 s, 1.0 s) for both states (2% probability of exceedance in 50 years). The results provide inputs for planning risk-reduction strategies, developing risk-acceptance criteria and assessing possible damage in the study area, based on a comprehensive analysis and higher-resolution hazard mapping.
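The hazard levels quoted above follow from the usual Poisson assumption, under which a probability of exceedance P in T years corresponds to an annual rate lambda = -ln(1 - P)/T (roughly a 475-year return period for 10% in 50 years and 2475 years for 2% in 50 years). The sketch below reads the corresponding PGA off a purely hypothetical hazard curve; it is not the study's hazard model.

# Hedged sketch: convert "P% in T years" to an annual rate, then interpolate a
# hypothetical hazard curve (annual rate of exceeding each PGA level, in g).
import numpy as np

def annual_rate(prob, t_years):
    """P = 1 - exp(-lambda * T)  =>  lambda = -ln(1 - P) / T."""
    return -np.log(1.0 - prob) / t_years

pga_levels = np.array([0.05, 0.1, 0.2, 0.3, 0.5, 0.8])          # invented curve
exceed_rate = np.array([0.05, 0.02, 6e-3, 2.5e-3, 8e-4, 2e-4])

for prob, label in [(0.10, "10% in 50 yr"), (0.02, "2% in 50 yr")]:
    lam = annual_rate(prob, 50.0)
    # Interpolate in log-rate space (arrays reversed so the x-grid is increasing).
    pga = np.interp(np.log(lam), np.log(exceed_rate[::-1]), pga_levels[::-1])
    print(f"{label}: return period ~{1/lam:.0f} yr, design PGA ~{pga:.2f} g")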

Relevance: 20.00%

Abstract:

This study presents a comprehensive evaluation of five widely used multisatellite precipitation estimates (MPEs) against a 1° × 1° gridded rain gauge data set as ground truth over India. One decade of observations is used to assess the performance of the various MPEs (the Climate Prediction Center (CPC) South Asia data set, the CPC Morphing Technique (CMORPH), Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks, the Tropical Rainfall Measuring Mission's Multisatellite Precipitation Analysis (TMPA-3B42), and the Global Precipitation Climatology Project). All MPEs have high detection skill for rain, with a large probability of detection (POD) and small "missing" values. However, the detection sensitivity differs from one product (and also one region) to another. While CMORPH has the lowest sensitivity for detecting rain, CPC shows the highest sensitivity and often overdetects rain, as evidenced by its large POD and false alarm ratio and small missing values. All MPEs show higher rain sensitivity over eastern India than over western India. These differential sensitivities are found to alter the biases in rain amount differently. All MPEs show similar spatial patterns of seasonal rain bias and root-mean-square error, but their spatial variability across India is complex and pronounced. The MPEs overestimate rainfall over the dry regions (northwest and southeast India) and severely underestimate it over mountainous regions (the west coast and northeast India), whereas the bias is relatively small over the core monsoon zone. Higher occurrence of virga rain due to subcloud evaporation and the possible missing of small-scale convective events by gauges over the dry regions are the main reasons for the observed overestimation of rain by MPEs. The decomposed components of the total bias show that the major part of the overestimation is due to false precipitation. The severe underestimation of rain along the west coast is attributed to the predominant occurrence of shallow rain and the underestimation of moderate to heavy rain by MPEs. The decomposed components suggest that missed precipitation and hit bias are the leading error sources for the total bias along the west coast. All evaluation metrics are found to be nearly equal in the two contrasting monsoon seasons (southwest and northeast), indicating that the performance of the MPEs does not change with the season, at least over southeast India. Among the various MPEs, the performance of TMPA is found to be better than the others, as it reproduces most of the spatial variability exhibited by the reference.
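The detection and bias metrics used in this evaluation are standard categorical and volumetric scores. The sketch below computes POD, the false alarm ratio and a hit/missed/false decomposition of the total bias for one grid cell from made-up daily series; the rain/no-rain threshold and all values are assumptions, not the study's data.

# Hedged sketch: categorical scores and bias decomposition for one grid cell.
import numpy as np

gauge = np.array([0.0, 5.2, 0.0, 12.0, 0.4, 0.0, 30.0, 2.0, 0.0, 0.0])   # mm/day, invented
sat   = np.array([0.3, 4.0, 0.0, 18.0, 0.0, 1.5, 22.0, 2.5, 0.0, 0.2])   # mm/day, invented
thr = 0.1   # assumed rain/no-rain threshold (mm/day)

g_rain, s_rain = gauge >= thr, sat >= thr
hits   = np.sum(g_rain & s_rain)
misses = np.sum(g_rain & ~s_rain)
falses = np.sum(~g_rain & s_rain)

pod = hits / (hits + misses)          # probability of detection
far = falses / (hits + falses)        # false alarm ratio

# Decompose total bias (satellite minus gauge totals) into hit, missed and false parts.
hit_bias   = np.sum(sat[g_rain & s_rain] - gauge[g_rain & s_rain])
missed_p   = -np.sum(gauge[g_rain & ~s_rain])   # rain seen by the gauge only
false_p    = np.sum(sat[~g_rain & s_rain])      # rain reported by the satellite only
total_bias = np.sum(sat) - np.sum(gauge)
# (Here the three parts sum exactly to the total because all sub-threshold values in
#  the made-up data are zero; with real data a small residual remains.)

print(f"POD={pod:.2f}, FAR={far:.2f}")
print(f"total bias={total_bias:.1f} = hit {hit_bias:.1f} + missed {missed_p:.1f} + false {false_p:.1f}")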