123 results for vertically vibrated beds
Abstract:
Battery-powered bed movers are increasingly being used within the hospital setting. These energy-augmenting devices facilitate the safe movement of beds and patients by healthcare workers. The use of powered bed movers is believed to reduce the physical effort required of staff members, which may be associated with a decreased risk of occupation-related injuries. A preliminary study was performed in a hospital environment to assess the muscular efforts associated with moving hospital beds both manually and with the aid of a bed mover. The results enable the effects of using bed movers to be quantified.
Abstract:
This study investigated potential palaeoclimate proxies provided by rare earth element (REE) geochemistry in speleothems and by the clay mineralogy of cave sediments. Speleothem and sediment samples were collected from a series of cave fill deposits that occur with rich vertebrate fossil assemblages in and around Mount Etna National Park, Rockhampton (central coastal Queensland). The fossil deposits range from Plio-Pleistocene to Holocene in age (based on uranium/thorium dating) and appear to represent depositional environments ranging from enclosed rainforest to semi-arid grasslands. The Mount Etna cave deposits therefore offer an ideal opportunity to test new palaeoclimate tools, as they include deposits that span a known significant climate shift established from independent faunal data. The first section of this study investigates the REE distribution of the host limestone to provide baseline geochemistry for subsequent speleothem investigations. The Devonian Mount Etna Beds were found to be more complex than the previous literature had documented. The studied limestone massif is overturned, highly recrystallised in parts, and consists of numerous allochthonous blocks with different spatial orientations. Despite the complex geologic history of the Mount Etna Beds, Devonian seawater-like REE patterns were recovered in some parts of the limestone, and baseline geochemistry was determined for the bulk limestone for comparison with speleothem REE patterns. The second part of the study focused on REE distribution in the karst system and the palaeoclimatic implications of such records. It was found that REEs have a high affinity for calcite surfaces and that REE distributions in speleothems vary much more between growth bands than along growth bands, thus providing a temporal record that may relate to environmental changes. The morphology of different speleothems (i.e., stalactites, stalagmites, and flowstones) has little bearing on REE distributions provided they are not contaminated with particulate fines. Thus, the baseline knowledge developed in the study suggested that speleothems are broadly comparable for assessing palaeoclimatically controlled variations in REE distributions. Speleothems from rainforest and semi-arid phases were compared, and definable differences in REE distribution attributable to climate were found. In particular, during semi-arid phases, total REE concentrations decreased, LREEs became more depleted, Y/Ho increased, La anomalies were more positive, and Ce anomalies were more negative. This may reflect greater soil development during rainforest phases, producing more organic particles and colloids, which are known to transport REEs, in karst waters. However, on a finer temporal scale (i.e., between growth bands) within speleothems from the same climate regime, no difference was seen. It is suggested that this may be due to inadequate time for soil development to change over the time frames represented by differences in growth band density. The third part of the study was a reconnaissance investigation focused on the mineralogy of clay cave sediments, illite/kaolinite ratios in particular, and the potential palaeoclimatic implications of such records. Although the sample distribution was not optimal, the preliminary results suggest that the illite/kaolinite ratio increased during cold and dry intervals, consistent with decreased chemical weathering during those times.
The study provides a basic framework for future studies at differing latitudes to further constrain the parameters of the proxy. The identification of such a proxy recorded in cave sediment has broad implications, as clay ratios could potentially provide a basic local climate proxy in the absence of fossil faunas and speleothem material. This study suggests that REE distributions in speleothems may provide information about water throughput and soil formation, thus providing a potential palaeoclimate proxy. It highlights the importance of understanding the host limestone geochemistry and broadens the distribution and potential number of cave field sites, as palaeoclimate information no longer relies solely on the presence of fossil faunas and/or speleothems. However, additional research is required to better understand the temporal scales required for the proxies to be recognised.
Abstract:
With the world's largest population, 1.3 billion, China is a rapidly developing country. In line with this development, China's enormous health system is experiencing an unprecedented series of reforms. According to a recent official government report, China has 300,000 health organizations, which include 60,000 hospitals and a total of 3.07 million beds (China NBoSoP 2006). To provide health services for the national population, as well as the substantial number of visitors, China has 1.93 million doctors and 1.34 million registered nurses (China NBoSoP 2006). From 1984 to 2004, the number of inpatients grew from about 25 to 50 million, with outpatient figures increasing from 1.1 to 1.3 billion (China MoH 2006). The scale of the health system is likely larger than that of any other country in the world, but the quality of medical services remains at the level of a developing country. In 2005, approximately 3.8% of inpatients (about 1.5 million) (China NBoSoP 2006) were admitted because of injury and poisoning, which created a significant load on the acute health system. These increased figures are at least partly a result of the development of the health system and technological health-care advances but, even with such advances, this rapid change in emergency health-care demand has created a very significant burden on existing systems...
Time dependency of molecular rate estimates and systematic overestimation of recent divergence times
Abstract:
Studies of molecular evolutionary rates have yielded a wide range of rate estimates for various genes and taxa. Recent studies based on population-level and pedigree data have produced remarkably high estimates of mutation rate, which strongly contrast with substitution rates inferred in phylogenetic (species-level) studies. Using Bayesian analysis with a relaxed-clock model, we estimated rates for three groups of mitochondrial data: avian protein-coding genes, primate protein-coding genes, and primate d-loop sequences. In all three cases, we found a measurable transition between the high, short-term (<1–2 Myr) mutation rate and the low, long-term substitution rate. The relationship between the age of the calibration and the rate of change can be described by a vertically translated exponential decay curve, which may be used for correcting molecular date estimates. The phylogenetic substitution rates in mitochondria are approximately 0.5% per million years for avian protein-coding sequences and 1.5% per million years for primate protein-coding and d-loop sequences. Further analyses showed that purifying selection offers the most convincing explanation for the observed relationship between the estimated rate and the depth of the calibration. We rule out the possibility that it is a spurious result arising from sequence errors, and find it unlikely that the apparent decline in rates over time is caused by mutational saturation. Using a rate curve estimated from the d-loop data, several dates for last common ancestors were calculated: modern humans and Neandertals (354 ka; 222–705 ka), Neandertals (108 ka; 70–156 ka), and modern humans (76 ka; 47–110 ka). If the rate curve for a particular taxonomic group can be accurately estimated, it can be a useful tool for correcting divergence date estimates by taking the rate decay into account. Our results show that it is invalid to extrapolate molecular rates of change across different evolutionary timescales, which has important consequences for studies of populations, domestication, conservation genetics, and human evolution.
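The rate curve described here lends itself to a small numerical sketch. The functional form below, a vertically translated exponential decay, follows the abstract's description, but every parameter value is an illustrative assumption (the abstract reports only the ~1.5%/Myr long-term primate rate), not the authors' fitted curve.

```python
import numpy as np

def rate_curve(t_myr, k_long, k_short, decay):
    """Vertically translated exponential decay: the apparent rate starts near
    the high short-term (pedigree/population) rate and decays toward the low
    long-term phylogenetic substitution rate as calibration depth increases."""
    return k_long + (k_short - k_long) * np.exp(-decay * t_myr)

def corrected_age(raw_age_myr, k_long, k_short, decay, iters=50):
    """Correct an age that was estimated with the long-term rate by solving
    d = t * rate_curve(t) for t with fixed-point iteration (sketch only)."""
    d = raw_age_myr * k_long  # implied genetic distance
    t = raw_age_myr
    for _ in range(iters):
        t = d / rate_curve(t, k_long, k_short, decay)
    return t

# Hypothetical parameters: 1.5%/Myr long-term rate (from the abstract), an
# assumed tenfold higher instantaneous rate, and an assumed 1/Myr decay.
# A raw 500 ka estimate shrinks to roughly 50 ka under these assumptions,
# illustrating the systematic overestimation of recent divergence times.
print(corrected_age(0.5, k_long=0.015, k_short=0.15, decay=1.0))
```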
Abstract:
Efficient management of domestic wastewater is a primary requirement for human well-being. Failure to adequately address issues of wastewater collection, treatment and disposal can lead to adverse public health and environmental impacts. The increasing spread of urbanisation has led to the conversion of previously rural land into urban developments and the more intensive development of semi-urban areas. However, the provision of reticulated sewerage facilities has not kept pace with this expansion in urbanisation. This has resulted in a growing dependency on onsite sewage treatment. Though considered only a temporary measure in the past, these systems are now considered the most cost-effective option and have become a permanent feature in some urban areas. This report is the first of a series to be produced and is the outcome of a research project initiated by the Brisbane City Council. The primary objective of the research was to relate the treatment performance of onsite sewage treatment systems to soil conditions at the site, with the emphasis on septic tanks. This report consists of a 'state of the art' review of research undertaken in the arena of onsite sewage treatment. The evaluation brings together significant work undertaken locally and overseas. It focuses mainly on septic tanks, in keeping with the primary objectives of the project, and has acted as the springboard for the later field investigations and analysis undertaken as part of the project. Septic tanks continue to be used widely due to their simplicity and low cost. The treatment performance of septic tanks can be highly variable due to numerous factors, but a properly designed, operated and maintained septic tank can produce effluent of satisfactory quality. The reduction of hydraulic surges from washing machines and dishwashers, regular removal of accumulated septage and the elimination of harmful chemicals are some of the practices that can improve system performance considerably. The relative advantage of multi-chamber over single-chamber septic tanks is an issue that needs to be resolved in view of conflicting research outcomes. In recent years, aerobic wastewater treatment systems (AWTS) have been gaining in popularity. This can be mainly attributed to the desire to avoid subsurface effluent disposal, which is the main cause of septic tank failure. The use of aerobic processes for the treatment of wastewater, and the disinfection of effluent prior to disposal, is capable of producing effluent of a quality suitable for surface disposal. However, the field performance of these systems has been disappointing. A significant number do not perform to stipulated standards, and effluent quality can be highly variable. This is primarily due to householder neglect or ignorance of correct operational and maintenance procedures. Other problems include greater susceptibility to shock loadings and sludge bulking. As identified in the literature, a number of design features can also contribute to this wide variation in quality. The other treatment processes in common use are the various types of filter systems, including intermittent and recirculating sand filters. These systems too have their inherent advantages and disadvantages. Furthermore, as in the case of aerobic systems, their performance is very much dependent on individual householder operation and maintenance practices.
In recent years, the use of biofilters, and particularly of peat, has attracted research interest. High removal rates of various wastewater pollutants have been reported in the research literature. Despite these satisfactory results, leachate from peat has been reported in various studies. This is an issue that needs further investigation, and as such biofilters can still be considered to be in the experimental stage. The use of other filter media such as absorbent plastic and bark has also been reported in the literature. The safe and hygienic disposal of treated effluent is a matter of concern in the case of onsite sewage treatment. Subsurface disposal is the most common option, and the only option in the case of septic tank treatment. Soil is an excellent treatment medium if suitable conditions are present. The processes of sorption, filtration and oxidation can remove the various wastewater pollutants. The subsurface characteristics of the disposal area are among the most important parameters governing process performance. It is therefore important that soil and topographic conditions are taken into consideration in the design of the soil absorption system. Seepage trenches and beds are the common systems in use. Seepage pits or chambers can be used where subsurface conditions warrant, whilst above-grade mounds have been recommended for a variety of difficult site conditions. All these systems have their inherent advantages and disadvantages, and the preferable soil absorption system should be selected based on site characteristics. The use of gravel as in-fill for beds and trenches is open to question. It does not contribute to effluent treatment and has been shown to reduce the effective infiltrative surface area, due to physical obstruction and the migration of fines entrained in the gravel into the soil matrix. The surface application of effluent is coming into increasing use with the advent of aerobic treatment systems. This has the advantage that treatment is undertaken in the upper soil horizons, which are chemically and biologically the most effective in effluent renovation. Numerous research studies have demonstrated the feasibility of this practice. However, the overriding criterion is the quality of the effluent. It has to be of exceptionally good quality to ensure that there are no resulting public health impacts due to aerosol drift. This is the main issue of concern, given the unreliability of effluent quality from aerobic systems. Secondly, it has also been found that most householders do not take adequate care in the operation of spray irrigation systems or in the maintenance of the irrigation area. Under these circumstances, surface disposal of effluent should be approached with caution and would require appropriate householder education and stringent compliance requirements. However, despite all this, the efficiency with which the process is undertaken will ultimately rest with the individual householder, and this is where most concern rests. Greywater requires similar consideration. Surface irrigation of greywater is currently permitted in a number of local authority jurisdictions in Queensland. Considering that greywater constitutes the largest fraction of the total wastewater generated in a household, it could be considered a potential resource. Unfortunately, in most circumstances the only pretreatment required prior to reuse is the removal of oil and grease.
This is an issue of concern, as greywater can be considered a weak to medium-strength sewage: it contains primary pollutants such as BOD material and nutrients, and may also include microbial contamination. Its use for surface irrigation can therefore pose a potential health risk. This is further compounded by the fact that most householders are unaware of the potential adverse impacts of indiscriminate greywater reuse. As in the case of blackwater effluent reuse, there have been suggestions that greywater should also be subject to stringent guidelines. Under these circumstances, the surface application of any wastewater requires careful consideration. The other option available for the disposal of effluent is the use of evaporation systems. The use of evapotranspiration systems is covered in this report. Research has shown that these systems are susceptible to a number of factors, in particular climatic conditions, and as such their applicability is location-specific. The design of systems based solely on evapotranspiration is also questionable. To ensure greater reliability, such systems should be designed to include soil absorption. The successful use of these systems for intermittent usage has been noted in the literature. Taking into consideration the issues discussed above, subsurface disposal of effluent is the safest option under most conditions, provided the facility has been designed to accommodate site conditions. The main problem associated with subsurface disposal is the formation of a clogging mat on the infiltrative surfaces. Due to the formation of the clogging mat, the capacity of the soil to handle effluent is no longer governed by the soil's hydraulic conductivity as measured by the percolation test, but rather by the infiltration rate through the clogged zone. The characteristics of the clogging mat have been shown to be influenced by various soil and effluent characteristics, and the mechanisms of clogging mat formation by various physical, chemical and biological processes. Biological clogging is the most common process and occurs when bacterial growth or its by-products reduce the soil pore diameters; it is generally associated with anaerobic conditions. The formation of the clogging mat provides significant benefits. It acts as an efficient filter for the removal of microorganisms, and as it increases the hydraulic impedance to flow, unsaturated flow conditions occur below the mat. This permits greater contact between effluent and soil particles, thereby enhancing the purification process, which is particularly important in the case of highly permeable soils. However, the adverse impacts of clogging mat formation cannot be ignored, as they can lead to a significant reduction in the infiltration rate. This is in fact the most common cause of soil absorption system failure. As the formation of the clogging mat is inevitable, it is important to ensure that it does not impede effluent infiltration beyond tolerable limits. Various strategies have been investigated to either control clogging mat formation or remediate its severity. Intermittent dosing of effluent is one such strategy that has attracted considerable attention. Research conclusions with regard to short-duration rest periods are contradictory.
It has been claimed that intermittent rest periods result in aerobic decomposition of the clogging mat, leading to a subsequent increase in the infiltration rate. Contrary to this, it has also been claimed that short rest periods are insufficient to completely decompose the clogging mat, and that the intermediate by-products formed by aerobic processes in fact lead to even more severe clogging. It has been further recommended that rest periods should be much longer, in the range of about six months, which entails the provision of a second, alternating seepage bed. Other concepts that have been investigated are the design of the bed to meet the equilibrium infiltration rate that eventuates after clogging mat formation; improved geometry, such as the use of seepage trenches instead of beds; serial instead of parallel effluent distribution; and low-pressure dosing of effluent. Physical measures such as oxidation with hydrogen peroxide and replacement of the infiltration surface have been shown to be of only short-term benefit. Another important issue is the degree of pretreatment that should be provided to the effluent prior to subsurface application, and the influence exerted by pollutant loadings on clogging mat formation. Laboratory studies have shown that the total mass loadings of BOD and suspended solids are important factors in the formation of the clogging mat, and that the nature of the suspended solids is also important. The finer particles from extended aeration systems, compared with those from septic tanks, penetrate deeper into the soil and hence ultimately cause a denser clogging mat. However, the importance of improved pretreatment in clogging mat formation may need to be qualified in view of other research studies. It has also been shown that effluent quality may be a factor in the case of highly permeable soils, but this may not be the case with fine-structured soils. The ultimate test of onsite sewage treatment system efficiency rests with the final disposal of effluent. The implications of system failure, as evidenced by surface ponding of effluent or seepage of contaminants into the groundwater, can be very serious, leading to environmental and public health impacts. Significant microbial contamination of surface water and groundwater has been attributed to septic tank effluent, and there are a number of documented instances of septic-tank-related waterborne disease outbreaks affecting large numbers of people. In a recent incident, a local authority, rather than the individual septic tank owners, was found liable for an outbreak of viral hepatitis A because no action had been taken to remedy septic tank failure. This illustrates the responsibility placed on local authorities to ensure the proper operation of onsite sewage treatment systems. Even a properly functioning soil absorption system is only capable of removing phosphorus and microorganisms. The nitrogen remaining after plant uptake will not be retained in the soil column, but will instead gradually seep into the groundwater as nitrate, since conditions for nitrogen removal by denitrification are not generally present in a soil absorption bed. Dilution by groundwater is the only treatment available for reducing the nitrogen concentration to specified levels. Based on subsurface conditions, this essentially entails a maximum allowable density of septic tanks in a given area.
Unfortunately, nitrogen is not the only wastewater pollutant of concern. Relatively long survival times and travel distances have been noted for microorganisms originating from soil absorption systems. This is likely to happen if saturated conditions persist under the soil absorption bed, or through surface runoff of effluent as a result of system failure. Soils have a finite capacity for the removal of phosphorus; once this capacity is exceeded, phosphorus too will seep into the groundwater. The relatively high mobility of phosphorus in sandy soils has been noted in the literature. These issues have serious implications for the design and siting of soil absorption systems. It is important to ensure not only that the system design is based on subsurface conditions, but also that the density of these systems in a given area is controlled. This essentially involves the adoption of a land capability approach to determine the limitations of an individual site for onsite sewage disposal: the most limiting factor at a particular site determines the overall capability classification for that site, which in turn dictates the type of effluent disposal method to be adopted.
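The "most limiting factor" rule lends itself to a compact sketch. The factor names, the 1-4 rating scale, and the mapping from class to disposal method below are hypothetical illustrations, not values from any cited standard.

```python
# Hypothetical ratings: 1 = slight limitation ... 4 = severe limitation.
site_ratings = {
    "soil_permeability": 2,
    "depth_to_water_table": 4,
    "slope": 1,
    "flooding_risk": 2,
}

# The overall capability class is set by the most limiting factor.
overall_class = max(site_ratings.values())
limiting = [f for f, r in site_ratings.items() if r == overall_class]

# Illustrative mapping from capability class to disposal method.
disposal_method = {
    1: "seepage trench or bed",
    2: "seepage trench or bed",
    3: "above-grade mound",
    4: "unsuitable without site modification",
}

print(f"Class {overall_class} (limited by {limiting}): {disposal_method[overall_class]}")
```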
Abstract:
Coal Seam Gas (CSG) is a form of natural gas (mainly methane) sorbed in underground coal beds. To mine this gas, wells are drilled directly into an underground coal seam and groundwater (CSG water) is pumped out to the surface. This lowers the downhole piezometric pressure and enables gas desorption from the coal matrix. In the United States, this gas has been extracted commercially since the 1980s. The economic success of US CSG projects has inspired exploration and development in Australia and New Zealand. In Australia, Queensland's Bowen and Surat basins have been the subject of increased CSG development over the last decade. CSG growth in other Australian basins has not matured to the same level, but exploration and development are taking place at an accelerated pace in the Sydney Basin (Illawarra and the Hunter Valley, NSW) and in the Gunnedah Basin. Similarly, CSG exploration in New Zealand has focused on the Waikato region (Maramarua and Huntly), the West Coast region (Buller, Reefton, and Greymouth), and Southland (Kaitangata, Mataura, and Ohai). Figure 1 shows a Schoeller diagram with CSG samples from selected basins in Australia, New Zealand, and the USA. CSG water from all of these basins exhibits the same geochemical signature: low calcium, low magnesium, high bicarbonate, low sulphate and, sometimes, high chloride. This water quality is a direct result of specific biological and geological processes that have taken place during the formation of CSG. In general, these processes include the weathering of rocks (carbonates, dolomite, and halite), cation exchange with clays (responsible for enhanced sodium and depleted calcium and magnesium), and biogenic processes (accounting for the presence of high bicarbonate concentrations). The salinity of CSG waters tends to be brackish (TDS < 30,000 mg/L) with a fairly neutral pH. These characteristics need to be taken into consideration when assessing water management and disposal alternatives. Environmental issues associated with CSG water disposal have been prominent in developed basins such as the Powder River Basin (PRB) in the United States. When disposed of on land or used for irrigation, water with a high dissolved-salt content may reduce water availability to crops, thus affecting crop yield. In addition, the high sodium, low calcium and low magnesium concentrations increase the potential to disperse soils and significantly reduce the water infiltration rate. Therefore, CSG waters need to be properly characterised, treated, and disposed of to safeguard the environment without compromising other natural resources.
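The sodicity hazard described here is conventionally quantified with the sodium adsorption ratio (SAR); the sketch below and its sample concentrations are illustrative assumptions, not data from the basins discussed.

```python
import math

def sar(na_mg_l, ca_mg_l, mg_mg_l):
    """Sodium adsorption ratio from concentrations in mg/L, converted to
    meq/L with equivalent weights of ~23.0 (Na), ~20.0 (Ca), ~12.2 (Mg)."""
    na = na_mg_l / 23.0
    ca = ca_mg_l / 20.0
    mg = mg_mg_l / 12.2
    return na / math.sqrt((ca + mg) / 2.0)

# Hypothetical sodium-bicarbonate CSG-type water, depleted in Ca and Mg:
print(round(sar(na_mg_l=800.0, ca_mg_l=8.0, mg_mg_l=5.0), 1))  # high SAR -> soil dispersion risk
```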
Abstract:
Virtual Reality (VR) techniques are increasingly being used for education about, and in the treatment of, certain types of mental illness. Research indicates that VR is delivering on its promised potential to provide enhanced training and treatment outcomes through the incorporation of this high-end technology. Schizophrenia is a mental disorder affecting 1-2% of the population, and it is estimated that 12-16% of hospital beds in Australia are occupied by patients with psychosis. Tragically, there is also an increased risk of suicide associated with this diagnosis. A significant research project being undertaken across the University of Queensland faculties of Health Sciences and EPSA (Engineering, Physical Sciences and Architecture) has constructed a number of virtual environments that reproduce the phenomena experienced by patients who have psychosis. Symptoms of psychosis include delusions, hallucinations and thought disorder. The VR environment will allow behavioral exposure therapies to be conducted with precisely controlled exposure stimuli and an expected reduction in the risk of harm. This paper reports on the current work of the project, the previous stages of software development and the final goal of introducing VR to medical consulting rooms.
Abstract:
Emergency health is a critical component of Australia's health system and one which is increasingly congested by growing demand and blocked access to inpatient beds. The Emergency Health Services Queensland (EHSQ) study aims to identify the factors driving increased demand for emergency health and to evaluate strategies which may safely reduce future demand growth. This monograph addresses the characteristics of users of emergency health services with the aim of identifying those that appear to contribute to demand growth. The study utilises data on patients treated by Emergency Departments (ED) and the Queensland Ambulance Service (QAS) across Queensland. ED data were derived from the Emergency Department Information System (EDIS) for the period 2001-02 through to 2010-11. Ambulance data were extracted from the QAS's Ambulance Information Management System (AIMS) and electronic Ambulance Report Form (eARF) for the period 2001-02 through to 2009-10. Due to discrepancies and comparability issues in the ED data, this monograph compares data from the 2003-04 period with 2010-11 data for 21 of the reporting EDs. A snapshot of users for the 2010-11 financial year for 31 reporting EDs is also used to describe the characteristics of users and to compare those characteristics with population demographics. For QAS data, the 2002-03 and 2009-10 periods were selected for detailed analyses to identify trends.
• Demand for emergency health care services is increasing, reflecting both population growth and increased relative utilisation. Per capita demand for ED attention has increased by 2% per annum over the last decade, and for ambulance attention by 3.7% per annum.
• The growth in ED demand is prominent in the more urgent triage categories, with an actual decline in less urgent patients. An estimated 55% of patients attend hospital EDs outside normal working hours. There is no evidence that patients presenting out of hours differ significantly from those presenting within working hours; they have similar triage assessments and outcomes.
• Patients suffering from injuries and poisoning comprise 28% of the ED workload (an increase of 65% over the study period), whilst declines of 32% in cardiovascular and circulatory conditions, and in musculoskeletal problems, have been observed.
• 25.6% of patients attending EDs are admitted to hospital. 19% of admitted patients, and 7% of patients who die in the ED, are triage category 4 or 5 on arrival.
• The average age of ED patients is 35.6 years. Demand has grown in all age groups and amongst both men and women. Men have higher ED utilisation rates in all age groups. The only group in which the growth rate for women has exceeded that for men is the 20-29 age group; this growth is particularly in the injury and poisoning categories.
• Considerable public attention has been paid to ED performance criteria. It is worth noting that 50% of all patients were treated within 33 minutes of arrival.
• Patients from lower socioeconomic areas appear to have higher utilisation rates, and the utilisation rate for Indigenous people appears to exceed that of people of European and other backgrounds. The utilisation rate for immigrant people is generally lower than that for the Australian-born; however, it has not been possible to eliminate the confounding impact of different age and socioeconomic profiles.
• Demand for ambulance services is also increasing at a rate that exceeds population growth. Utilisation rates have increased by an average of 5% per annum in Queensland compared to 3.6% nationally, and the utilisation rate in Queensland is 27% higher than the national average.
• The growth in ambulance utilisation has also been amongst the more urgent dispatch categories, and utilisation rates are higher in rural and regional areas than in the metropolitan area. Demand for ambulance services increases with age, but the growth in demand has been more prominent in younger age groups.
These findings contribute significantly to an understanding of the growth in demand for emergency health. They show that the growth is amongst patients in genuine need of emergency health care; the public rhetoric that congestion of emergency health services is due to inappropriate attendees cannot be substantiated. The consistency of the growth in demand over the last decade reflects not only the changing demographics of the Australian population but also changes in health status, standards of acute health care and other social factors. The growth is also amongst patients with acute injury and poisoning, which is inconsistent with chronic disease being a fundamental driver. We have also interviewed patients regarding their decision-making for acute health care and the factors that influence those decisions; this will be the subject of a third monograph and subsequent publications.
Abstract:
Specialist care consultations were identified by two research nurses using documentation in patient records, appointment diaries, electronic billing services and on-site observations at a 441-bed long-term care facility. Over a six-month period there were 3333 consultations (a rate of 1511 consultations per year per 100 beds). Most consultations were for general practice (n = 2589, 78%); these consultations were mainly on site (99%), with only 27 taking place off site. There were 744 consultations for specialities other than general practice. A total of 146 events related to an emergency or unplanned hospital admission. The remaining medical consultations (n = 598, 18%) related to 23 medical specialities. The largest numbers of consultations were for surgery (n = 106), podiatry (n = 100), nursing services including wound care (n = 74), imaging (n = 41) and ophthalmology (n = 40). Many services which are currently being provided on site to metropolitan long-term care facilities could be provided by telehealth in both urban and rural facilities.
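As a quick check, the headline rate can be reproduced from the figures reported in the abstract; a minimal arithmetic sketch:

```python
consultations = 3333   # observed over six months (from the abstract)
beds = 441             # facility size (from the abstract)
months = 6

rate = consultations * (12 / months) / beds * 100  # per year per 100 beds
print(round(rate))  # ~1512, matching the reported 1511 to rounding
```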
Abstract:
Purpose. To compare the radiological records of 90 consecutive patients who underwent cemented total hip arthroplasty (THA) with or without use of the Rim Cutter to prepare the acetabulum. Methods. The acetabulum was prepared using the Rim Cutter in 45 patients, whereas the device was not used in the other 45 patients. Postoperative radiographs were evaluated using a digital templating system to measure (1) the positions of the operated hips with respect to the normal, contralateral hips (the centre of rotation of the socket, the height of the centre of rotation above the teardrop, and the lateralisation of the centre of rotation from the teardrop) and (2) the uniformity and width of the cement mantle in the 3 DeLee-Charnley acetabular zones, and the number of radiolucencies in these zones. Results. The study group showed improved radiological parameters: it was closer to the anatomic centre of rotation both vertically (1.5 vs. 3.7 mm, p<0.001) and horizontally (1.8 vs. 4.4 mm, p<0.001) and had consistently thicker and more uniform cement mantles (p<0.001). There were 2 radiolucent lines in the control group but none in the study group. Conclusion. The Rim Cutter resulted in more accurate placement of the centre of rotation of a cemented prosthetic socket and produced a thicker, more congruent cement mantle with fewer radiolucent lines.
Abstract:
Objective: A literature review to examine the incorporation of respiratory assessment into everyday surgical nursing practice, possible barriers to this, and the relationship to patient outcomes. Primary argument: Escalating demands on intensive care beds have led to highly dependent patients being cared for in general surgical ward areas. This change in patient demographics means the knowledge and skills required of registered nurses in these areas have expanded exponentially. The literature supports the notion that postoperative monitoring of vital signs should include fundamental assessment of respiratory rate, depth and rhythm; work of breathing; use of accessory muscles and symmetry of chest movement; as well as auscultation of the lung fields using a stethoscope. Early intervention in response to changes in a patient's respiratory health status impacts positively on patient health outcomes. Substantial support exists for the contention that technologically adept nurses who also possess competent respiratory assessment skills make a difference to respiratory care. Conclusions: Sub-clinical respiratory problems have been demonstrated to contribute to adverse events. There is a paucity of research knowledge as to whether respiratory education programs and associated in-service training make a difference to nursing clinical practice. Similarly, the implications for associated respiratory educational needs are not well documented, nor has a research base been sufficiently developed to guide nursing practice. Further research has the potential to influence the future role and function of the registered nurse by determining the importance of respiratory education programs to post-operative patient outcomes.
Abstract:
The unsteady boundary-layer development for thermomagnetic convection of paramagnetic fluids inside a square cavity is considered in this study. The cavity is placed in a microgravity condition (no gravitational acceleration) and under a uniform magnetic field which acts vertically. A ramp temperature boundary condition is applied on the left vertical side wall of the cavity, where the temperature initially increases with time up to a specified time and is maintained constant thereafter. A distinct magnetic convection boundary layer develops adjacent to the left vertical wall due to the magnetic body force generated on the paramagnetic fluid. An improved scaling analysis has been performed using a triple-layer integral method and verified by numerical simulations. The Prandtl number is chosen greater than unity and varied over the range 5-100. Moreover, the effect of various values of the magnetic parameter and the magnetic Rayleigh number on the fluid flow and heat transfer is shown.
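The ramp condition described above has a simple closed form; the sketch below is illustrative, with the dimensionless ramp time and temperature difference assumed rather than taken from the paper.

```python
import numpy as np

def ramp_temperature(t, t_ramp, delta_T):
    """Heated-wall boundary condition: temperature rises linearly with time
    until t_ramp, then is held constant thereafter."""
    return delta_T * np.minimum(t / t_ramp, 1.0)

# Wall temperature at a few instants, assuming t_ramp = 10 and delta_T = 1
# (both dimensionless, hypothetical values):
print(ramp_temperature(np.array([0.0, 5.0, 10.0, 20.0]), t_ramp=10.0, delta_T=1.0))
# -> [0.  0.5 1.  1. ]
```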
Abstract:
One aim of the Australasian Nutrition Care Day Survey was to explore nutrition care practices in acute care hospital wards across Australia and New Zealand. Managers of dietetic departments completed a questionnaire regarding ward nutrition care practices. Overall, 370 wards from 56 hospitals participated. The median ward size was 28 beds (range: 8–60 beds). Although there was a wide variation in the full-time-equivalent availability of dietitians (median: 0.3; range: 0–1.4), their involvement in providing nutrition care across ward specialities was significantly higher than that of other staff members (χ2, p < 0.01). Feeding assistance, available in 89% of the wards, was provided mainly by nursing staff and family members (χ2, p < 0.01). Protected meal times were implemented in 5% (n = 18) of the wards. Fifty-three percent of the wards (n = 192) weighed patients on request and 40% (n = 148) on admission. Routine malnutrition screening was conducted in 63% (n = 232) of the wards; 79% (n = 184) of these wards used the Malnutrition Screening Tool, 16% (n = 37) the Malnutrition Universal Screening Tool, and 5% (n = 11) other tools. Nutrition rescreening was routinely conducted in 20% of the wards. Among wards that implemented nutrition screening, 41% (n = 100) routinely referred patients "at risk" of malnutrition to dietitians as part of their standard protocol for malnutrition management. The results of this study provide new knowledge regarding current nutrition care practice, highlight gaps in existing practice, and can be used to inform improved nutrition care in acute care wards across Australia and New Zealand.
Abstract:
The first representative chemical, structural, and morphological analysis of the solid particles from a single collection surface has been performed. This collection surface sampled the stratosphere between 17 and 19 km in altitude in the summer of 1981, and therefore before the 1982 eruptions of El Chichón. The particle collection surface was washed free of all particles with rinses of Freon and hexane, and the resulting wash was directed through a series of vertically stacked Nuclepore filters. The size cutoff for the solid-particle collection process in the stratosphere is found to be considerably less than 1 μm. The total stratospheric number density of solid particles larger than 1 μm in diameter at the collection time is calculated to be about 2.7×10−1 particles per cubic meter, of which approximately 95% are smaller than 5 μm in diameter. Previous classification schemes are expanded to explicitly recognize low-atomic-number material. With the single exception of the calcium-aluminum-silicate (CAS) spheres, all solid particle types show a logarithmic increase in number concentration with decreasing diameter. The aluminum-rich particles are unique in showing bimodal size distributions; in addition, spheres constitute only a minor fraction of the aluminum-rich material. About 2/3 of the particles examined were found to be shards of rhyolitic glass. This abundant volcanic material could not be correlated with any eruption plume known to have vented directly to the stratosphere. The micrometeorite number density calculated from this data set is 5×10−2 micrometeorites per cubic meter of air, an order of magnitude greater than the best previous estimate. At the collection altitude, the maximum collision frequency of solid particles >5 μm in average diameter is calculated to be 6.91×10−16 collisions per second, which indicates negligible contamination of extraterrestrial particles in the stratosphere by solid anthropogenic particles.
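For scale, a collision frequency of this order can be approximated with a simple kinetic estimate, Z = n·σ·v_rel; the collision diameter and relative velocity below are assumptions for illustration, not values from the study.

```python
import math

# Number density of particles >5 um: ~5% of the 2.7e-1 m^-3 total (abstract).
n = 0.05 * 2.7e-1          # particles per cubic meter
d = 5e-6                   # m, assumed collision diameter (equal spheres)
sigma = math.pi * d**2     # m^2, collision cross-section for equal spheres
v_rel = 1e-4               # m/s, assumed relative (differential settling) speed

print(f"Z ~ {n * sigma * v_rel:.1e} collisions/s")  # ~1e-16, same order as the reported 6.91e-16
```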
Abstract:
Characterization of the combustion products released during the burning of commonly used engineering metallic materials may aid in material selection and risk assessment for the design of oxygen systems. Characterizing the combustion products with regard to size distribution and morphology gives useful information for fire detection systems. Aluminum rods (3.2-mm diameter cylinders) were vertically mounted inside a combustion chamber and ignited in pressurized oxygen by resistively heating an aluminum/palladium igniter wire attached to the bottom of the test sample. This paper describes the experimental work conducted to establish the particle size distribution and morphology of the combustion products, which were collected after burning was complete and subsequently analyzed. In general, the combustion products consisted of re-solidified oxidized slag and many small hollow spheres ranging from about 500 nm to 1000 µm in diameter, surfaced with quenched dendritic and grain-like structures. The combustion products were characterized using optical and scanning electron microscopy.