927 results for Transportation of Injured.
Abstract:
Water-filled portable road safety barriers are a common fixture in road works; however, their use of water can be problematic, both in terms of the quantity of water used and the transportation of the water to the installation site. This project aims to develop a new design of portable road safety barrier that makes novel use of composite and foam materials to reduce the barrier's reliance on water to control errant vehicles. The project uses finite element (FE) techniques to simulate and evaluate design concepts. FE methods and models that have previously been tested and validated will be used in combination to provide the most accurate numerical simulations available to drive the project forward. LS-DYNA is a highly dynamic, non-linear numerical solver commonly used in the automotive and road safety industries. Several complex materials and physical interactions are to be simulated throughout the course of the project, including aluminium foams, composite laminates and water within the barrier during standardised impact tests. Techniques to be used include FE, smoothed particle hydrodynamics (SPH) and weighted multi-parameter optimisation. A detailed optimisation of several design parameters against specific design goals will be performed with LS-DYNA and LS-OPT, which will require a large number of high-accuracy simulations and advanced visualisation techniques. Supercomputing will play a central role in the project, enabling the numerous medium-element-count simulations needed to determine the optimal design parameters of the barrier. Supercomputing will also allow the development of useful methods of visualising results and the production of highly detailed simulations for end-product validation. Efforts thus far have been directed towards integrating various numerical methods (including FEM, SPH and advanced material models) in an efficient and accurate manner. Various designs of joining mechanisms have been devised and are currently being developed into FE models and simulations.
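As a concrete illustration of the weighted multi-parameter optimisation mentioned above, the following sketch collapses two competing design objectives into a single weighted cost and minimises it. The parameter names, surrogate objective functions and weights are invented for illustration; in the actual project each objective evaluation would be an LS-DYNA impact simulation driven by LS-OPT.

```python
# A minimal sketch of weighted-sum multi-objective optimisation, assuming
# hypothetical design parameters and surrogate objectives; in the real
# project each evaluation would be a full LS-DYNA simulation.
from scipy.optimize import minimize

def barrier_objectives(x):
    foam_density, wall_thickness = x
    mass = foam_density * wall_thickness * 10.0          # surrogate: barrier mass
    deflection = 1.0 / (foam_density * wall_thickness)   # surrogate: dynamic deflection
    return mass, deflection

def weighted_cost(x, weights=(0.4, 0.6)):
    # Collapse the competing objectives into one scalar via fixed weights.
    mass, deflection = barrier_objectives(x)
    return weights[0] * mass + weights[1] * deflection

result = minimize(weighted_cost, x0=[0.5, 0.05],
                  bounds=[(0.1, 1.0), (0.01, 0.2)])
print(result.x)  # candidate (foam_density, wall_thickness)
```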
Abstract:
Background: Injury is a leading cause of preventable mortality and morbidity in Australia and the world. Despite this, there is little research examining the health-related quality of life of adults following general trauma. Methods: A prospective cohort design was used to study adults who presented to hospital following injury. Data regarding injury and demographic details were collected through the routine operation of the Queensland Trauma Registry (QTR). In addition, the Short Form 36 (SF-36) was mailed to patients approximately 3 months following injury. Results: Participants included 339 injured patients who were hospitalised for ≥24 h in March–June 2003. A secondary group of 145 patients completed the SF-36 but did not have QTR data collected because their hospitalisation was <24 h. Both groups of participants reported significantly lower scores on all subscales of the SF-36 when compared with Australian norms. Conclusions: Health-related quality of life of injured survivors is markedly reduced 3 months after injury. Ongoing treatment and support are necessary to improve these health outcomes.
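To make the norm comparison concrete, the sketch below tests whether a sample's mean on one SF-36 subscale differs from a population norm, the kind of comparison reported above. The subscale scores and the norm value are invented placeholders, not the study's data.

```python
# A minimal sketch of comparing an SF-36 subscale mean against a population
# norm with a one-sample t-test; all numbers here are illustrative.
import numpy as np
from scipy import stats

physical_functioning = np.array([55, 60, 70, 45, 80, 65, 50, 75])  # hypothetical patient scores
australian_norm = 83.2  # hypothetical population norm mean

t_stat, p_value = stats.ttest_1samp(physical_functioning, australian_norm)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # small p suggests scores differ from the norm
```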
Abstract:
Existing trauma registries in Australia and New Zealand play an important role in monitoring the management of injured patients. Over the past decade, such monitoring has been translated into changes in clinical processes and practices. Monitoring and changes have been ad hoc, as there are currently no Australasian benchmarks for “optimal” injury management. A binational trauma registry is urgently needed to benchmark injury management to improve outcomes for injured patients.
Abstract:
Air transportation of Australian casualties in World War II was initially carried out in air ambulances with an accompanying male medical orderly. By late 1943, with the war effort concentrated in the Pacific, Allied military authorities realised that air transport was needed to move the increasing numbers of casualties over longer distances. The Royal Australian Air Force (RAAF) became responsible for air evacuation of Australian casualties and established a formal medical air evacuation system with trained flight teams early in 1944. Specialised Medical Air Evacuation Transport Units (MAETUs) were established whose sole responsibility was undertaking air evacuations of Australian casualties from the forward operational areas back to definitive medical care. Flight teams consisting of a RAAF nursing sister (registered nurse) and a medical orderly carried out the escort duties. These personnel had been specially trained in Australia for their role. Post-WWII, the RAAF Nursing Service was demobilised, with a limited number of nurses retained for the Interim Air Force. Subsequently, those nurses were offered commissions in the Permanent Air Force. Some of the nurses who remained were air evacuation trained and carried out air evacuations both in Australia and as part of the British Commonwealth Occupation Force in Japan. With the outbreak of the Korean War in June 1950, Australia became responsible for the air evacuation of British Commonwealth casualties from Korea to Japan. With a re-organisation of the Australian forces as part of the British Commonwealth forces, RAAF nurses were posted to Iwakuni, Japan, to undertake air evacuation from Korea and back to Australia. By 1952, a specialised casualty staging section had been established in Seoul, staffed on a rotational basis by RAAF nurses from Iwakuni. The development of the Australian air evacuation system and the role of the flight nurses are not well documented for the period 1943-1953. The aims of this research are threefold: to document the origins and development of the air evacuation system from 1943 to 1953; to analyse and document the RAAF nurses' role; and to explore whether any influences or lessons remain valid today. A traditional historical methodology of narrative followed by analysis was used to situate the flight nurse's role within the totality of the social system. Evidence was based on primary data sources mainly held in Defence files, the Australian War Memorial or the National Archives of Australia. Interviews with 12 ex-RAAF nurses from both WWII and the Korean War were conducted to provide information where there were gaps in the primary data and to enable exploration of the flight nurses' role and their contributions to the wartime air evacuation of casualties. Finally, this thesis highlights two lessons that remain valid today. The first is that interoperability of air evacuation systems with other nations is a force multiplier when resources are scarce or limited. Second, the pre-flight assessment of patients was essential and ensured that there were no deaths in flight.
Abstract:
Objective: To demonstrate properties of the International Classification of the External Cause of Injury (ICECI) as a tool for use in injury prevention research. Methods: The Childhood Injury Prevention Study (CHIPS) is a prospective longitudinal follow-up study of a cohort of 871 children aged 5–12 years, with a nested case-crossover component. The ICECI is the latest tool in the International Classification of Diseases (ICD) family and has been designed to improve the precision of coding injury events. The details of all injury events recorded in the study, as well as all measured injury-related exposures, were coded using the ICECI. This paper reports a substudy on the utility and practicability of using the ICECI in the CHIPS to record exposures. Interrater reliability was quantified for a sample of injured participants using the Kappa statistic to measure concordance between codes independently assigned by two research staff. Results: There were 767 diaries collected at baseline, with event details from 563 injuries and exposure details from the injury crossover periods. There were no event, location, or activity details that could not be coded using the ICECI. Kappa statistics for concordance between raters within each of the dimensions ranged from 0.31 to 0.93 for the injury events and from 0.94 to 0.97 for activity and location in the control periods. Discussion: This study represents the first detailed account of the properties of the ICECI revealed by its use in a primary analytic epidemiological study of injury prevention. The results provide considerable support for the ICECI and its further use.
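For readers unfamiliar with the Kappa statistic used above, it measures chance-corrected agreement between two raters, kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and p_e is agreement expected by chance. The sketch below computes it for two coders; the ICECI-style codes are made-up placeholders.

```python
# A minimal sketch of Cohen's kappa between two raters; the external-cause
# codes below are invented placeholders, not real ICECI codes.
from sklearn.metrics import cohen_kappa_score

rater_a = ["fall", "struck", "fall", "cut", "burn", "fall"]
rater_b = ["fall", "struck", "cut",  "cut", "burn", "fall"]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"kappa = {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance-level agreement
```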
Abstract:
Hot spot identification (HSID) plays a significant role in improving the safety of transportation networks. Numerous HSID methods have been proposed, developed, and evaluated in the literature. The vast majority of HSID methods reported and evaluated in the literature assume that crash data are complete, reliable, and accurate. Crash under-reporting, however, has long been recognized as a threat to the accuracy and completeness of historical traffic crash records. As a natural continuation of prior studies, this paper evaluates the influence that under-reported crashes exert on HSID methods. To conduct the evaluation, five groups of data gathered from the Arizona Department of Transportation (ADOT) over the course of three years are adjusted to account for fifteen assumed levels of under-reporting. Three identification methods are evaluated: simple ranking (SR), empirical Bayes (EB) and full Bayes (FB). Various threshold levels for establishing hotspots are explored. Finally, two evaluation criteria are compared across HSID methods. The results illustrate that the identification bias (the ability to correctly identify at-risk sites) is influenced by the degree of under-reporting. Comparatively speaking, crash under-reporting has the largest influence on the FB method and the least influence on the SR method. Additionally, the impact is positively related to the percentage of under-reported property-damage-only (PDO) crashes and inversely related to the percentage of under-reported injury crashes. This finding is significant because it reveals that although PDO crashes are the least severe and costly, they have the most significant influence on the accuracy of HSID.
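As background to the EB method evaluated above: empirical Bayes screening shrinks each site's observed crash count toward the prediction of a safety performance function (SPF), damping random fluctuation. The sketch below shows the standard weighting for a negative binomial SPF; the SPF mean, overdispersion parameter and crash counts are illustrative values, not ADOT data.

```python
# A minimal sketch of empirical Bayes hot-spot screening. With a negative
# binomial SPF (Var = mu + alpha * mu^2), the EB estimate is a weighted
# average of the SPF prediction and the observed count. Values are invented.
def eb_estimate(observed, spf_mean, overdispersion):
    w = 1.0 / (1.0 + overdispersion * spf_mean)   # weight on the SPF prediction
    return w * spf_mean + (1.0 - w) * observed

sites = {"A": 12, "B": 3, "C": 7}     # observed crashes per site (hypothetical)
spf_mean, alpha = 5.0, 0.8            # hypothetical SPF mean and overdispersion

scores = {s: eb_estimate(x, spf_mean, alpha) for s, x in sites.items()}
hotspots = sorted(scores, key=scores.get, reverse=True)
print(hotspots)  # sites ranked by EB-adjusted expected crash frequency
```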
Abstract:
Infrared spectroscopy has been used to characterize and compare four palygorskite mineral samples from China. The positions of the main bands identified in the infrared spectra are similar, but there are some significant differences in intensity. In addition, several additional bands, arising from impurities, are observed in the spectra of the palygorskites. This variability is attributed to differences in the geological environment, such as the degree of weathering and the extent of transportation of the minerals during formation or deposition, and to the impurity content of these palygorskites. The bands of water and hydroxyl groups in the spectra of the palygorskite samples have been studied. The characteristic band of palygorskite is observed at 1195 cm−1. Another four bands, observed at 3480, 3380, 3266 and 3190 cm−1, are attributed to water molecules in the palygorskite structure. These results suggest that the infrared spectra of palygorskite minerals from different regions are determined not only by the main physicochemical properties of palygorskite, but also by the amount and kind of impurities.
Abstract:
Background: In Pacific Island Countries (PICs), the epidemiology of dengue is characterized by long-term transmission of a single dengue virus (DENV) serotype. The emergence of a new serotype in one island country often indicates that major outbreaks with this serotype will follow in other PICs. Objectives: Filter paper (FP) cards on which whole blood or serum from patients with suspected dengue had been dried were evaluated as a means of transporting this material by standard mail delivery throughout the Pacific. Study design: Twenty-two FP-dried whole blood samples collected from patients in New Caledonia and the Wallis & Futuna Islands during DENV-1 and DENV-4 transmission, and 76 FP-dried sera collected from patients in Yap State, Majuro (Republic of the Marshall Islands), Tonga and Fiji before and during outbreaks of DENV-2 in Yap State and DENV-4 in Majuro, were tested for the presence of DENV RNA by serotype-specific RT-PCR at the Institut Louis Malardé in French Polynesia. Results: The serotype of DENV could be determined, by a variety of RT-PCR procedures, in the FP-dried samples after more than three weeks of transport at ambient temperatures. In most cases, sequencing of the envelope gene to genotype the viruses was also possible. Conclusions: The serotype and genotype of DENV can be determined from FP-dried serum or whole blood samples transported over thousands of kilometers at ambient, tropical temperatures. This simple and low-cost approach to virus identification should be evaluated in isolated and resource-poor settings for surveillance of a range of significant viral diseases.
Abstract:
Bridges are important infrastructure for all nations and are required for the transportation of goods as well as people. A catastrophic failure can result in loss of life and enormous financial hardship for a nation. Although various kinds of sensors are now available to monitor the health of structures subject to corrosion, they do not provide permanent and long-term measurements. This paper investigates the fabrication of carbon nanotube (CNT)-based composite sensors for corrosion detection in structures. Multi-wall CNT (MWCNT)/Nafion composite sensors were fabricated and their electrical properties evaluated for corrosion detection. The test specimens were subjected to real-life corrosion tests, and the results confirm that the electrical resistance of the sensor electrode changed dramatically due to corrosion.
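Since the corrosion signal described above is a change in electrode resistance, a monitoring system might flag corrosion from the fractional change relative to a baseline reading. The sketch below illustrates this idea; the threshold and resistance values are hypothetical, not taken from the paper.

```python
# A minimal sketch of flagging corrosion from the fractional resistance
# change of a sensor electrode; threshold and readings are hypothetical.
def corrosion_flag(r_baseline, r_measured, threshold=0.10):
    delta = (r_measured - r_baseline) / r_baseline  # fractional change dR/R0
    return delta, delta > threshold

delta, corroded = corrosion_flag(r_baseline=120.0, r_measured=151.3)  # ohms
print(f"dR/R0 = {delta:.1%}, corrosion suspected: {corroded}")
```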
Abstract:
BACKGROUND: Over the past 10 years, the use of saliva as a diagnostic fluid has gained attention and has become a translational research success story. Some current nanotechnologies have been demonstrated to have the analytical sensitivity required for the use of saliva as a diagnostic medium to detect and predict disease progression. However, these technologies have not yet been integrated into current clinical practice and workflow. CONTENT: As a diagnostic fluid, saliva offers advantages over serum because it can be collected noninvasively by individuals with modest training, and it offers a cost-effective approach to the screening of large populations. Gland-specific saliva can also be used for diagnosis of pathology specific to one of the major salivary glands. There is minimal risk of contracting infections during saliva collection, and saliva can be used in clinically challenging situations, such as obtaining samples from children or handicapped or anxious patients, in whom blood sampling could be difficult to perform. In this review we highlight the production and secretion of saliva, the salivary proteome, the transportation of biomolecules from blood capillaries to salivary glands, and the diagnostic potential of saliva for the detection of cardiovascular disease and oral and breast cancers. We also highlight the barriers to the application of saliva testing and to its advancement in clinical settings. SUMMARY: Saliva has the potential to become a first-line diagnostic sample of choice owing to advancements in detection technologies coupled with combinations of biomolecules of clinical relevance.
Abstract:
This thesis addresses the following broad research question: what did it mean to be a disabled Revolutionary War veteran in the early United States during the period from 1776 to roughly 1840? The study approaches the question from two angles: a state-centred one and an experiential one. In both cases, the theoretical framework employed comes from disability studies. Consequently, disability is regarded as a sociocultural phenomenon rather than a medical condition. The state-centred dimension of the study explores the meaning of disability and disabled veterans to the early American state through an examination of the major military pension laws of the period. An analysis of this legislation, particularly the invalid pension acts of 1793 and 1806, indicates that the early United States represents a key period in the development of the modern disability category. The experiential approach, in contrast, shifts the focus of attention away from the state towards the lived experiences of disabled veterans. It seeks to address the issue of whether or not the disabilities of disabled veterans had any significant material impact on their everyday lives. It does this through a comparison of the situation of 153 disabled veterans with that of an equivalent number of nondisabled veterans. The former group received invalid pensions while the latter did not. In comparing the material conditions of disabled and nondisabled veterans, a wide range of primary sources, from military records to memoirs and letters, is used. The most important sources in this regard are the pension application papers submitted by veterans in the early nineteenth century. These provide a unique insight into the everyday lives of veterans. Looking at the issue of experience through the window of the pension files reveals that there was not much difference in the broad contours of disabled and nondisabled veteran life. This finding has implications for the theorisation of disability that are highlighted and discussed in the thesis. The main themes covered in this study are: the wartime experiences of injured American soldiers, the military pension establishment of the early United States and the legal construction of disability, and the post-war working and family lives of disabled veterans. Keywords: disability, early America, veterans, military pensions, disabled people, Revolutionary War, United States, disability theory.
Abstract:
The aim of this study is to examine the relationship of the Roman villa to its environment. The villa was an important feature of the countryside, intended both for agricultural production and for leisure. Manuals of Roman agriculture give instructions on how to select a location for an estate. The ideal location was a moderate slope facing east or south, in a healthy area and good neighborhood, near good water resources and fertile soils. A road, a navigable river or the sea was needed for transportation of produce, and a market for selling the produce, a town or a village, should have been nearby. The research area is the surroundings of the city of Rome, a key area for the development of the villa. The materials used consist of archaeological settlement sites, literary and epigraphical evidence, and environmental data. The sites include all settlement sites from the 7th century BC to the 5th century AD, in order to examine changes in the tradition of site selection. Geographical Information Systems were used to analyze the data. Six aspects of location were examined: geology, soils, water resources, terrain, visibility/viewability, and relationship to roads and habitation centers. Geology was important for finding building materials, and the large villas from the 2nd century BC onwards are close to sources of building stone. Fertile soils were sought even in the period of the densest settlement. The area is rich in water, both rainfall and groundwater, so finding a water supply was fairly easy. A certain kind of terrain was sought over very long periods: a small spur or ridge shoulder, preferably facing south, with an open area in front of the site. The most popular villa resorts are located on slopes visible from almost the entire Roman region. A visible villa served the social and political aspirations of the owner, whereas being in the villa created a sense of privacy. The area has a very dense road network ensuring good connectivity from almost anywhere in the region. The best visibility/viewability, dense settlement and most burials by roads coincide, creating a good neighborhood. The locations featuring the most qualities cover nearly a quarter of the area, and more than half of the settlement sites are located in them. The ideal location was based on centuries of practical experience and rationalized by the literary tradition.
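The six locational aspects analysed above lend themselves to a simple multi-criteria screening; the sketch below scores sites by how many criteria they meet, in the spirit of the GIS analysis described, though the sites and boolean criteria here are invented placeholders for the real raster and vector analyses.

```python
# A minimal sketch of multi-criteria site screening; sites and criteria
# are invented placeholders for real GIS layers.
sites = {
    "site_1": {"stone": True,  "fertile": True,  "water": True,
               "terrain": True,  "visible": False, "road": True},
    "site_2": {"stone": False, "fertile": True,  "water": True,
               "terrain": False, "visible": False, "road": True},
}

def suitability(criteria, minimum=4):
    score = sum(criteria.values())  # count of locational qualities met
    return score, score >= minimum

for name, criteria in sites.items():
    score, suitable = suitability(criteria)
    print(f"{name}: {score}/6 criteria met, suitable: {suitable}")
```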
Abstract:
This paper establishes reference ranges for hematologic and plasma biochemistry values in wild Black flying-foxes (Pteropus alecto) captured in South East Queensland, Australia. Values were found to be consistent with those of other Pteropus species. Four hundred and forty-seven animals were sampled over 12 months, and significant differences were found between age, sex, reproductive and body condition cohorts in the sample population. Mean values for each cohort fell within the determined normal adult reference range, with the exception of elevated levels of alkaline phosphatase in juvenile animals. Hematologic and biochemistry parameters of animals with minor injuries showed little or no deviation from the normal reference values, while two animals with more severe injury or abscessation showed leucocytosis, anaemia, thrombocytosis, hyperglobulinemia and hypoalbuminemia.
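For illustration, a 95% reference range of the kind established above is conventionally taken as the central 95% of values from the healthy reference population. The sketch below derives one from simulated data; the values are invented, not the study's measurements.

```python
# A minimal sketch of deriving a 95% reference interval as the 2.5th-97.5th
# percentile band of a healthy reference sample; the simulated haematocrit
# values are invented for illustration.
import numpy as np

adult_values = np.random.default_rng(1).normal(loc=0.45, scale=0.04, size=447)
low, high = np.percentile(adult_values, [2.5, 97.5])
print(f"reference range: {low:.2f}-{high:.2f}")
```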
Abstract:
Knowledge of optimal rearing conditions, such as water temperature and quality, photoperiod and density, together with an understanding of the animals' nutritional requirements, forms the basis of economically stable aquaculture for freshwater crayfish. However, the shift from a natural environment to effective culture conditions induces several changes, not only at the population level but also at the individual level. Social contact between conspecifics increases with increasing animal density. Competition for limited resources (e.g. food, shelter, mates) becomes more severe in the presence of agonistic behaviour and may lead to their unequal distribution. The objectives of this study were to: 1) study the distribution of a common food resource between communally reared signal crayfish (Pacifastacus leniusculus) and assign a potential feeding hierarchy on the basis of individual food intake measurements, 2) explore the possibility of manipulating size distribution to affect population dynamics and food intake so as to improve growth and survival in culture, and 3) study the effect of food ration and spatial distribution on food intake, and explore the effect of temperature and food ration on growth and body composition of freshwater crayfish. Feeding ranks among animals were assigned with a new method for measuring individual food intake of communally reared crayfish. This technique is highly feasible and has great potential for application in crayfish aquaculture studies. In this study, signal crayfish showed high size-related variability in food consumption, both among individuals within a group (inter-individual) and from day to day within individuals (intra-individual). Increased competition for food led to an unequal distribution of this resource, which may explain the large growth differences between animals. Consumption was significantly higher when animals were reared individually rather than communally. These results suggest that communally housed crayfish form a feeding hierarchy and that animal size is the major factor controlling position in this hierarchy. The optimisation of the social environment (social conditions) was evaluated in this study as a new approach to crayfish aquaculture. The results showed that the absence of conspecifics (individual rearing vs. communal housing) affects growth rate, food intake and the proportion of injured animals, whereas size variation between animals influences the number and duration of agonistic encounters. In addition, animal size had a strong influence on the fighting success of signal crayfish reared in a social milieu with a wide size variation of conspecifics. Larger individuals initiated and won most of the competitions, which suggests a size-based social hierarchy in P. leniusculus. This is further supported by the fact that the length and weight gain of smaller animals increased after size grading, possibly because of better access to the food resource due to diminished social pressure. However, the high dominance index was not based on size under conditions of limited size variation, e.g. those characteristic of restocked natural populations and aquaculture, indicating the important role of behaviour in social hierarchy.
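As a small illustration of the dominance index mentioned above, one common formulation is the proportion of agonistic encounters an individual wins; the encounter records below are invented placeholders, not the study's data.

```python
# A minimal sketch of a win-proportion dominance index; encounter records
# (winner, loser) are invented for illustration.
encounters = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B")]

def dominance_index(animal, bouts):
    wins = sum(1 for w, _ in bouts if w == animal)
    total = sum(1 for w, l in bouts if animal in (w, l))
    return wins / total if total else 0.0

for animal in ("A", "B", "C"):
    print(animal, round(dominance_index(animal, encounters), 2))
```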