948 results for sources of guidance


Relevance:

90.00%

Publisher:

Abstract:

Abstract: The occupational health risk involved with handling nanoparticles is the probability that a worker will experience an adverse health effect: this is calculated as a function of the worker's exposure relative to the potential biological hazard of the material. Addressing the risks of nanoparticles therefore requires knowledge of occupational exposure and of the release of nanoparticles into the environment, as well as toxicological data. However, information on exposure is currently not systematically collected; this risk assessment therefore lacks quantitative data. This thesis aimed, first, at creating the fundamental data necessary for a quantitative assessment and, second, at evaluating methods to measure occupational nanoparticle exposure. The first goal was to determine what is being used where in Swiss industries. This was followed by an evaluation of the adequacy of existing measurement methods for assessing workplace nanoparticle exposure to complex size distributions and concentration gradients. The study was conceived as a series of methodological evaluations aimed at better understanding nanoparticle measurement devices and methods. It focused on inhalation exposure to airborne particles, as respiration is considered to be the most important entry pathway for nanoparticles into the body in terms of risk. The targeted survey (pilot study) was conducted as a feasibility study for a later nationwide survey on the handling of nanoparticles and the application of specific protective measures in industry. The study consisted of targeted phone interviews with health and safety officers of Swiss companies that were believed to use or produce nanoparticles. This was followed by a representative survey on the level of nanoparticle usage in Switzerland. It was designed based on the results of the pilot study and was conducted among a representative selection of clients of the Swiss National Accident Insurance Fund (SUVA), covering about 85% of Swiss production companies. The third part of this thesis focused on the methods to measure nanoparticles. Several pre-studies were conducted studying the limits of commonly used measurement devices in the presence of nanoparticle agglomerates. This focus was chosen because several discussions with users and producers of the measurement devices raised questions about their accuracy in measuring nanoparticle agglomerates and because, at the same time, the two survey studies revealed that such powders are frequently used in industry. The first preparatory experiment focused on the accuracy of the scanning mobility particle sizer (SMPS), which showed an improbable size distribution when measuring powders of nanoparticle agglomerates. Furthermore, the thesis includes a series of smaller experiments that took a closer look at problems encountered with other measurement devices in the presence of nanoparticle agglomerates: condensation particle counters (CPC), the portable aerosol spectrometer (PAS), a device to estimate the aerodynamic diameter, as well as diffusion size classifiers. Some initial feasibility tests for the efficiency of filter-based sampling and subsequent counting of carbon nanotubes (CNT) were conducted last. The pilot study provided a detailed picture of the types and amounts of nanoparticles used and of the knowledge of the health and safety experts in the companies.
Considerable maximal quantities (> 1'000 kg/year per company) of Ag, Al-Ox, Fe-Ox, SiO2, TiO2, and ZnO (mainly first-generation particles) were declared by the contacted Swiss companies. The median quantity of handled nanoparticles, however, was 100 kg/year. The representative survey was conducted by contacting by post a representative selection of 1'626 SUVA clients (Swiss National Accident Insurance Fund). It allowed estimation of the number of companies and workers dealing with nanoparticles in Switzerland. The extrapolation from the surveyed companies to all companies of the Swiss production sector suggested that 1'309 workers (95% confidence interval 1'073 to 1'545) of the Swiss production sector are potentially exposed to nanoparticles in 586 companies (145 to 1'027). These numbers correspond to 0.08% (0.06% to 0.09%) of all workers and to 0.6% (0.2% to 1.1%) of companies in the Swiss production sector. To measure airborne concentrations of sub-micrometre-sized particles, a few well-known methods exist. However, it was unclear how well the different instruments perform in the presence of the often quite large agglomerates of nanostructured materials. The evaluation of devices and methods focused on nanoparticle agglomerate powders. It allowed the identification of the following potential sources of inaccurate measurements at workplaces with considerably high concentrations of airborne agglomerates:
- A standard SMPS showed bimodal particle size distributions when measuring large nanoparticle agglomerates.
- Differences in the range of a factor of a thousand were shown between diffusion size classifiers and CPC/SMPS.
- The agreement between CPC/SMPS and the portable aerosol spectrometer (PAS) was much better but, depending on the concentration, size or type of the powders measured, the differences could still reach an order of magnitude.
- Specific difficulties and uncertainties in the assessment of workplaces were identified: background particles can interact with particles created by a process, which makes accounting for the background concentration difficult.
- Electric motors produce high numbers of nanoparticles and confound the measurement of the process-related exposure.
Conclusion: The surveys showed that nanoparticle applications exist in many industrial sectors in Switzerland and that some companies already use large quantities of them. The representative survey demonstrated a low prevalence of nanoparticle usage in most branches of Swiss industry and led to the conclusion that the introduction of applications using nanoparticles (especially outside industrial chemistry) is only beginning. Even though the number of potentially exposed workers was reportedly rather small, it nevertheless underscores the need for exposure assessments. Understanding exposure and how to measure it correctly is very important because the potential health effects of nanomaterials are not yet fully understood. The evaluation showed that many devices and methods for measuring nanoparticles need to be validated for nanoparticle agglomerates before large exposure assessment studies can begin.

Summary: The health risk of nanoparticles at the workplace is the probability that a worker suffers an adverse health effect when exposed to this substance; it is usually calculated as the product of hazard and exposure. A thorough assessment of the possible risks of nanomaterials therefore requires information on the release of such materials into the environment on the one hand, and on the exposure of workers on the other. Much of this information is not yet collected systematically and is therefore missing from risk analyses. This thesis aimed to create the basis for a quantitative estimate of occupational exposure to nanoparticles and to evaluate the methods needed to measure such exposure. The study was to investigate to what extent nanoparticles are already used in Swiss industry, how many workers potentially come into contact with them, and whether the measurement technology for the necessary workplace exposure measurements is already adequate. It focused on exposure to airborne particles, because respiration is regarded as the main entry route for particles into the body. The thesis is built on three phases: a qualitative survey (pilot study), a representative Swiss survey, and several technical studies serving the specific understanding of the possibilities and limits of individual measurement devices and techniques. The qualitative telephone survey was conducted as a preliminary study for a national, representative survey of Swiss industry. It aimed at information on the occurrence of nanoparticles and on the protective measures applied, and consisted of targeted telephone interviews with occupational health and safety specialists of Swiss companies. The companies were selected on the basis of publicly available information indicating that they handle nanoparticles. The second part of the thesis was the representative study evaluating the prevalence of nanoparticle applications in Swiss industry. It built on information from the pilot study and was conducted with a representative selection of companies insured by the Swiss National Accident Insurance Fund (SUVA), thereby covering the majority of Swiss companies in the industrial sector. The third part of the thesis focused on the methodology for measuring nanoparticles. Several preliminary studies were conducted to probe the limits of commonly used nanoparticle measurement devices when they have to measure larger quantities of nanoparticle agglomerates. This focus was chosen for two reasons: because several discussions with users and a producer of the measurement devices suggested a weakness there, raising doubts about the accuracy of the devices, and because the two surveys showed that such nanoparticle agglomerates occur frequently. A first preliminary study addressed the accuracy of the scanning mobility particle sizer (SMPS), which displayed implausible bimodal particle size distributions in the presence of nanoparticle agglomerates. A series of short experiments followed, concentrating on other measurement devices and their problems when measuring nanoparticle agglomerates: the condensation particle counter (CPC), the portable aerosol spectrometer (PAS), a device for estimating the aerodynamic diameter of particles, and the diffusion size classifier were tested. Finally, some initial feasibility tests on the efficiency of filter-based measurement of airborne carbon nanotubes (CNT) were carried out. The pilot study provided a detailed picture of the types and quantities of nanoparticles used in Swiss companies and documented the state of knowledge of the interviewed health and safety specialists. The following types of nanoparticles were declared by the contacted companies in maximal quantities (> 1'000 kg per year and company): Ag, Al-Ox, Fe-Ox, SiO2, TiO2, and ZnO (mainly first-generation nanoparticles). The quantities of nanoparticles used varied widely, with a median of 100 kg per year. In the quantitative questionnaire study, 1'626 companies, all clients of the Swiss National Accident Insurance Fund (SUVA), were contacted by mail. The survey results allowed an estimate of the number of companies and workers using nanoparticles in Switzerland. Extrapolation to the Swiss industrial sector yielded the following picture: in 586 companies (95% confidence interval: 145 to 1'027 companies), 1'309 workers are potentially exposed to nanoparticles (95% CI: 1'073 to 1'545). These figures correspond to 0.6% of Swiss companies (95% CI: 0.2% to 1.1%) and 0.08% of the workforce (95% CI: 0.06% to 0.09%). Several well-established technologies exist for measuring the airborne concentration of submicrometre particles, but it is doubtful how far these technologies can also be used to measure engineered nanoparticles. For this reason, the preparatory studies for the workplace assessments focused on the measurement of powders containing nanoparticle agglomerates. They allowed the identification of the following possible sources of erroneous measurements at workplaces with elevated airborne concentrations of nanoparticle agglomerates:
- A standard SMPS showed an implausible bimodal particle size distribution when measuring larger nanoparticle agglomerates.
- Large differences, in the range of a factor of a thousand, were found between a diffusion size classifier and several CPCs (respectively the SMPS).
- The differences between CPC/SMPS and the PAS were smaller but, depending on the size or type of the measured powder, could still reach an order of magnitude.
- Specific difficulties and uncertainties in workplace measurements were identified: background particles can interact with particles released during a work process, and such interactions make it difficult to correctly account for the background particle concentration in the measurement data.
- Electric motors produce large numbers of nanoparticles and can thus confound the measurement of the process-related exposure.
Conclusion: The surveys showed that nanoparticles are already a reality in Swiss industry and that some companies already use large quantities of them. The representative survey put this striking finding into perspective by showing that the number of such companies across Swiss industry as a whole is relatively small. In most branches (especially outside the chemical industry), few or no applications were found, suggesting that the introduction of this new technology is only at the beginning of its development. Even though the number of potentially exposed workers is still relatively small, the study nevertheless underscores the need for exposure measurements at these workplaces. Knowledge of the exposure, and of how to measure it correctly, is very important, above all because the possible health effects are not yet fully understood. The evaluation of several devices and methods showed, however, that there is still ground to make up: before larger measurement studies can be conducted, the devices and methods must be validated for use with nanoparticle agglomerates.
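As a quick plausibility check on the extrapolation figures reported above, the short sketch below assumes a simple ratio estimate; the thesis' actual survey weighting scheme is not described in the abstract, so this is illustrative arithmetic only.

```python
# Back-of-the-envelope check of the survey extrapolation above, assuming a
# simple ratio estimate (the thesis' actual weighting scheme is not given).
exposed_workers = 1309        # point estimate, 95% CI (1073, 1545)
share_workers = 0.0008        # 0.08% of the production-sector workforce

exposed_companies = 586       # point estimate, 95% CI (145, 1027)
share_companies = 0.006       # 0.6% of production-sector companies

# Implied sizes of the reference populations:
print(exposed_workers / share_workers)      # ~1.64 million workers
print(exposed_companies / share_companies)  # ~98,000 companies
```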

Relevance:

90.00%

Publisher:

Abstract:

Doxorubicin (DOX) is one of the most potent antitumor agents available; however, its clinical use is limited by its cardiotoxicity. Cell death is a key component of DOX-induced cardiotoxicity, but its mechanisms are elusive. Here, we explore the role of superoxide, nitric oxide (NO), and peroxynitrite in DOX-induced cell death using both in vivo and in vitro models of cardiotoxicity. Western blot analysis, real-time PCR, immunohistochemistry, flow cytometry, fluorescence microscopy, and biochemical assays were used to determine the markers of apoptosis/necrosis and the sources of NO and superoxide and their production. Left ventricular function was measured by a pressure-volume system. We demonstrated increases in myocardial apoptosis (caspase-3 cleavage/activity, cytochrome c release, and TUNEL), inducible NO synthase (iNOS) expression, mitochondrial superoxide generation, 3-nitrotyrosine (NT) formation, matrix metalloproteinase (MMP)-2/MMP-9 gene expression, and poly(ADP-ribose) polymerase activation [without major changes in NAD(P)H oxidase isoform 1, NAD(P)H oxidase isoform 2, p22(phox), p40(phox), p47(phox), p67(phox), xanthine oxidase, endothelial NOS, and neuronal NOS expression], and decreases in myocardial contractility and in catalase and glutathione peroxidase activities, 5 days after DOX treatment in mice. All these effects of DOX were markedly attenuated by peroxynitrite scavengers. Doxorubicin dose-dependently increased mitochondrial superoxide and NT generation and apoptosis/necrosis in cardiac-derived H9c2 cells. DOX- or peroxynitrite-induced apoptosis/necrosis positively correlated with intracellular NT formation and could be abolished by peroxynitrite scavengers. DOX-induced cell death and NT formation were also attenuated by selective iNOS inhibitors or in iNOS knockout mice. Various NO donors, when coadministered with DOX but not alone, dramatically enhanced DOX-induced cell death with concomitantly increased NT formation. DOX-induced cell death was also attenuated by cell-permeable SOD, but not by cell-permeable catalase, the xanthine oxidase inhibitor allopurinol, or the NADPH oxidase inhibitors apocynin or diphenyleneiodonium. Thus, peroxynitrite is a major trigger of DOX-induced cell death both in vivo and in vitro, and the modulation of the pathways leading to its generation, or its effective neutralization, can be of significant therapeutic benefit.

Relevance:

90.00%

Publisher:

Abstract:

Despite the large number of studies addressing the quantification of phosphorus (P) availability by different extraction methods, many questions remain unanswered. The aim of this paper was to compare the effectiveness of the extractants Mehlich-1, Anionic Resin (AR) and Mixed Resin (MR) in determining the availability of P under different experimental conditions. The laboratory study was arranged in randomized blocks in a [(3 x 3 x 2) + 3] x 4 factorial design, with four replications, testing the response of three soils of different texture (a very clayey Red Latosol (LV), a sandy clay loam Red Yellow Latosol (LVA), and a sandy loam Yellow Latosol (LA)) to three P sources (triple superphosphate; reactive phosphate rock from Gafsa, Tunisia; and natural phosphate from Araxá, Minas Gerais) at two P rates (75 and 150 mg dm-3), plus three control treatments (each soil without P application), after four contact periods (15, 30, 60, and 120 days) of the P sources with the soil. The soil acidity of LV and LVA was adjusted by raising base saturation to 60 % with the application of CaCO3 and MgCO3 at a 4:1 molar ratio (LA required no correction). The samples were then maintained at field moisture capacity for 30 days. After the contact periods, the samples were collected to quantify the available P concentrations by the three extractants. In general, all three indicated that the available P content in the soils was reduced by longer contact periods with the P sources. Of the three sources, this reduction was most pronounced for triple superphosphate, intermediate for reactive phosphate, while Araxá phosphate was least sensitive to the effect of time. AR extracted lower P levels from all three soils when the sources were phosphate rocks, while MR extracted values close to Mehlich-1 in LV (clayey) and LVA (medium texture) for reactive phosphate. For Araxá phosphate, much higher P values were determined by Mehlich-1 than by the resins, because of the acidity of the extractant. For triple superphosphate, both resins extracted higher P levels than Mehlich-1, due to the consumption of this extractant, particularly in LV and LVA.
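To make the [(3 x 3 x 2) + 3] x 4 layout concrete, the short enumeration below reproduces the treatment count; the labels are illustrative shorthand, not the authors' own coding.

```python
# Hedged sketch: enumerating the [(3 x 3 x 2) + 3] x 4 factorial design
# described above. Treatment labels are illustrative, not the authors'.
from itertools import product

soils = ["LV", "LVA", "LA"]
sources = ["triple superphosphate", "Gafsa rock phosphate", "Araxa phosphate"]
rates = [75, 150]                 # P rates [mg dm-3]
contact_days = [15, 30, 60, 120]  # contact periods of P source with soil
replications = 4

# (3 soils x 3 sources x 2 rates) + 3 unfertilized controls = 21 treatments
treatments = list(product(soils, sources, rates)) + [(s, "control", 0) for s in soils]

units = len(treatments) * len(contact_days) * replications
print(len(treatments), units)  # 21 treatments -> 21 * 4 * 4 = 336 experimental units
```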

Relevance:

90.00%

Publisher:

Abstract:

In Brazilian agriculture, urea is the most commonly used nitrogen (N) source, despite the disadvantage of losing considerable amounts of N through ammonia volatilization. The objectives of this study were to evaluate N loss by ammonia volatilization from urea coated with copper sulfate and boric acid, urea coated with zeolite, urea + ammonium sulfate, urea coated with copper sulfate and boric acid + ammonium sulfate, common urea, and ammonium nitrate, and to evaluate the effect of these N sources on maize yield, in terms of both amount and quality. The treatments were applied to the soil surface of a no-tillage maize crop in two growing seasons. The first season (2009/2010) followed a maize crop (maize straw left on the soil surface) and the second (2010/2011) followed a soybean crop. Owing to the weather conditions during the experiments, the volatilization of ammonia-N was highest in the first four days after application of the N sources. Of all urea sources, under volatilization-favorable conditions, the ammonia loss from urea coated with copper sulfate and boric acid was lowest, while under high rainfall the losses from the different urea sources were similar, i.e., adequate rainfall was favorable in reducing volatilization. Maize grain yield differed with N application and among the treatments, but only in the 2009/2010 season, over maize crop residues. The combination of ammonium sulfate with urea coated with copper sulfate and boric acid optimized grain yield compared to the other urea treatments. The crude protein concentration in maize was not influenced by the urea-coating technologies.

Relevance:

90.00%

Publisher:

Abstract:

Optical aberration due to the non-flatness of spatial light modulators used in holographic optical tweezers significantly deteriorates the quality of the trap and may easily prevent stable trapping of particles. We use a Shack-Hartmann sensor to measure the distorted wavefront at the modulator plane; the conjugate of this wavefront is then added to the holograms written into the display to counteract its own curvature and thus compensate for the optical aberration of the system. For a Holoeye LC-R 2500 reflective device, flatness is improved from 0.8λ to λ/16 (λ=532 nm), leading to a diffraction-limited spot at the focal plane of the microscope objective, which makes stable trapping possible. This process could be fully automated in a closed-loop configuration and would eventually allow other sources of aberration in the optical setup to be corrected for.
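The correction step itself reduces to simple phase arithmetic on the hologram. The sketch below assumes phase-valued arrays and illustrative names; real SLM driver code is vendor-specific.

```python
# Illustrative sketch of the correction step described above: subtract the
# measured aberration phase from each hologram before display. Array names
# and shapes are assumptions, not the authors' code.
import numpy as np

def corrected_hologram(hologram_phase: np.ndarray,
                       aberration_phase: np.ndarray) -> np.ndarray:
    """Add the conjugate of the measured wavefront (i.e. subtract its phase)
    and wrap the result back into [0, 2*pi) for display on the SLM."""
    return np.mod(hologram_phase - aberration_phase, 2 * np.pi)

# Example: a trap-steering hologram combined with a Shack-Hartmann phase map
rng = np.random.default_rng(0)
holo = rng.uniform(0, 2 * np.pi, size=(768, 1024))  # desired trap hologram
aberr = rng.uniform(0, 0.5, size=(768, 1024))       # measured wavefront error
slm_frame = corrected_hologram(holo, aberr)
```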

Relevance:

90.00%

Publisher:

Abstract:

Coalescing compact binary systems are important sources of gravitational waves. Here we investigate the detectability of this gravitational radiation by the recently proposed laser interferometers. The spectral density of noise for various practicable configurations of the detector is also reviewed. This includes laser interferometers with delay lines and Fabry-Pérot cavities in the arms, both in standard and dual recycling arrangements. The sensitivity of the detector in all those configurations is presented graphically and the signal-to-noise ratio is calculated numerically. For all configurations we find values of the detector's parameters which maximize the detectability of coalescing binaries, the discussion comprising Newtonian- as well as post-Newtonian-order effects. Contour plots of the signal-to-noise ratio are also presented in certain parameter domains which illustrate the interferometer's response to coalescing binary signals.
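The numerical signal-to-noise computation referred to above is, in standard notation, the matched-filter SNR, rho^2 = 4 * int |h(f)|^2 / S_n(f) df. The sketch below evaluates it for a Newtonian-order inspiral amplitude against a toy noise curve; both the normalization and the noise model are placeholders, not the paper's detector configurations.

```python
# Hedged sketch of a matched-filter SNR integral for a coalescing-binary
# ("chirp") signal. The noise curve and normalization are toy placeholders,
# not the delay-line / Fabry-Perot configurations analysed in the paper.
import numpy as np

f = np.linspace(10.0, 1000.0, 100_000)  # frequency band [Hz]
df = f[1] - f[0]

# Newtonian-order inspiral amplitude scales as f^(-7/6); the overall
# constant (masses, distance) is absorbed into an arbitrary normalization.
h_f = f ** (-7.0 / 6.0)

# Toy one-sided noise spectral density with a minimum near 150 Hz.
S_n = 1e-46 * ((f / 150.0) ** -4 + 2.0 + (f / 150.0) ** 2)

rho = np.sqrt(4.0 * np.sum(np.abs(h_f) ** 2 / S_n) * df)
print(f"matched-filter SNR (arbitrary normalization): {rho:.3e}")
```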

Relevance:

90.00%

Publisher:

Abstract:

Abstract: Traditionally, the common reserving methods used by non-life actuaries are based on the assumption that future claims will behave in the same way as they did in the past. There are two main sources of variability in the process of development of the claims: the variability of the speed with which the claims are settled and the variability in the severity of the claims from different accident years. Large changes in these processes will generate distortions in the estimation of the claims reserves. The main objective of this thesis is to provide an indicator that, first, identifies and quantifies these two influences and, second, determines which model is adequate for a specific situation. Two stochastic models were analysed and the predictive distributions of the future claims were obtained. The main advantage of the stochastic models is that they provide measures of the variability of the reserve estimates. The first model (PDM) combines the conjugate Dirichlet-Multinomial family with the Poisson distribution. The second model (NBDM) improves on the first by combining two conjugate families: Poisson-Gamma (for the distribution of the ultimate amounts) and Dirichlet-Multinomial (for the distribution of the incremental claims payments). The second model was found to express the variability of the speed of the reporting process and of the development of the claims severity as a function of two parameters of the above-mentioned distributions: the shape parameter of the Gamma distribution and the Dirichlet parameter. Depending on the relation between them, we can decide on the adequacy of the claims reserve estimation method. The parameters were estimated by the method of moments and by maximum likelihood. The results were tested using chosen simulation data and then using real data from three lines of business: Property/Casualty, General Liability, and Accident Insurance. These data include different developments and specificities. The outcome of the thesis shows that a Dirichlet parameter greater than the shape parameter of the Gamma results in a model with positive correlation between past and future claims payments, which suggests the Chain-Ladder method as appropriate for the claims reserve estimation. In terms of claims reserves, if the cumulated payments are high, the positive correlation implies high expectations for the future payments, resulting in high claims reserve estimates. The negative correlation appears when the Dirichlet parameter is lower than the shape parameter of the Gamma, meaning low expected future payments for the same high observed cumulated payments. This corresponds to the situation in which claims are reported rapidly and fewer claims remain expected subsequently. The extreme case appears when all claims are reported at the same time, leading to expectations for the future payments of zero or equal to the aggregated amount of the ultimate paid claims. For this latter case, the Chain-Ladder method is not recommended.
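As a reference point for the discussion above, here is a minimal sketch of the deterministic Chain-Ladder method whose adequacy the indicator is meant to assess; the run-off triangle is invented for illustration.

```python
# Minimal Chain-Ladder sketch on a hypothetical 3x3 cumulative run-off
# triangle (rows = accident years, columns = development years).
import numpy as np

tri = np.array([
    [100.0, 170.0, 190.0],
    [110.0, 185.0, np.nan],
    [120.0, np.nan, np.nan],
])

n_dev = tri.shape[1]
# Volume-weighted development factors f_j = sum(C[:, j+1]) / sum(C[:, j]),
# taken over the accident years observed in both columns.
factors = []
for j in range(n_dev - 1):
    mask = ~np.isnan(tri[:, j + 1])
    factors.append(tri[mask, j + 1].sum() / tri[mask, j].sum())

# Project each accident year to ultimate; the reserve is ultimate minus paid.
reserves = []
for row in tri:
    last = int(np.where(~np.isnan(row))[0].max())
    ultimate = row[last] * np.prod(factors[last:])
    reserves.append(ultimate - row[last])

print([round(f, 4) for f in factors])   # [1.6905, 1.1176]
print([round(r, 1) for r in reserves])  # [0.0, 21.8, 106.7]
```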

Relevance:

90.00%

Publisher:

Abstract:

Different microscopic models exhibiting self-organized criticality are studied numerically and analytically. Numerical simulations are performed to compute critical exponents, mainly the dynamical exponent, and to check universality classes. We find that various models lead to the same exponent, but this universality class is sensitive to disorder. From the dynamic microscopic rules we obtain continuum equations with different sources of noise, which we call internal and external. Different correlations of the noise give rise to different critical behavior. A model for external noise is proposed that makes the upper critical dimensionality equal to 4 and leads to the possible existence of a phase transition above d=4. Limitations of approximating these models by a simple nonlinear equation are discussed.
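The abstract does not name its microscopic models, but the canonical member of this class is the Bak-Tang-Wiesenfeld sandpile; the hedged sketch below shows the slow-driving and avalanche dynamics that generate self-organized criticality in such models.

```python
# Generic illustration (not the paper's specific models): the
# Bak-Tang-Wiesenfeld sandpile, a standard self-organized-critical model.
import numpy as np

rng = np.random.default_rng(1)
L, threshold = 32, 4
grid = np.zeros((L, L), dtype=int)

def relax(grid: np.ndarray) -> int:
    """Topple every supercritical site until the grid is stable;
    return the avalanche size (total number of topplings)."""
    topplings = 0
    while True:
        unstable = np.argwhere(grid >= threshold)
        if unstable.size == 0:
            return topplings
        for i, j in unstable:
            grid[i, j] -= 4
            topplings += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < L and 0 <= nj < L:  # grains fall off open edges
                    grid[ni, nj] += 1

sizes = []
for _ in range(2000):  # slow driving: add one grain, then relax fully
    i, j = rng.integers(0, L, size=2)
    grid[i, j] += 1
    sizes.append(relax(grid))

# Once the system self-organizes, avalanche sizes follow a power law.
print("mean avalanche size:", np.mean(sizes))
```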

Relevance:

90.00%

Publisher:

Abstract:

RATIONALE: Many sources of conflict exist in intensive care units (ICUs). Few studies recorded the prevalence, characteristics, and risk factors for conflicts in ICUs. OBJECTIVES: To record the prevalence, characteristics, and risk factors for conflicts in ICUs. METHODS: One-day cross-sectional survey of ICU clinicians. Data on perceived conflicts in the week before the survey day were obtained from 7,498 ICU staff members (323 ICUs in 24 countries). MEASUREMENTS AND MAIN RESULTS: Conflicts were perceived by 5,268 (71.6%) respondents. Nurse-physician conflicts were the most common (32.6%), followed by conflicts among nurses (27.3%) and staff-relative conflicts (26.6%). The most common conflict-causing behaviors were personal animosity, mistrust, and communication gaps. During end-of-life care, the main sources of perceived conflict were lack of psychological support, absence of staff meetings, and problems with the decision-making process. Conflicts perceived as severe were reported by 3,974 (53%) respondents. Job strain was significantly associated with perceiving conflicts and with greater severity of perceived conflicts. Multivariate analysis identified 15 factors associated with perceived conflicts, of which 6 were potential targets for future intervention: staff working more than 40 h/wk, more than 15 ICU beds, caring for dying patients or providing pre- and postmortem care within the last week, symptom control not ensured jointly by physicians and nurses, and no routine unit-level meetings. CONCLUSIONS: Over 70% of ICU workers reported perceived conflicts, which were often considered severe and were significantly associated with job strain. Workload, inadequate communication, and end-of-life care emerged as important potential targets for improvement.

Relevance:

90.00%

Publisher:

Abstract:

Numerous sources of evidence point to the fact that heterogeneity within the Earth's deep crystalline crust is complex and hence may be best described through stochastic rather than deterministic approaches. As seismic reflection imaging arguably offers the best means of sampling deep crustal rocks in situ, much interest has been expressed in using such data to characterize the stochastic nature of crustal heterogeneity. Previous work on this problem has shown that the spatial statistics of seismic reflection data are indeed related to those of the underlying heterogeneous seismic velocity distribution. As of yet, however, the nature of this relationship has remained elusive due to the fact that most of the work was either strictly empirical or based on incorrect methodological approaches. Here, we introduce a conceptual model, based on the assumption of weak scattering, that allows us to quantitatively link the second-order statistics of a 2-D seismic velocity distribution with those of the corresponding processed and depth-migrated seismic reflection image. We then perform a sensitivity study in order to investigate what information regarding the stochastic model parameters describing crustal velocity heterogeneity might potentially be recovered from the statistics of a seismic reflection image using this model. Finally, we present a Monte Carlo inversion strategy to estimate these parameters and we show examples of its application at two different source frequencies and using two different sets of prior information. Our results indicate that the inverse problem is inherently non-unique and that many different combinations of the vertical and lateral correlation lengths describing the velocity heterogeneity can yield seismic images with the same 2-D autocorrelation structure. The ratio of all of these possible combinations of vertical and lateral correlation lengths, however, remains roughly constant which indicates that, without additional prior information, the aspect ratio is the only parameter describing the stochastic seismic velocity structure that can be reliably recovered.
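The image statistic at the heart of this approach, the 2-D autocorrelation of the migrated section, can be computed with a few FFT calls; the sketch below uses a random placeholder image in place of real processed reflection data.

```python
# Hedged sketch: 2-D autocorrelation of a (synthetic) depth-migrated image
# via the Wiener-Khinchin relation. The random array stands in for a real
# processed seismic reflection section.
import numpy as np

rng = np.random.default_rng(42)
image = rng.standard_normal((256, 256))  # placeholder seismic image

def autocorrelation_2d(img: np.ndarray) -> np.ndarray:
    """Normalized 2-D autocorrelation, zero lag at the array centre."""
    img = img - img.mean()
    power = np.abs(np.fft.fft2(img)) ** 2  # power spectrum
    acf = np.real(np.fft.ifft2(power))     # Wiener-Khinchin theorem
    acf /= acf[0, 0]                       # normalize zero lag to 1
    return np.fft.fftshift(acf)

acf = autocorrelation_2d(image)
# Vertical and lateral correlation lengths (whose ratio, the aspect ratio,
# is the recoverable parameter) would be read off the decay of acf along
# the axes through the centre.
print(acf.shape, acf[128, 128])  # zero-lag value = 1.0
```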

Relevance:

90.00%

Publisher:

Abstract:

Investigation of violent deaths, especially cases of sharp trauma and gunshot, is an important part of medico-legal work. Besides conventional autopsy, post-mortem Multi-Detector Computed Tomography (MDCT) scanning has become a highly appreciated tool. In order to investigate the vascular system as well, post-mortem CT angiography has been introduced. The most studied and widespread technique is Multi-phase post-mortem CT angiography (MPMCTA). Its sensitivity in detecting vascular lesions is even superior to that of conventional autopsy. The application of MPMCTA to cases of gunshot and sharp trauma is therefore an obvious choice, as vascular lesions are common in such victims. In most cases of sharp trauma and in several cases of gunshot, death can be attributed to exsanguination. MPMCTA is able to detect the exact source of bleeding and also to visualize trajectories, which are of utmost importance in these cases. The reconstructed images clearly visualize the trajectory in a way that is easily comprehensible for legal professionals without medical training. The sensitivity of MPMCTA for soft tissue and organ lesions approximately matches that of conventional autopsy. However, special care, experience and effective use of the imaging software are necessary for performing the reconstructions of the trajectory. Large, volume-consuming haemorrhages and shifts of inner organs are sources of error and misinterpretation. This presentation gives an overview of the advantages and limitations of the use of MPMCTA for investigating cases of gunshot and sharp trauma.

Relevance:

90.00%

Publisher:

Abstract:

Drilled shafts have been used in the US for more than 100 years in bridges and buildings as a deep foundation alternative. For many of these applications, the drilled shafts were designed using the Working Stress Design (WSD) approach. Even though WSD has been used successfully in the past, a move toward Load and Resistance Factor Design (LRFD) for foundation applications began when the Federal Highway Administration (FHWA) issued a policy memorandum on June 28, 2000. The policy memorandum requires all new bridges initiated after October 1, 2007, to be designed according to the LRFD approach. This ensures compatibility between the superstructure and substructure designs, and provides a means of consistently incorporating sources of uncertainty into each load and resistance component. Regionally calibrated LRFD resistance factors are permitted by the American Association of State Highway and Transportation Officials (AASHTO) to improve the economy and competitiveness of drilled shafts. To achieve this goal, a database for Drilled SHAft Foundation Testing (DSHAFT) has been developed. DSHAFT is aimed at assimilating high-quality drilled shaft test data from Iowa and the surrounding regions and at identifying the need for further tests in suitable soil profiles. This report introduces DSHAFT and demonstrates its features and capabilities, such as an easy-to-use storage and sharing tool for providing access to key information (e.g., soil classification details and cross-hole sonic logging reports). DSHAFT embodies a model for effective, regional LRFD calibration procedures consistent with the PIle LOad Test (PILOT) database, which contains driven pile load tests accumulated from the state of Iowa. PILOT is now available for broader use at the project website: http://srg.cce.iastate.edu/lrfd/. DSHAFT, available in electronic form at http://srg.cce.iastate.edu/dshaft/, currently comprises 32 separate load tests provided by the Illinois, Iowa, Minnesota, Missouri and Nebraska state departments of transportation and/or departments of roads. In addition to serving as a manual for DSHAFT and providing a summary of the available data, this report provides a preliminary analysis of the load test data from Iowa and will open up opportunities for others to share their data through this quality-assured process, thereby providing a platform to improve the LRFD approach to drilled shafts, especially in the Midwest region.
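For orientation, the sketch below shows the basic LRFD inequality that calibrated resistance factors enter; all numbers are invented for illustration and are not values from DSHAFT, PILOT, or AASHTO.

```python
# Hedged sketch of the basic LRFD check that regionally calibrated
# resistance factors feed into: factored resistance must cover factored
# loads, phi * R_n >= gamma_D * Q_D + gamma_L * Q_L. All values invented.

phi = 0.55                     # resistance factor (what calibration refines)
R_n = 4000.0                   # nominal shaft resistance from design [kN]
gamma_D, Q_D = 1.25, 1200.0    # dead-load factor and load effect [kN]
gamma_L, Q_L = 1.75, 500.0     # live-load factor and load effect [kN]

factored_resistance = phi * R_n                # 2200 kN
factored_load = gamma_D * Q_D + gamma_L * Q_L  # 1500 + 875 = 2375 kN
print("design OK" if factored_resistance >= factored_load else "design fails")
# A larger, regionally calibrated phi (say 0.60 -> 2400 kN) would flip this
# check, which is why calibration against load-test databases matters.
```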

Relevance:

90.00%

Publisher:

Abstract:

Didactic knowledge of content is constructed through an idiosyncratic synthesis of knowledge of the subject area, knowledge of the students, general pedagogical knowledge and the teacher's biography. This study aimed to understand the construction process and the sources of Pedagogical Content Knowledge, as well as to analyze its manifestations and variations in the interactive teaching of teachers whom students considered competent. Data were collected from teachers of an undergraduate nursing program in the South of Brazil, through non-participant observation and semi-structured interviews, and analyzed by the constant comparison method. The results disclose the need for initial nursing education to cover pedagogical aspects; to treat permanent education as fundamental, in view of the complexity of content and teaching; and to use mentoring/monitoring and to value learning with experienced teachers, with a view to developing quality teaching.

Relevance:

90.00%

Publisher:

Abstract:

It is intuitively obvious that snow or ice on a road surface will make that surface more slippery and thus more hazardous. However, quantifying this slipperiness by measuring the friction between the road surface and a vehicle is rather difficult. If such friction readings could be made easily, they might provide a means to control winter maintenance activities more efficiently than at present. This study is a preliminary examination of the possibility of using friction as an operational tool in winter maintenance. In particular, the relationship of friction to traffic volume and speed and to accident rates is examined, and the current lack of knowledge in this area is outlined. The state of the art of friction measuring techniques is reviewed. A series of experiments is proposed to build greater knowledge of how friction deteriorates during a storm and is restored by treatment. The relationship between plowing forces and the ice-pavement bond strength is discussed. The challenge of integrating all these potential sources of information into a useful final product is presented, together with a potential approach. A preliminary cost-benefit analysis of friction measuring devices suggests that considerable savings might be realized if certain assumptions hold true. The steps required to bring friction from its current state as a research tool to full deployment as an operational tool are presented and discussed. While much remains to be done in this regard, it is apparent that friction could be an extremely effective operational tool in the winter maintenance activities of the future.

Relevance:

90.00%

Publisher:

Abstract:

A new instrument for analyzing nurses' satisfaction has been developed and validated in two types of units, general and intensive care, at the University Hospital of the Canton of Vaud, Switzerland. A questionnaire was elaborated to identify the variables linked to the characteristics of nurses' work, as well as the personal variables of the employee, that could influence the level of satisfaction. By identifying the sources of satisfaction and dissatisfaction, it was possible to propose recommendations and corrective measures to improve the overall satisfaction of the nursing team.