883 results for Critical Media Studies
Mechanisms underlying cytotoxicity induced by engineered nanomaterials: a review of in vitro studies
Abstract:
Engineered nanomaterials are emerging functional materials with technologically interesting properties and a wide range of promising applications, such as drug delivery devices, medical imaging and diagnostics, and various other industrial products. However, concerns have been raised about the risks of such materials and whether they can cause adverse effects. Studies of the potential hazards of nanomaterials have been widely performed using cell models and a range of in vitro approaches. In the present review, we provide a comprehensive and critical overview of the literature on current in vitro toxicity test methods that have been applied to determine the mechanisms underlying the cytotoxic effects induced by nanostructures. The small size, surface charge, hydrophobicity and high adsorption capacity of nanomaterials allow for specific interactions with the cell membrane and subcellular organelles, which in turn can lead to cytotoxicity through a range of different mechanisms. Finally, relating the available information on nanomaterial cytotoxic responses to an understanding of their structure and physicochemical properties may promote the design of biologically safe nanostructures.
Abstract:
This thesis examines how portable software can be written for the Symbian operating system. It reviews methods that make it easier to port software to a new platform. A new smartphone may introduce many new components: the device may change at the chip level, a new version of the operating system may appear, and a new version of the software being ported may be released. All of these affect the portability of the software. In this work, a Java API was ported to a new platform. During the process, important factors affecting software portability were identified. Portability as such should be taken into account at every stage of the software process. New smartphone models are released continuously, which makes portability a very important factor in software design. A well-designed program is easier to maintain, update and port later.
Abstract:
The application of contrast media in post-mortem radiology differs from clinical approaches in living patients. Post-mortem changes in the vascular system and the absence of blood flow lead to specific problems that have to be considered in the performance of post-mortem angiography. In addition, interpreting the images is challenging because of technique-related and post-mortem artefacts that must be recognised and that are specific to each applied technique. Although the idea of injecting contrast media is old, classic methods are not simply transferable to modern radiological techniques in forensic medicine, as they are mostly dedicated to single-organ studies or applicable only shortly after death. With the introduction of modern imaging techniques, such as post-mortem computed tomography (PMCT) and post-mortem magnetic resonance (PMMR), to forensic death investigations, intensive research started to explore their advantages and limitations compared with conventional autopsy. PMCT has already become a routine investigation in several centres, and different techniques have been developed to better visualise the vascular system and organ parenchyma in PMCT. In contrast, the use of PMMR is still limited owing to practical issues, and research is now starting in the field of PMMR angiography. This article gives an overview of the problems in post-mortem contrast media application, the various classic and modern techniques, and the issues to consider when using different media.
Abstract:
Background: Ethical conflicts are arising as a result of the growing complexity of clinical care, coupled with technological advances. Most studies that have developed instruments for measuring ethical conflict base their measures on the variables "frequency" and "degree of conflict". In our view, however, these variables are insufficient for explaining the root of ethical conflicts. Consequently, the present study formulates a conceptual model that also includes the variable "exposure to conflict", as well as considering six "types of ethical conflict". An instrument was then designed to measure the ethical conflicts experienced by nurses who work with critical care patients. The paper describes the development process and validation of this instrument, the Ethical Conflict in Nursing Questionnaire Critical Care Version (ECNQ-CCV). Methods: The sample comprised 205 nursing professionals from the critical care units of two hospitals in Barcelona (Spain). The ECNQ-CCV presents 19 nursing scenarios with the potential to produce ethical conflict in the critical care setting. Exposure to ethical conflict was assessed by means of the Index of Exposure to Ethical Conflict (IEEC), a specific index developed to provide a reference value for each respondent by combining the intensity and frequency of occurrence of each scenario featured in the ECNQ-CCV. Following content validity, construct validity was assessed by means of Exploratory Factor Analysis (EFA), while Cronbach's alpha was used to evaluate the instrument's reliability. All analyses were performed using the statistical software PASW v19. Results: Cronbach's alpha for the ECNQ-CCV as a whole was 0.882, which is higher than the values reported for certain other related instruments. The EFA suggested a unidimensional structure, with one component accounting for 33.41% of the explained variance. Conclusions: The ECNQ-CCV is shown to be a valid and reliable instrument for use in critical care units. Its structure is such that the four variables on which our model of ethical conflict is based may be studied separately or in combination. The critical care nurses in this sample present moderate levels of exposure to ethical conflict. This study represents the first evaluation of the ECNQ-CCV.
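As an illustration of how an exposure index of this kind and the reported reliability statistic can be computed, the Python sketch below combines intensity and frequency ratings per scenario and reports Cronbach's alpha. The scoring rule (sum of intensity x frequency) and all data are assumptions for illustration, not the authors' exact IEEC formula or dataset.

import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def exposure_index(intensity: np.ndarray, frequency: np.ndarray) -> np.ndarray:
    """Illustrative exposure index: per-respondent sum of intensity x frequency
    over the 19 scenarios (the published IEEC weighting may differ)."""
    return (intensity * frequency).sum(axis=1)

# Hypothetical ratings for 205 respondents x 19 scenarios (synthetic data)
rng = np.random.default_rng(0)
intensity = rng.integers(0, 5, size=(205, 19))
frequency = rng.integers(0, 5, size=(205, 19))

scores = intensity * frequency
print("Cronbach's alpha:", round(cronbach_alpha(scores), 3))
print("IEEC range:", exposure_index(intensity, frequency).min(), "-",
      exposure_index(intensity, frequency).max())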
Abstract:
OBJECTIVES: Randomized clinical trials that enroll patients in critical or emergency care (acute care) setting are challenging because of narrow time windows for recruitment and the inability of many patients to provide informed consent. To assess the extent that recruitment challenges lead to randomized clinical trial discontinuation, we compared the discontinuation of acute care and nonacute care randomized clinical trials. DESIGN: Retrospective cohort of 894 randomized clinical trials approved by six institutional review boards in Switzerland, Germany, and Canada between 2000 and 2003. SETTING: Randomized clinical trials involving patients in an acute or nonacute care setting. SUBJECTS AND INTERVENTIONS: We recorded trial characteristics, self-reported trial discontinuation, and self-reported reasons for discontinuation from protocols, corresponding publications, institutional review board files, and a survey of investigators. MEASUREMENTS AND MAIN RESULTS: Of 894 randomized clinical trials, 64 (7%) were acute care randomized clinical trials (29 critical care and 35 emergency care). Compared with the 830 nonacute care randomized clinical trials, acute care randomized clinical trials were more frequently discontinued (28 of 64, 44% vs 221 of 830, 27%; p = 0.004). Slow recruitment was the most frequent reason for discontinuation, both in acute care (13 of 64, 20%) and in nonacute care randomized clinical trials (7 of 64, 11%). Logistic regression analyses suggested the acute care setting as an independent risk factor for randomized clinical trial discontinuation specifically as a result of slow recruitment (odds ratio, 4.00; 95% CI, 1.72-9.31) after adjusting for other established risk factors, including nonindustry sponsorship and small sample size. CONCLUSIONS: Acute care randomized clinical trials are more vulnerable to premature discontinuation than nonacute care randomized clinical trials and have an approximately four-fold higher risk of discontinuation due to slow recruitment. These results highlight the need for strategies to reliably prevent and resolve slow patient recruitment in randomized clinical trials conducted in the critical and emergency care setting.
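A minimal sketch of the kind of adjusted analysis reported above: a logistic regression of discontinuation on an acute-care indicator plus covariates, with the odds ratio and 95% CI recovered from the fitted coefficient. The data and variable names are synthetic placeholders, not the study's dataset.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 894
# Synthetic covariates loosely mirroring the adjustment set described above
acute_care = rng.binomial(1, 64 / 894, n)      # acute vs nonacute care setting
industry = rng.binomial(1, 0.5, n)             # industry sponsorship
small_sample = rng.binomial(1, 0.4, n)         # small planned sample size
logit = -1.0 + 1.4 * acute_care - 0.5 * industry + 0.6 * small_sample
discontinued = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([acute_care, industry, small_sample]))
fit = sm.Logit(discontinued, X).fit(disp=False)

or_acute = np.exp(fit.params[1])               # adjusted odds ratio for acute care
ci_low, ci_high = np.exp(fit.conf_int()[1])
print(f"Adjusted OR (acute care): {or_acute:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")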
Abstract:
ISSUES: Previous reviews of the association between the density of alcohol outlets and harm have covered studies published up to December 2008. Since then the number of publications has increased dramatically. This review covers the more recent studies with regard to their utility to inform policy. APPROACH: A systematic review found more than 160 relevant studies (published between January 2009 and October 2014). The review focused on: (i) outlet density and assaultive or intimate partner violence; (ii) studies including individual-level data; or (iii) 'natural experiments'. KEY FINDINGS: Despite overall evidence for an association between density and harm, there is little evidence on causal direction (i.e. whether demand leads to more supply or increased availability increases alcohol use and harm). When outlet types (e.g. bars, supermarkets) are analysed separately, studies are too methodologically diverse and partly contradictory to permit firm conclusions beyond those pertaining to high outlet densities in areas such as entertainment districts. Outlet density commonly had little effect on individual-level alcohol use, and the few 'natural experiments' on restricting densities showed little or no effect. IMPLICATIONS AND CONCLUSIONS: Although outlet densities are likely to be positively related to alcohol use and harm, few policy recommendations can be given, as effects vary across study areas, outlet types and outlet cluster size. Future studies should examine outlet types in detail, compare different outcomes with different strengths of association with alcohol, analyse non-linear effects and compare different methodologies. Purely aggregate-level studies examining total outlet density only should be abandoned. [Gmel G, Holmes J, Studer J. Are alcohol outlet densities strongly associated with alcohol-related outcomes? A critical review of recent evidence. Drug Alcohol Rev 2015].
Abstract:
Study design: A retrospective study of image-guided cervical implant placement precision. Objective: To describe a simple and precise classification of cervical critical screw placement. Summary of Background Data: "Critical" screw placement is defined as implant insertion into a bone corridor which is surrounded circumferentially by neurovascular structures. While the use of image guidance has improved accuracy, there is currently no classification which provides sufficient precision to assess the navigation success of critical cervical screw placement. Methods: Based on postoperative clinical evaluation and CT imaging, the orthogonal view evaluation method (OVEM) is used to classify screw accuracy into grade I (no cortical breach), grade Ia (screw thread cortical breach), grade II (internal diameter cortical breach) and grade III (major cortical breach causing neural or vascular injury). Grades II and III are considered to be navigation failures, after accounting for bone corridor / screw mismatch (minimal diameter of the targeted bone corridor being smaller than the outer screw diameter). Results: A total of 276 screws from 91 patients were classified into grade I (64.9%), grade Ia (18.1%), and grade II (17.0%). No grade III screw was observed. The overall rate of navigation failure was 13%. Multiple logistic regression indicated that navigational failure was significantly associated with the level of instrumentation and the navigation system used. Navigational failure was rare (1.6%) when the margin around the screw in the bone corridor was larger than 1.5 mm. Conclusions: OVEM evaluation appears to be a useful tool to assess the precision of critical screw placement in the cervical spine. The OVEM validity and reliability need to be addressed. Further correlation with clinical outcomes will be addressed in future studies.
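The grading and failure logic described in this abstract can be expressed as a simple decision rule. The Python sketch below is an illustrative paraphrase only: the measurement fields, thresholds and the mismatch test are assumptions based on the abstract, not the published OVEM criteria.

from dataclasses import dataclass

@dataclass
class ScrewPlacement:
    breach_mm: float          # radial cortical breach measured on orthogonal CT views
    thread_depth_mm: float    # depth of the screw thread
    corridor_min_mm: float    # minimal diameter of the targeted bone corridor
    screw_outer_mm: float     # outer screw diameter
    neurovascular_injury: bool

def ovem_grade(s: ScrewPlacement) -> str:
    """Illustrative OVEM-style grading based on the abstract's definitions."""
    if s.neurovascular_injury:
        return "III"                        # major breach with neural/vascular injury
    if s.breach_mm <= 0:
        return "I"                          # no cortical breach
    if s.breach_mm <= s.thread_depth_mm:
        return "Ia"                         # breach limited to the screw thread
    return "II"                             # breach reaching the internal diameter

def navigation_failure(s: ScrewPlacement) -> bool:
    """Grades II/III count as navigation failures unless the corridor is simply
    too small for the screw (bone corridor / screw mismatch)."""
    mismatch = s.corridor_min_mm < s.screw_outer_mm
    return ovem_grade(s) in ("II", "III") and not mismatch

example = ScrewPlacement(breach_mm=0.8, thread_depth_mm=1.0,
                         corridor_min_mm=4.5, screw_outer_mm=3.5,
                         neurovascular_injury=False)
print(ovem_grade(example), navigation_failure(example))   # -> Ia False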
Abstract:
Illicit drug cutting is a complex problem that requires sharing knowledge across addiction studies, toxicology, criminology and criminalistics. As a result, cutting is not well understood by the forensic community. This review therefore aims to decipher the different aspects of cutting by gathering information mainly from criminology and criminalistics. It deals essentially with the specificities of cocaine and heroin cutting. The article presents the detected cutting agents (adulterants and diluents), their evolution in time and space, and the analytical methodology implemented by forensic laboratories. Furthermore, it discusses when, in the history of an illicit drug, cutting may take place. Studies examining how much cutting occurs in the country of destination are also analysed. Lastly, the reasons for cutting are addressed. According to the literature, adulterants are added during production of the illicit drug or at a relatively high level of its distribution chain (e.g. before the product arrives in the country of destination or just after its importation). Their addition seems hardly justified solely by the desire to increase profits or to harm consumers' health. Instead, adulteration appears to be performed to enhance or mimic the illicit drug's effects or to facilitate administration of the drug. Nowadays, caffeine, diltiazem, hydroxyzine, levamisole, lidocaine and phenacetin are frequently detected in cocaine specimens, while paracetamol and caffeine are almost exclusively identified in heroin specimens. This may reveal differences in the respective structures of production and/or distribution of cocaine and heroin. As the relevant information about cutting is spread across different scientific fields, close collaboration should be set up to collect essential and unified data to improve knowledge and provide information for monitoring, control and harm reduction purposes. More research, across several areas of investigation, should be carried out to gather relevant information.
Abstract:
The transport of macromolecules, such as low-density lipoprotein (LDL), and their accumulation in the layers of the arterial wall play a critical role in the initiation and development of atherosclerosis. Atherosclerosis is a disease of large arteries, e.g. the aorta, coronary, carotid, and other proximal arteries, that involves a distinctive accumulation of LDL and other lipid-bearing materials in the arterial wall. Over time, plaque hardens and narrows the arteries, reducing the flow of oxygen-rich blood to organs and other parts of the body. This can lead to serious problems, including heart attack, stroke, or even death. It has been shown that the accumulation of macromolecules in the arterial wall depends not only on the ease with which materials enter the wall, but also on the hindrance to the passage of materials out of the wall posed by underlying layers. Attention has therefore been drawn to the fact that the wall structure of large arteries differs from that of other, disease-resistant vessels. Atherosclerosis tends to be localized in regions of curvature and branching in arteries, where fluid shear stress (shear rate) and other fluid mechanical characteristics deviate from their normal spatial and temporal distribution patterns in straight vessels. On the other hand, the smooth muscle cells (SMCs) residing in the media layer of the arterial wall respond to mechanical stimuli such as shear stress. Shear stress may affect SMC proliferation and migration from the media layer to the intima; this occurs in atherosclerosis and intimal hyperplasia. The study of the flow of blood and other body fluids and of heat transport through the arterial wall is one of the advanced applications of porous media in recent years. The arterial wall may be modeled on both the macroscopic scale (as a continuous porous medium) and the microscopic scale (as a heterogeneous porous medium). In the present study, the governing equations of mass, heat and momentum transport have been solved for different species and the interstitial fluid within the arterial wall by means of computational fluid dynamics (CFD). The simulation models are based on the finite element (FE) and finite volume (FV) methods. The wall structure has been modeled by treating the wall layers as porous media with different properties. In order to study heat transport through human tissue, simulations have been carried out for a non-homogeneous porous media model. The tissue is composed of blood vessels, cells, and an interstitium; the interstitium consists of interstitial fluid and extracellular fibers. Numerical simulations are performed in a two-dimensional (2D) model to assess the effect of the shape and configuration of the discrete phase on the convective and conductive features of heat transfer, e.g. in the interstitium of biological tissues. In addition, the governing equations of momentum and mass transport have been solved in a heterogeneous porous media model of the media layer, which plays a major role in the transport and accumulation of solutes across the arterial wall. The transport of adenosine 5'-triphosphate (ATP) across the media layer is simulated as a benchmark to observe how SMCs affect species mass transport. The transport of interstitial fluid has also been simulated while the deformation of the media layer (due to high blood pressure) and of its constituents, such as SMCs, is included in the model.
In this context, the effect of pressure variation on the shear stress induced over SMCs by the interstitial flow is investigated in both 2D and three-dimensional (3D) geometries of the media layer. The influence of hypertension (high pressure) on the transport of low-density lipoprotein (LDL) through deformable arterial wall layers is also studied, owing to the pressure-driven convective flow across the arterial wall. The intima and media layers are assumed to be homogeneous porous media. The results of the present study reveal that the ATP concentration over the surface of SMCs and within the bulk of the media layer is significantly dependent on the distribution of cells. Moreover, the shear stress magnitude and distribution over the SMC surface are affected by the transmural pressure and the deformation of the media layer of the aorta wall. This work shows that the second or even subsequent layers of SMCs may bear shear stresses of the same order of magnitude as the first layer does if the cells are arranged in an arbitrary manner. The study brings new insights into the simulation of the arterial wall, as simplifications made in previous work have been avoided. The SMC configurations used here, with elliptic cell cross sections, closely resemble the physiological arrangement of cells. Moreover, the deformation of SMCs under high transmural pressure, which follows the compaction of the media layer, has been studied for the first time. The results also demonstrate that the LDL concentration through the intima and media layers changes significantly as the wall layers compress under transmural pressure. It was further noticed that the fraction of leaky junctions across the endothelial cells and the area fraction of fenestral pores over the internal elastic lamina dramatically affect the LDL distribution through the thoracic aorta wall. The simulation techniques introduced in this work can also trigger new ideas for simulating porous media in other biomedical, biomechanical, chemical, and environmental engineering applications.
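For orientation, the transport framework described in this abstract is commonly written as a Darcy/Brinkman-type momentum balance for the interstitial plasma coupled to a convection-diffusion-reaction equation for the solute (LDL or ATP). The equations below give this standard porous-media form as an illustration of the modelling approach, not necessarily the exact set of equations solved in the thesis.

\begin{align}
  \mu_{\mathrm{eff}}\,\nabla^{2}\mathbf{u} \;-\; \frac{\mu}{K}\,\mathbf{u} \;-\; \nabla p &= 0,
  \qquad \nabla \cdot \mathbf{u} = 0,\\
  \frac{\partial c}{\partial t} + \nabla \cdot (\mathbf{u}\, c)
  &= \nabla \cdot \left( D_{\mathrm{eff}}\, \nabla c \right) \;-\; k_{r}\, c,
\end{align}

where $\mathbf{u}$ is the interstitial (filtration) velocity, $p$ the pressure, $K$ the Darcy permeability of the layer, $D_{\mathrm{eff}}$ the effective solute diffusivity in the porous layer, and $k_{r}$ a first-order uptake/reaction coefficient (e.g. ATP hydrolysis at the SMC surface or LDL degradation). Layer-specific values of $K$, $D_{\mathrm{eff}}$ and $k_{r}$ are assumed inputs.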
Abstract:
This thesis focuses on fibre coalescers whose efficiency is based on their surface properties and characteristics: they have the ability to preferentially wet or interact with one or more of the fluids to be separated. The interfacial phenomena governing the separation efficiency of the coalescers are therefore investigated as a function of physical factors such as flowrates, phase ratios and coalescer packing density. Process equipment to produce and separate the emulsions was designed. Experiments were carried out to test the separation efficiency of the coalescing media, namely fibreglass, polyester I and polyester II, and the performance of the coalescing media was assessed via droplet size information. In conclusion, the objectives (design of process equipment and experimentation) were achieved. Fibreglass was the best coalescing medium, followed by polyester I and finally polyester II. Droplet sizes increased with decreased flowrates and increased packing density of the coalescer. The phase ratio had an effect on the droplet sizes of the feed but no effect on the coalescence of the feed droplets.
Abstract:
Rosin is a natural product from pine forests and is used as a raw material in resinate syntheses. Resinates are polyvalent metal salts of rosin acids, and especially Ca and Ca/Mg resinates find wide application in the printing ink industry. In this thesis, analytical methods were applied to increase general knowledge of resinate chemistry, and the reaction kinetics was studied in order to model the non-linear increase in solution viscosity during resinate syntheses by the fusion method. Solution viscosity in toluene is an important quality factor for resinates to be used in printing inks. The concept of a critical resinate concentration, c_crit, was introduced to define an abrupt change in the dependence of viscosity on resinate concentration in solution. The concept was then used to explain the non-linear increase in solution viscosity during resinate syntheses. A semi-empirical model with two estimated parameters was derived for the viscosity increase on the basis of apparent reaction kinetics. The model was used to control the viscosity and to predict the total reaction time of the resinate process. The kinetic data from the complex reaction media were obtained by acid value titration and by FTIR spectroscopic analyses using a conventional calibration method to measure the resinate concentration and the concentration of free rosin acids. A multivariate calibration method was successfully applied to build partial least squares (PLS) models for monitoring acid value and solution viscosity in both the mid-infrared (MIR) and near-infrared (NIR) regions during the syntheses. The calibration models can be used for on-line monitoring of the resinate process. In the kinetic studies, two main reaction steps were observed during the syntheses. First, a fast irreversible resination reaction occurs at 235 °C, and then a slow thermal decarboxylation of rosin acids starts to take place at 265 °C. Rosin oil is formed during the decarboxylation step, causing significant mass loss as the rosin oil evaporates from the system while the viscosity increases to the target level. The mass balance of the syntheses was determined from the increase in resinate concentration during the decarboxylation step. A mechanistic study of the decarboxylation reaction was based on the observation that resinate molecules are partly solvated by rosin acids during the syntheses, and different decarboxylation mechanisms were proposed for the free and the solvating rosin acids. The deduced kinetic model supported the analytical data of the syntheses over a wide resinate concentration region, a wide range of viscosity values and different reaction temperatures. In addition, application of the kinetic model to the modified resinate syntheses gave a good fit. A novel synthesis method, with addition of decarboxylated rosin (i.e. rosin oil) to the reaction mixture, was introduced. The conversion of rosin acid to resinate was increased to the level necessary to obtain the target viscosity for the product at 235 °C. Owing to the lower reaction temperature compared with traditional fusion synthesis at 265 °C, thermal decarboxylation is avoided. As a consequence, the mass yield of the resinate syntheses can be increased from ca. 70% to almost 100% by recycling the added rosin oil.
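A minimal sketch of the kind of multivariate calibration described above, using scikit-learn's PLSRegression to relate FTIR spectra to acid value. The spectra and the acid-value dependence here are synthetic placeholders, and the number of latent variables is an assumption, not the value used in the thesis.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Synthetic MIR/NIR-like spectra: 120 synthesis samples x 600 wavenumber channels
X = rng.normal(size=(120, 600)).cumsum(axis=1)               # smooth, correlated "spectra"
acid_value = 180 - 0.15 * X[:, 250] + rng.normal(0, 2, 120)  # hypothetical dependence

X_tr, X_te, y_tr, y_te = train_test_split(X, acid_value, random_state=0)
pls = PLSRegression(n_components=5)                          # latent variables: an assumption
pls.fit(X_tr, y_tr)
print("R^2 on held-out spectra:", round(pls.score(X_te, y_te), 3))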
Abstract:
Streaming potential measurements were used for the surface charge characterisation of different filter media types and materials. The equipment was developed further so that measurements could be taken along the surfaces and so that tubular membranes could also be measured. The streaming potential proved to be a very useful tool in the charge analysis of both clean and fouled filter media. Adsorption and fouling could be studied, as could flux, as functions of time. A module to determine the membrane potential was also constructed. The results collected from the experiments conducted with these devices were used in the study of the theory of the streaming potential as an electrokinetic phenomenon. Several correction factors, derived to take into account the surface conductance and the electrokinetic flow in very narrow capillaries, were tested in practice. The surface materials were studied using FTIR and the results compared with those from the streaming potentials. FTIR analysis was also found to be a useful tool in the characterisation of filters, as well as in the fouling studies. By examining spectra recorded from different depths in a sample, it was possible to determine the adsorption sites. The influence of an external electric field on the cross-flow microfiltration of a binary protein system was investigated using a membrane electrofiltration apparatus. The results showed that a significant improvement could be achieved in membrane filtration by using the measured electrochemical properties to help adjust the process conditions.
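For reference, zeta potentials are commonly derived from streaming potential data via the Helmholtz-Smoluchowski relation. The sketch below applies the classical uncorrected form (the surface-conductance and narrow-capillary corrections mentioned above would modify it) with illustrative property values, not measurements from this thesis.

# Helmholtz-Smoluchowski: zeta = (dU/dP) * eta * kappa / (eps_r * eps_0)
EPSILON_0 = 8.854e-12      # vacuum permittivity, F/m

def zeta_helmholtz_smoluchowski(dU_dP, viscosity, conductivity, eps_r=78.5):
    """Zeta potential (V) from the streaming potential coefficient dU/dP (V/Pa),
    using the classical Helmholtz-Smoluchowski equation without surface-conductance
    or narrow-capillary corrections."""
    return dU_dP * viscosity * conductivity / (eps_r * EPSILON_0)

# Illustrative values: 10 mV/bar slope, water at 25 C, dilute KCl background electrolyte
dU_dP = 10e-3 / 1e5        # V/Pa
zeta = zeta_helmholtz_smoluchowski(dU_dP, viscosity=0.89e-3, conductivity=0.015)
print(f"zeta = {zeta * 1e3:.1f} mV")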
Abstract:
This thesis includes several thermal hydraulic analyses related to the Loviisa VVER 440 nuclear power plant units. The work consists of experimental studies, analysis of the experiments, analysis of selected plant transients, and development of a model for calculating boric acid concentrations in the reactor. In the first part of the thesis, concerning boric acid solution behaviour during the long-term cooling period of LOCAs, experiments were performed in scaled-down test facilities. The experimental data, together with the results of RELAP5/MOD3 simulations, were used to develop a model for calculating boric acid concentrations in the reactor during LOCAs. The calculations showed that the margins to the critical concentrations that would lead to boric acid crystallization were large, both in the reactor core and in the lower plenum. This was mainly because the water in the primary cooling circuit contains borax (Na2B4O7·10H2O), which enters the reactor when ECC water is taken from the sump and greatly increases the solubility of boric acid in water. In the second part, concerning the simulation of horizontal steam generators, experiments were performed with the PACTEL integral test loop to simulate loss-of-feedwater transients. The PACTEL experiments, as well as earlier REWET III natural circulation tests, were analysed with the RELAP5/MOD3 Version 5m5 code. The analysis showed that the code was capable of simulating the main events during the experiments. However, in the case of loss of secondary-side feedwater, the code was not fully capable of simulating steam superheating on the secondary side of the steam generators. The third part of the work consists of simulations of Loviisa VVER reactor pump trip transients with the RELAP5/MOD1 Eur, RELAP5/MOD3 and CATHARE codes. All three codes were capable of simulating the two selected pump trip transients, and no significant differences were found between the results of the different codes. Comparison of the calculated results with the data measured at the Loviisa plant also showed good agreement.
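As a toy illustration of the kind of boric acid concentration tracking described above (not the plant-specific model developed in the thesis), the sketch below integrates a single-volume core mass balance in which boil-off removes pure steam while ECC injection from the sump adds borated water, so boric acid accumulates in the core liquid. All numbers are illustrative placeholders, not Loviisa design data.

import numpy as np

def boric_acid_concentration(t_end_s, m_water_kg, w_inj_kg_s, c_inj_kg_kg,
                             c0_kg_kg, dt=1.0):
    """Toy single-volume mass balance: boil-off removes pure steam at the same
    rate as ECC injection adds borated water, so the liquid mass stays constant
    and boric acid accumulates. Returns time (s) and concentration (kg/kg)."""
    times = np.arange(0.0, t_end_s, dt)
    conc = np.empty_like(times)
    m_boric = c0_kg_kg * m_water_kg
    for i, _ in enumerate(times):
        conc[i] = m_boric / m_water_kg             # steam leaves the boron behind
        m_boric += w_inj_kg_s * c_inj_kg_kg * dt   # boric acid carried in by ECC water
    return times, conc

# Illustrative inputs: 20 t of core liquid, 5 kg/s injection at 1.3 wt-% boric acid
t, c = boric_acid_concentration(t_end_s=6 * 3600, m_water_kg=2.0e4,
                                w_inj_kg_s=5.0, c_inj_kg_kg=0.013, c0_kg_kg=0.013)
print(f"core concentration after 6 h: {c[-1] * 100:.1f} wt-%")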