977 results for open circuit potential
Abstract:
The use of information technology (IT) in dentistry is far ranging. In order to produce a working document for the dental educator, this paper focuses on those methods where IT can assist in the education and competence development of dental students and dentists (e.g. e-learning, distance learning, simulations and computer-based assessment). Web pages and other information-gathering devices have become an essential part of our daily life, as they provide extensive information on all aspects of our society. This is mirrored in dental education, where there are many different tools available, as listed in this report. IT offers added value to traditional teaching methods and examples are provided. In spite of the continuing debate on the learning effectiveness of e-learning applications, students request such approaches as an adjunct to the traditional delivery of learning materials. Faculty require support to enable them to use the technology effectively to the benefit of their students. This support should be provided by the institution and it is suggested that, where possible, institutions should appoint an e-learning champion with good interpersonal skills to support and encourage faculty change. From a global perspective, all students and faculty should have access to e-learning tools. This report encourages open access to e-learning material, platforms and programs. Such learning materials must have well-defined learning objectives and undergo peer review to ensure content validity, accuracy, currency, the use of evidence-based data and the use of best practices. To ensure that the developers' intellectual property rights are protected, the original content needs to be secure from unauthorized changes. Strategies and recommendations on how to improve the quality of e-learning are outlined. In the area of assessment, traditional examination schemes can be enriched by IT, whilst the Internet can provide many innovative approaches. Future trends in IT will revolve around improved uptake and access facilitated by the technology (hardware and software). The use of Web 2.0 shows considerable promise and this may have implications on a global level. For example, the one-laptop-per-child project is the best example of what Web 2.0 can do: minimal use of hardware to maximize use of the Internet infrastructure. In essence, simple technology can overcome many of the barriers to learning. IT will always remain exciting, as it is always changing, and its users, whether dental students, educators or patients, are like chameleons adapting to the ever-changing landscape.
Abstract:
Prediction of radiated fields from transmission lines has not previously been studied from a panoptical power system perspective. The application of BPL technologies to overhead transmission lines would benefit greatly from an ability to simulate real power system environments, not limited to the transmission lines themselves. Presently, circuit-based transmission line models used by EMTP-type programs utilize Carson’s formula for a waveguide parallel to an interface. This formula is not valid for calculations at high frequencies once the effects of earth-return currents are considered. This thesis explains the challenges of developing improved models, explores an approach to combining circuit-based and electromagnetics modeling to predict radiated fields from transmission lines, exposes inadequacies of simulation tools, and suggests methods of extending the validity of transmission line models into very high frequency ranges. Electromagnetics programs are commonly used to study radiated fields from transmission lines. However, an approach is proposed here which is also able to incorporate the components of a power system through the combined use of EMTP-type models. Carson’s formulas address the series impedance of electrical conductors above and parallel to the earth. These equations have been analyzed to show their inherent assumptions and their implications. Additionally, their lack of validity at higher frequencies has been demonstrated, showing the need to replace Carson’s formulas for these types of studies. This body of work leads to several conclusions about the relatively new study of BPL. Foremost, there is a gap in modeling capabilities which has been bridged through integration of circuit-based and electromagnetics modeling, allowing more realistic prediction of BPL performance and radiated fields. The proposed approach is limited in its scope of validity due to the formulas used by EMTP-type software. To extend the range of validity, a new set of equations must be identified and implemented in the approach. Several potential methods of implementation have been explored. Though an appropriate set of equations has not yet been identified, further research in this area will benefit from a clear depiction of the next important steps and how they can be accomplished.
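For readers unfamiliar with Carson-type line models, a minimal sketch follows of the per-unit-length self-impedance of a conductor above lossy earth, evaluated with the complex-depth simplification often attributed to Dubanton and Deri et al. rather than Carson's full integral. The line height, conductor radius and soil resistivity below are purely illustrative assumptions, not the thesis's parameters; the point of the sweep is only to show why the quasi-TEM assumption behind such formulas becomes questionable once the line height is no longer electrically short at BPL frequencies.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # permeability of free space (H/m)

def self_impedance_complex_depth(f, h, r_cond, rho_earth):
    """Per-unit-length self-impedance (ohm/m) of a conductor at height h (m)
    above lossy earth, using the complex-depth simplification of Carson's
    earth-return correction. rho_earth is soil resistivity (ohm*m);
    conductor internal impedance is neglected for brevity."""
    w = 2.0 * np.pi * f
    p = np.sqrt(rho_earth / (1j * w * MU0))   # complex penetration depth of the earth return
    return 1j * w * MU0 / (2.0 * np.pi) * np.log(2.0 * (h + p) / r_cond)

# Illustrative sweep: quasi-TEM line models assume the height is electrically short.
# At BPL frequencies (MHz) h/lambda is no longer negligible, which is the validity
# problem discussed in the abstract.
for f in [60.0, 1e3, 1e6, 30e6]:
    z = self_impedance_complex_depth(f, h=15.0, r_cond=0.015, rho_earth=100.0)
    print(f"f = {f:>10.0f} Hz  Z' = {z.real:.3e} + j{z.imag:.3e} ohm/m  "
          f"h/lambda = {15.0 * f / 3e8:.4f}")
```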
Abstract:
Reuse distance analysis, the prediction of how many distinct memory addresses will be accessed between two accesses to a given address, has been established as a useful technique in profile-based compiler optimization, but the cost of collecting the memory reuse profile has been prohibitive for some applications. In this report, we propose using the hardware monitoring facilities available in existing CPUs to gather an approximate reuse distance profile. The difficulties associated with this monitoring technique are discussed, most importantly that there is no obvious link between the reuse profile produced by hardware monitoring and the actual reuse behavior. Potential applications which would be made viable by a reliable hardware-based reuse distance analysis are identified.
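As a concrete illustration of the quantity being profiled, the sketch below computes exact reuse distances from a toy address trace with a simple LRU-stack simulation. The trace and names are illustrative only; a hardware-sampled profile of the kind proposed in the report would only approximate these values, which is exactly the gap the report discusses.

```python
from collections import OrderedDict

def reuse_distances(trace):
    """Exact reuse distance per access: the number of distinct addresses touched
    since the previous access to the same address (None = first touch / cold miss).
    O(N*M) stack simulation for clarity; production tools use trees or sampling."""
    stack = OrderedDict()          # most-recently-used address is last
    distances = []
    for addr in trace:
        if addr in stack:
            keys = list(stack)
            # distance = number of distinct addresses accessed since the last touch
            distances.append(len(keys) - 1 - keys.index(addr))
            stack.move_to_end(addr)
        else:
            distances.append(None)
            stack[addr] = True
    return distances

trace = ["a", "b", "c", "a", "b", "b", "d", "a"]
print(reuse_distances(trace))   # [None, None, None, 2, 2, 0, None, 2]
```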
Abstract:
The push for improved fuel economy and reduced emissions has led to great achievements in engine performance and control. These achievements have increased the efficiency and power density of gasoline engines dramatically in the last two decades. With the added power density, thermal management of the engine has become increasingly important. Therefore, it is critical to have accurate temperature and heat transfer models as well as data to validate them. With the recent adoption of the 2025 Corporate Average Fuel Economy (CAFE) standard, there has been a push to improve the thermal efficiency of internal combustion engines even further. Lean and dilute combustion regimes along with waste heat recovery systems are being explored as options for improving efficiency. In order to understand how these technologies will impact engine performance and each other, this research analyzed the engine both from a first-law energy balance perspective and from a second-law exergy perspective. This research also provided insights into the effects of various parameters on in-cylinder temperatures and heat transfer, and provided data for validation of other models. It was found that engine load was the dominant factor for the energy distribution, with higher loads resulting in lower coolant heat transfer and higher brake work and exhaust energy. From an exergy perspective, the exhaust system provided the best waste heat recovery potential due to its significantly higher temperatures compared to the cooling circuit. EGR and lean combustion both resulted in lower combustion chamber and exhaust temperatures; however, in most cases the increased flow rates resulted in a net increase in the energy in the exhaust. The exhaust exergy, on the other hand, either increased or decreased depending on the location in the exhaust system and the other operating conditions. The effects of dilution from lean operation and EGR were compared using a dilution ratio, and the results showed that lean operation resulted in a larger increase in efficiency than the same amount of dilution with EGR. Finally, a method for identifying fuel spray impingement from piston surface temperature measurements was found. Note: The material contained in this section is planned for submission as part of a journal article and/or conference paper in the future.
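To make the first-law/second-law contrast concrete, the following sketch compares the energy and exergy content of an exhaust-like gas stream with the exergy of heat rejected at coolant temperature, using the standard flow-exergy and Carnot-factor expressions. The temperatures, specific heat and dead state are illustrative placeholders, not measured data from this work; they simply show why the hotter exhaust path is the more attractive waste-heat-recovery target.

```python
import math

T0 = 298.15  # dead-state (ambient) temperature, K

def flow_exergy_ideal_gas(T, cp=1100.0):
    """Specific flow exergy (J/kg) of an ideal-gas stream at ambient pressure:
    ex = cp*(T - T0) - T0*cp*ln(T/T0)."""
    return cp * (T - T0) - T0 * cp * math.log(T / T0)

def exergy_fraction_of_heat(T):
    """Fraction of heat rejected at temperature T that is exergy (Carnot factor)."""
    return 1.0 - T0 / T

# Illustrative placeholder temperatures, not thesis measurements.
T_exh, T_cool = 900.0, 360.0
h_exh = 1100.0 * (T_exh - T0)           # enthalpy above the dead state (J/kg)
ex_exh = flow_exergy_ideal_gas(T_exh)
print(f"exhaust stream : {h_exh/1e3:6.1f} kJ/kg energy, {ex_exh/1e3:6.1f} kJ/kg exergy "
      f"({ex_exh/h_exh:.0%} of the energy is work potential)")
print(f"coolant heat   : only {exergy_fraction_of_heat(T_cool):.0%} of each joule "
      f"rejected at {T_cool:.0f} K is work potential")
```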
Abstract:
This Ph.D. research comprises three major components: (i) a characterization study to analyze the composition of defatted corn syrup (DCS) from a dry corn mill facility, (ii) hydrolysis experiments to optimize the production of fermentable sugars and an amino acid platform using DCS, and (iii) sustainability analyses. Analyses of DCS included total solids, ash content, total protein, amino acids, inorganic elements, starch, total carbohydrates, lignin, organic acids, glycerol, and presence of functional groups. Total solids content was 37.4% (± 0.4%) by weight, and the mass balance closure was 101%. Total carbohydrates [27% (± 5%) wt.] comprised starch (5.6%), soluble monomer carbohydrates (12%) and non-starch carbohydrates (10%). Hemicellulose components (structural and non-structural) were: xylan (6%), xylose (1%), mannan (1%), mannose (0.4%), arabinan (1%), arabinose (0.4%), galactan (3%) and galactose (0.4%). Based on the measured physical and chemical components, a biochemical conversion route with subsequent fermentation to value-added products was identified as promising. DCS has potential to serve as an important fermentation feedstock for bio-based chemicals production. In the sugar hydrolysis experiments, reaction parameters such as acid concentration and retention time were analyzed to determine the optimal conditions to maximize monomer sugar yields while keeping inhibitors at a minimum. Total fermentable sugars produced can reach approximately 86% of theoretical yield when subjected to dilute acid pretreatment (DAP). DAP followed by subsequent enzymatic hydrolysis was most effective for 0 wt% acid hydrolysate samples and least effective for 1 and 2 wt% acid hydrolysate samples. The best hydrolysis scheme for DCS from an industrial point of view is a standalone 60-minute dilute acid hydrolysis at 2 wt% acid concentration. The combined effects of hydrolysis reaction time, temperature and enzyme-to-substrate ratio were studied to develop a hydrolysis process that optimizes the production of amino acids from DCS. Four key hydrolysis pathways were investigated for the production of amino acids using DCS. The first hydrolysis pathway is amino acid analysis using DAP. The second pathway is DAP of DCS followed by protein hydrolysis using proteases [Trypsin, Pronase E (Streptomyces griseus) and Protex 6L]. The third hydrolysis pathway investigated a standalone experiment using proteases (Trypsin, Pronase E, Protex 6L, and Alcalase) on the DCS without any pretreatment. The final pathway investigated the use of Accellerase 1500® and Protex 6L to simultaneously produce fermentable sugars and amino acids over a 24-hour hydrolysis reaction time. The three key objectives of the techno-economic analysis component of this Ph.D. research included: (i) development of a process design for the production of both the sugar and amino acid platforms with DAP using DCS, (ii) a preliminary cost analysis to estimate the initial capital cost and operating cost of this facility, and (iii) a greenhouse gas analysis to understand the environmental impact of this facility. Using Aspen Plus®, a conceptual process design has been constructed. Finally, Aspen Plus Economic Analyzer® and Simapro® software were employed to conduct the cost analysis and the carbon footprint analysis of this process facility, respectively. Another section of my Ph.D. research work focused on the life cycle assessment (LCA) of commonly used dairy feeds in the U.S.
Greenhouse gas (GHG) emissions analysis was conducted for cultivation, harvesting, and production of common dairy feeds used for the production of dairy milk in the U.S. The goal was to determine the carbon footprint [grams CO2 equivalents (gCO2e)/kg of dry feed] in the U.S. on a regional basis, identify key inputs, and make recommendations for emissions reduction. The final section of my Ph.D. research work was an LCA of a single dairy feed mill located in Michigan, USA. The primary goal was to conduct a preliminary assessment of dairy feed mill operations and ultimately determine the GHG emissions for 1 kilogram of milled dairy feed.
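A minimal sketch of the footprint aggregation implied here: the carbon footprint in gCO2e per kilogram of dry feed is the sum over inputs of quantity times emission factor, which also exposes the key contributing inputs. All inventory items and factors below are hypothetical placeholders for illustration, not values from this study.

```python
# Hypothetical inventory for 1 kg of dry feed: (quantity per kg feed, emission factor
# in gCO2e per unit). Placeholder values only, not study data.
inventory = {
    "diesel (L)":           (0.004, 2700.0),
    "N fertilizer (kg N)":  (0.015, 5600.0),
    "electricity (kWh)":    (0.020,  650.0),
    "transport (t*km)":     (0.050,  120.0),
}

def footprint_g_co2e_per_kg(inv):
    """Sum of (quantity x emission factor) over all inputs, per kg dry feed."""
    return sum(qty * ef for qty, ef in inv.values())

total = footprint_g_co2e_per_kg(inventory)
for name, (qty, ef) in sorted(inventory.items(), key=lambda kv: -kv[1][0] * kv[1][1]):
    print(f"{name:<22} {qty * ef:7.1f} gCO2e/kg  ({qty * ef / total:.0%})")
print(f"{'total':<22} {total:7.1f} gCO2e/kg dry feed")
```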
Abstract:
Creating Lakes from Open Pit Mines: Processes and Considerations, Emphasis on Northern Environments. This document summarizes the literature on mining pit lakes (through 2007), with a particular focus on issues that are likely to be of special relevance to the creation and management of pit lakes in northern climates. Pit lakes are simply waterbodies formed when the open pit left at the completion of mining operations is filled with water. Separate sections deal with different aspects of pit lakes, including their morphometry, geology, hydrogeology, geochemistry, and biology. Like natural lakes, mining pit lakes display a huge diversity in each of these subject areas. However, pit lakes are young and therefore are typically in a non-equilibrium state with respect to their rate of filling, water quality, and biology. Depending on the type and location of the mine, there may be opportunities to enhance the recreational or ecological benefits of a given pit lake, for example by re-landscaping and re-vegetating the shoreline, by adding engineered habitat for aquatic life, and by maintaining water quality. The creation of a pit lake may be a regulatory requirement to mitigate environmental impacts from mining operations, and/or be included as part of a closure and reclamation plan. Based on published case studies of pit lakes, large-scale bio-engineering projects have had mixed success. A common consensus is that manipulation of pit lake chemistry is difficult, expensive, and takes many years to achieve remediation goals. For this reason, it is prudent to take steps throughout mine operation to reduce the likelihood of future water quality problems upon closure. It also makes sense to engineer the lake in such a way that it will achieve its maximal end-use potential, whether that be permanent and safe storage of mine waste, habitat for aquatic life, recreation, or water supply.
Abstract:
Social work at global levels, and across international and intercultural divides, is probably more important now than ever before in our history. It may be that the very form our ideas about intercultural work take needs to be re-examined in the light of recent global changes and uncertainties. In this short position paper I wish to offer some considerations about how we might approach the field of intercultural social work in order to gain new insights about how we practise at both local and global levels. For me, much of the promise of an intercultural social work (and for the purposes of this paper I see aspects of international social work in much the same light) lies in its focus on the way we categorise ourselves, our ideas and experiences in relation to others. The very notion of intercultural or international social work is based on assumptions about boundaries, differences, ways of differentiating and defining sets of experiences. Whether these are deemed "cultural" or "national" is of less importance. Once we are forced to examine these assumptions, about how and why we categorise ourselves in relation to other people in particular ways, the way is opened up for us to be much more critical about the bases of our own, often very deep-seated, thinking. This understanding, about how and why notions of "difference" operate in the way they do, can potentially open our understanding to all the other ways, besides cultural or national labelling, in which we categorise and create differences between ourselves and others. Intercultural social work, taken as a potential site for understanding the creation of difference then, has the potential to help us critically examine the bases of much of our practice in any setting, since most practice involves some kind of categorisation of phenomena.
Abstract:
Today, Digital Systems and Services for Technology Supported Learning and Education are recognized as the key drivers to transform the way that individuals, groups and organizations "learn" and the way to "assess learning" in the 21st century. These transformations influence: Objectives – moving from acquiring new "knowledge" to developing new and relevant "competences"; Methods – moving from "classroom"-based teaching to "context-aware" personalized learning; and Assessment – moving from "life-long" degrees and certifications to "on-demand" and "in-context" accreditation of qualifications. Within this context, promoting Open Access to Formal and Informal Learning is currently a key issue in the public discourse and the global dialogue on Education, including Massive Open Online Courses (MOOCs) and Flipped School Classrooms. This volume on Digital Systems for Open Access to Formal and Informal Learning contributes to the international dialogue between researchers, technologists, practitioners and policy makers in Technology Supported Education and Learning. It addresses emerging issues related to both theory and practice, as well as methods and technologies that can support Open Access to Formal and Informal Learning. The twenty chapters, contributed by international experts who are actively shaping the future of Educational Technology around the world, present topics such as: the evolution of University Open Courses in Transforming Learning; Supporting Open Access to Teaching and Learning of People with Disabilities; Assessing Student Learning in Online Courses; Digital Game-based Learning for School Education; Open Access to Virtual and Remote Labs for STEM Education; Teachers' and Schools' ICT Competence Profiling; and Web-Based Education and Innovative Leadership in a K-12 International School Setting. An in-depth blueprint of the promise, potential, and imminent future of the field, Digital Systems for Open Access to Formal and Informal Learning is necessary reading for researchers and practitioners, as well as undergraduate and postgraduate students, in educational technology.
Abstract:
Earth observations (EO) represent a growing and valuable resource for many scientific, research and practical applications carried out by users around the world. Access to EO data for some applications or activities, like climate change research or emergency response activities, becomes indispensable for their success. However, often EO data or products made of them are (or are claimed to be) subject to intellectual property law protection and are licensed under specific conditions regarding access and use. Restrictive conditions on data use can be prohibitive for further work with the data. Global Earth Observation System of Systems (GEOSS) is an initiative led by the Group on Earth Observations (GEO) with the aim to provide coordinated, comprehensive, and sustained EO and information for making informed decisions in various areas beneficial to societies, their functioning and development. It seeks to share data with users world-wide with the fewest possible restrictions on their use by implementing GEOSS Data Sharing Principles adopted by GEO. The Principles proclaim full and open exchange of data shared within GEOSS, while recognising relevant international instruments and national policies and legislation through which restrictions on the use of data may be imposed. The paper focuses on the issue of the legal interoperability of data that are shared with varying restrictions on use with the aim to explore the options of making data interoperable. The main question it addresses is whether the public domain or its equivalents represent the best mechanism to ensure legal interoperability of data. To this end, the paper analyses legal protection regimes and their norms applicable to EO data. Based on the findings, it highlights the existing public law statutory, regulatory, and policy approaches, as well as private law instruments, such as waivers, licenses and contracts, that may be used to place the datasets in the public domain, or otherwise make them publicly available for use and re-use without restrictions. It uses GEOSS and the particular characteristics of it as a system to identify the ways to reconcile the vast possibilities it provides through sharing of data from various sources and jurisdictions on the one hand, and the restrictions on the use of the shared resources on the other. On a more general level, the paper seeks to draw attention to the obstacles and potential regulatory solutions for sharing factual or research data for the purposes that go beyond research and education.
Abstract:
The in-medium physics of heavy quarkonium is an ideal proving ground for our ability to connect knowledge about the fundamental laws of physics to phenomenological predictions. One possible route to take is to attempt a description of heavy quark bound states at finite temperature through a Schrödinger equation with an instantaneous potential. Here we review recent progress in devising a comprehensive approach to define such a potential from first principles QCD and extract its values, which are in general complex, from non-perturbative lattice QCD simulations. Based on the theory of open quantum systems, we will show how to interpret the role of the imaginary part in terms of spatial decoherence by introducing the concept of a stochastic potential. Shortcomings as well as possible paths for improvement are discussed.
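Schematically, and with conventions, prefactors and normalizations that vary between papers in this literature, the setup reviewed here can be summarized as follows; the relation between the imaginary part and the noise correlator is indicated only up to normalization.

```latex
% Schematic only: conventions and prefactors differ between references.
% In-medium Schrödinger equation for the Q\bar{Q} relative wavefunction with an
% instantaneous, in general complex, potential:
\[
  i\,\partial_t \psi(t,\mathbf{r})
    = \Big[-\tfrac{\nabla^2}{m_Q} + \operatorname{Re}V(r) + i\,\operatorname{Im}V(r)\Big]\psi(t,\mathbf{r}),
  \qquad \operatorname{Im}V(r) \le 0 .
\]
% Stochastic-potential picture: trade \operatorname{Im}V for Gaussian noise
% \theta(t,\mathbf{r}) added to the real potential, with correlator
\[
  \big\langle \theta(t,\mathbf{r})\,\theta(t',\mathbf{r}')\big\rangle
    = \Gamma(\mathbf{r},\mathbf{r}')\,\delta(t-t'),
  \qquad \operatorname{Im}V \sim -\tfrac{1}{2}\,\Gamma ,
\]
% so the noise-averaged evolution reproduces the complex potential, while the finite
% correlation length of \Gamma encodes spatial decoherence of the bound-state wavefunction.
```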
Abstract:
AIMS Device-based pacing-induced diaphragmatic stimulation (PIDS) may have therapeutic potential for chronic heart failure (HF) patients. We studied the effects of PIDS on cardiac function and functional outcomes. METHODS AND RESULTS In 24 chronic HF patients with cardiac resynchronization therapy (CRT), an additional electrode was attached to the left diaphragm. Patients were randomized into two groups and received the following PIDS modes for 3 weeks in differing sequences: (i) PIDS off (control group); (ii) PIDS 0 ms mode (PIDS simultaneously with the ventricular CRT pulse); or (iii) PIDS optimized mode (PIDS with optimized delay to the ventricular CRT pulse). For PIDS optimization, acoustic cardiography was used. Effects of each PIDS mode on dyspnoea, power during exercise testing, and left ventricular ejection fraction (LVEF) were assessed. Dyspnoea improved with the PIDS 0 ms mode (P = 0.057) and the PIDS optimized mode (P = 0.034) as compared with the control group. Maximal power increased from a median of 100.5 W in the control group to 104.0 W in the PIDS 0 ms mode (P = 0.092) and 109.5 W in the PIDS optimized mode (P = 0.022). Median LVEF was 33.5% in the control group, 33.0% in the PIDS 0 ms mode, and 37.0% in the PIDS optimized mode (P = 0.763 and P = 0.009 as compared with the control group, respectively). PIDS was asymptomatic in all patients. CONCLUSION PIDS improves dyspnoea, working capacity, and LVEF in chronic HF patients over a 3-week period in addition to CRT. This pilot study demonstrates proof of principle of an innovative technology which should be confirmed in a larger sample. TRIAL REGISTRATION NCT00769678.
Abstract:
Treatment for cancer often involves combination therapies used both in medical practice and clinical trials. Korn and Simon listed three reasons for the utility of combinations: 1) biochemical synergism, 2) differential susceptibility of tumor cells to different agents, and 3) higher achievable dose intensity by exploiting non-overlapping toxicities to the host. Even if the toxicity profile of each agent of a given combination is known, the toxicity profile of the agents used in combination must be established. Thus, caution is required when designing and evaluating trials with combination therapies. Traditional clinical trial design is based on the consideration of a single drug. However, a trial of drugs in combination requires a dose-selection procedure that is vastly different from that needed for a single-drug trial. When two drugs are combined in a phase I trial, an important trial objective is to determine the maximum tolerated dose (MTD). The MTD is defined as the dose level below the dose at which two of six patients experience drug-related dose-limiting toxicity (DLT). In phase I trials that combine two agents, more than one MTD generally exists, although all are rarely determined. For example, there may be an MTD that includes high doses of drug A with lower doses of drug B, another for high doses of drug B with lower doses of drug A, and yet another for intermediate doses of both drugs administered together. With classic phase I trial designs, only one MTD is identified. Our new trial design allows efficient identification of more than one MTD within the context of a single protocol. The two drugs combined in our phase I trial are temsirolimus and bevacizumab. Bevacizumab is a monoclonal antibody targeting the vascular endothelial growth factor (VEGF) pathway, which is fundamental for tumor growth and metastasis. One mechanism of tumor resistance to antiangiogenic therapy is upregulation of hypoxia-inducible factor 1α (HIF-1α), which mediates responses to hypoxic conditions. Temsirolimus has resulted in reduced levels of HIF-1α, making this an ideal combination therapy. Dr. Donald Berry developed a trial design schema for evaluating low, intermediate and high dose levels of two drugs given in combination, as illustrated in a recently published paper in Biometrics entitled “A Parallel Phase I/II Clinical Trial Design for Combination Therapies.” His trial design utilized cytotoxic chemotherapy. We adapted this design schema by incorporating greater numbers of dose levels for each drug. Additional dose levels are being examined because it has been the experience of phase I trials that targeted agents, when given in combination, are often effective at dosing levels lower than the FDA-approved dose of said drugs. A total of thirteen dose levels, including representative high, intermediate and low dose levels of temsirolimus with representative high, intermediate, and low dose levels of bevacizumab, will be evaluated. We hypothesize that our new trial design will facilitate identification of more than one MTD, if they exist, efficiently and within the context of a single protocol. Doses gleaned from this approach could potentially allow for a more personalized approach to dose selection from among the MTDs obtained, based upon a patient’s specific co-morbid conditions or anticipated toxicities.
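The MTD definition quoted above is the one used by the classical single-agent 3+3 rule. As background for how such dose-escalation decisions are typically made, a minimal sketch of that rule follows; it is not the parallel combination design developed for this trial, which evaluates multiple dose pairs within one protocol.

```python
def three_plus_three_decision(n_treated, n_dlt):
    """Classical 3+3 decision at one dose level, given the number of patients
    treated (3 or 6) and the number with dose-limiting toxicity (DLT).
    Returns 'escalate', 'expand' (enroll 3 more at this dose), or
    'de-escalate' (this dose exceeds the MTD). Sketch of the single-agent
    rule behind the abstract's MTD definition, not the trial's own design."""
    if n_treated == 3:
        if n_dlt == 0:
            return "escalate"
        if n_dlt == 1:
            return "expand"        # treat 3 more patients at the same dose
        return "de-escalate"       # >=2/3 DLTs: MTD is below this dose
    if n_treated == 6:
        if n_dlt <= 1:
            return "escalate"      # <=1/6 DLTs: dose is tolerable
        return "de-escalate"       # >=2/6 DLTs: MTD is the next lower dose
    raise ValueError("3+3 cohorts are evaluated after 3 or 6 patients")

# Example: 1/3 DLTs -> expand the cohort; 2/6 DLTs overall -> de-escalate,
# so the MTD is declared at the dose level below.
print(three_plus_three_decision(3, 1))   # expand
print(three_plus_three_decision(6, 2))   # de-escalate
```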
Abstract:
Medical instrumentation used in diagnosis and treatment relies on the accurate detection and processing of various physiological events and signals. While signal detection technology has improved greatly in recent years, there remain inherent delays in signal detection/processing. These delays may have significant negative clinical consequences during various pathophysiological events. Reducing or eliminating such delays would increase the ability to provide successful early intervention in certain disorders, thereby increasing the efficacy of treatment. In recent years, a physical phenomenon referred to as Negative Group Delay (NGD), demonstrated in simple electronic circuits, has been shown to temporally advance the detection of analog waveforms. Specifically, the output is temporally advanced relative to the input, as the time delay through the circuit is negative. The circuit output precedes the complete detection of the input signal. This process is referred to as signal advance (SA) detection. An SA circuit model incorporating NGD was designed, developed and tested. It imparts a constant temporal signal advance over a pre-specified spectral range in which the output is almost identical to the input signal (i.e., it has minimal distortion). Certain human patho-electrophysiological events are good candidates for the application of temporally-advanced waveform detection. SA technology has potential in early arrhythmia and epileptic seizure detection and intervention. Demonstrating reliable and consistent temporally advanced detection of electrophysiological waveforms may enable intervention in a pathological event (much) earlier than previously possible. SA detection could also be used to improve the performance of neural computer interfaces, neurotherapy applications, radiation therapy and imaging. In this study, the performance of a single-stage SA circuit model on a variety of constructed input signals and human ECGs is investigated. The data obtained are used to quantify and characterize the temporal advances and circuit gain, as well as distortions in the output waveforms relative to their inputs. This project combines elements of physics, engineering, signal processing, statistics and electrophysiology. Its success has important consequences for the development of novel interventional methodologies in cardiology and neurophysiology as well as significant potential in a broader range of both biomedical and non-biomedical areas of application.
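As a minimal illustration of negative group delay, consider the idealized first-order response H(jw) = 1 + jw*tau, which an op-amp RC stage can approximate over a limited band: its group delay -dphi/dw = -tau/(1 + (w*tau)^2) is negative at low frequencies, so band-limited inputs appear temporally advanced at the output. The sketch below uses an illustrative time constant and is not the study's single-stage SA circuit model.

```python
import numpy as np

tau = 1e-3                       # illustrative time constant (s), e.g. an RC product
f = np.logspace(0, 4, 400)       # 1 Hz .. 10 kHz
w = 2.0 * np.pi * f

H = 1.0 + 1j * w * tau           # idealized first-order signal-advance response
phase = np.unwrap(np.angle(H))
tau_g = -np.gradient(phase, w)   # group delay: negative values mean temporal advance

print(f"group delay at {f[0]:7.0f} Hz : {tau_g[0]*1e3:+.3f} ms "
      f"(analytic {-tau/(1.0 + (w[0]*tau)**2)*1e3:+.3f} ms)")
print(f"group delay at {f[-1]:7.0f} Hz : {tau_g[-1]*1e6:+.3f} us "
      f"(the advance shrinks as w*tau grows, bounding the usable bandwidth)")
```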
Abstract:
This participatory action-research project addressed the hypothesis that strengthened community and women's capacity for self-development will lead to action to address maternal health problems and the prevention of maternal morbidity and mortality in Mali. Research objectives were: (1) to undertake a comparative cross-sectional study of the association of community capacity with improved maternal health in rural areas of Sanando, Mali, where capacity building interventions have taken place in some villages but not in others; and (2) to describe women's maternal health status, access to and use of maternal health services given their residence in program or comparison communities. The participatory action research project was an integrated qualitative and quantitative study using participatory rural appraisal exercises, semi-structured group interviews and a cross-sectional survey. Factors related to community capacity for self-development were identified: community harmony; an understanding of the benefits of self-development; dynamic leadership; and a structure to implement collective activities. A distinct difference between the program and comparison villages was the commitment to train and support traditional birth attendants (TBAs). The TBAs in the program villages work in the context of the wider, integrated self-development program and, 10 years after their initial training, the TBAs continue to practice. Many women experience labor and childbirth alone or are attended by an untrained relative in both program and comparison villages. Nevertheless a significant change is apparent, with more women in program villages than in comparison villages being assisted by the TBAs. The delivery practices of the TBAs reveal the positive impact of their training in the "three cleans" (clean hands of the assistant, clean delivery surface and clean cord-cutting). The findings of this study indicate a significant level of unmet need for child spacing methods in all villages. The training and support of TBAs in the program villages yielded significant improvements in their delivery practices, and resulting outcomes for women and infants. However, potential exists for further community action. Capacities for self-development have not yet been directed toward an action plan encompassing other Safe Motherhood interventions, including access to family planning services and emergency obstetric care services.
Abstract:
PURPOSE Hodgkin lymphoma (HL) is a highly curable disease. Reducing late complications and second malignancies has become increasingly important. Radiotherapy target paradigms are currently changing and radiotherapy techniques are evolving rapidly. DESIGN This overview reports to what extent target volume reduction with involved-node radiotherapy (INRT) and advanced radiotherapy techniques, such as intensity-modulated radiotherapy (IMRT) and proton therapy, compared with involved-field radiotherapy (IFRT) and 3D radiotherapy (3D-RT), can reduce high doses to organs at risk (OAR), and examines the issues that still remain open. RESULTS Although no comparison of all available techniques on identical patient datasets exists, clear patterns emerge. Advanced dose-calculation algorithms (e.g., convolution-superposition/Monte Carlo) should be used in mediastinal HL. INRT consistently reduces treated volumes when compared with IFRT, with the exact amount depending on the INRT definition. With INRT, the number of patients that might significantly benefit from highly conformal techniques such as IMRT over 3D-RT regarding high-dose exposure to OAR is smaller. The impact of the larger volumes treated with low doses in advanced techniques is unclear. The type of IMRT used (static/rotational) is of minor importance. All advanced photon techniques result in similar potential benefits and disadvantages; therefore, only the degree of modulation should be chosen based on individual treatment goals. Treatment in deep inspiration breath hold is being evaluated. Protons theoretically provide both excellent high-dose conformality and reduced integral dose. CONCLUSION Further reduction of treated volumes most effectively reduces OAR dose, most likely without disadvantages if the excellent control rates achieved currently are maintained. For both IFRT and INRT, the benefits of advanced radiotherapy techniques depend on the individual patient/target geometry. Their use should therefore be decided case by case with comparative treatment planning.