977 results for Worst-case dimensioning
Abstract:
A basic prerequisite for in vivo X-ray imaging of the lung is the exact determination of radiation dose. Achieving resolutions of the order of micrometres may become particularly challenging owing to increased dose, which in the worst case can be lethal for the imaged animal model. A framework for linking image quality to radiation dose in order to optimize experimental parameters with respect to dose reduction is presented. The approach may find application for current and future in vivo studies to facilitate proper experiment planning and radiation risk assessment on the one hand and exploit imaging capabilities on the other.
Abstract:
This paper examines the impact of disastrous and ‘ordinary’ floods on human societies in what is now Austria. The focus is on urban areas and their neighbourhoods. Institutional sources such as the accounts of the bridge masters, charters, statutes and official petitions show that city communities were well acquainted with this permanent risk: in fact, an office was established for the restoration of bridges and the maintenance of water defences, and large depots for timber and water pipes ensured that the reconstruction of bridges and of the water supply system could start immediately after the floods had subsided. Carpenters and similar groups gained 10 to 20 per cent of their income from the repair of bridges and other flood damage. The construction of houses in endangered zones was adapted in the light of worst-case experience. Thus, we may describe the communities living along the Central European rivers as ‘cultures of flood management’. This special knowledge vanished, however, from the mid-nineteenth century onwards, when river regulation works gave people a false sense of security.
Abstract:
Every day we cope with potentially disadvantageous upcoming events, so it makes sense to be prepared for the worst case. Such a 'pessimistic' bias is reflected in brain activation during emotion processing. Healthy individuals underwent functional neuroimaging while viewing emotional stimuli that had earlier been cued either ambiguously or unambiguously with respect to their emotional valence. Presentation of ambiguously announced pleasant pictures compared with unambiguously announced pleasant pictures resulted in increased activity in the ventrolateral prefrontal, premotor and temporal cortex, and in the caudate nucleus. This was not the case for the respective negative conditions. This indicates that pleasant stimuli after ambiguous cueing provided 'unexpected' emotional input, resulting in the adaptation of brain activity. It strengthens the hypothesis of a 'pessimistic' bias of brain activation toward ambiguous emotional events.
Abstract:
Since we do not know what the future holds for us, we prepare for expected emotional events in order to deal with a pleasant or threatening environment. From an evolutionary perspective, it makes sense to be particularly prepared for the worst-case scenario. We evaluated whether this assumption is reflected in the central nervous information processing associated with expecting visual stimuli of unknown emotional valence. While being scanned with functional magnetic resonance imaging, healthy subjects were cued to expect and then perceive visual stimuli with a known emotional valence (pleasant, unpleasant, or neutral), as well as stimuli of unknown valence that could have been either pleasant or unpleasant. While anticipating pictures of unknown valence, the activity of emotion-processing brain areas was similar to the activity associated with expecting unpleasant pictures, but there were no areas in which the activity was similar to that when expecting pleasant pictures. The activity of the identified regions, including the bilateral insula, right inferior frontal gyrus, medial thalamus, and red nucleus, further correlated with individual mood ratings: the worse the mood, the higher the activity. These areas are presumably involved in a network for internal adaptation and preparation processes that enables acting on potential or certain unpleasant events. Their activity appears to reflect a 'pessimistic' bias by anticipating events of unknown valence to be unpleasant.
Abstract:
BACKGROUND: Contemporary pacemakers (PMs) are powered by primary batteries with a limited energy-storing capacity. PM replacements because of battery depletion are common and unpleasant and bear the risk of complications. Batteryless PMs that harvest energy inside the body may overcome these limitations. OBJECTIVE: The goal of this study was to develop a batteryless PM powered by a solar module that converts transcutaneous light into electrical energy. METHODS: Ex vivo measurements were performed with solar modules placed under pig skin flaps exposed to different irradiation scenarios (direct sunlight, shade outdoors, and indoors). Subsequently, 2 sunlight-powered PMs featuring a 4.6-cm2 solar module were implanted in vivo in a pig. One prototype, equipped with an energy buffer, was run in darkness for several weeks to simulate a worst-case scenario. RESULTS: Ex vivo, median output power of the solar module was 1963 μW/cm2 (interquartile range [IQR] 1940-2107 μW/cm2) under direct sunlight exposure outdoors, 206 μW/cm2 (IQR 194-233 μW/cm2) in shade outdoors, and 4 μW/cm2 (IQR 3.6-4.3 μW/cm2) indoors (current PMs use approximately 10-20 μW). Median skin flap thickness was 4.8 mm. In vivo, prolonged SOO pacing was performed even with short irradiation periods. Our PM was able to pace continuously at a rate of 125 bpm (3.7 V at 0.6 ms) for 1½ months in darkness. CONCLUSION: Tomorrow's PMs might be batteryless and powered by sunlight. Because of the good skin penetrance of infrared light, a significant amount of energy can be harvested by a subcutaneous solar module even indoors. The use of an energy buffer allows periods of darkness to be overcome.
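As a rough sanity check of the power figures above, the sketch below (a minimal Python calculation, not part of the study) multiplies the reported median output power densities by the 4.6 cm² module area and compares the result with the stated 10-20 μW consumption of current pacemakers; the scenario names, densities and consumption bounds are taken from the abstract, everything else is illustrative.

```python
# Rough power-budget check for a subcutaneous solar module,
# using the median output power densities reported in the abstract.

MODULE_AREA_CM2 = 4.6             # solar module area used in the in vivo prototypes
PM_CONSUMPTION_UW = (10.0, 20.0)  # approximate consumption range of current pacemakers

# Median output power densities (uW/cm^2) measured ex vivo under a ~4.8 mm skin flap
scenarios = {
    "direct sunlight outdoors": 1963.0,
    "shade outdoors": 206.0,
    "indoors": 4.0,
}

for name, density_uw_per_cm2 in scenarios.items():
    harvested_uw = density_uw_per_cm2 * MODULE_AREA_CM2
    lo, hi = PM_CONSUMPTION_UW
    margin = harvested_uw / hi  # conservative margin vs. the upper consumption bound
    print(f"{name:>25}: {harvested_uw:8.1f} uW harvested "
          f"(~{margin:.1f}x the {hi:.0f} uW upper consumption bound)")
```

Indoors this yields roughly 18 μW, close to the upper consumption bound, which is consistent with the abstract's point that indoor harvesting remains significant and that an energy buffer is needed mainly to bridge prolonged darkness.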
Abstract:
Proton therapy is growing increasingly popular owing to its superior dose characteristics compared with conventional photon therapy. Protons travel a finite range in the patient body and then stop, delivering no dose beyond their range. However, because the range of a proton beam depends heavily on the tissue density along its beam path, uncertainties in patient setup position and inherent range calculation can degrade the dose distribution significantly. Despite these challenges unique to proton therapy, uncertainties are currently managed during proton treatment planning much as in conventional photon therapy. The goal of this dissertation research was to develop a treatment planning method and a plan evaluation method that address proton-specific issues regarding setup and range uncertainties. Treatment plan design method adapted to proton therapy: Currently, for proton therapy using a scanning beam delivery system, setup uncertainties are largely accounted for by geometrically expanding a clinical target volume (CTV) to a planning target volume (PTV). However, a PTV alone cannot adequately account for range uncertainties coupled to misaligned patient anatomy in the beam path, since it does not account for the change in tissue density. To remedy this problem, we proposed a beam-specific PTV (bsPTV) that accounts for the change in tissue density along the beam path due to the uncertainties. Our proposed method was successfully implemented, and its superiority over the conventional PTV was shown through a controlled experiment. Furthermore, we have shown that the bsPTV concept can be incorporated into beam angle optimization for better target coverage and normal tissue sparing for a selected lung cancer patient. Treatment plan evaluation method adapted to proton therapy: The dose-volume histogram of the CTV or of any other volume of interest at the time of planning does not represent the most probable dosimetric outcome of a given plan, as it does not include the uncertainties mentioned earlier. Currently, the PTV is used as a surrogate of the CTV's worst-case scenario for target dose estimation. However, because proton dose distributions are subject to change under these uncertainties, the validity of the PTV analysis method is questionable. To remedy this problem, we proposed the use of statistical parameters to quantify uncertainties directly on both the dose-volume histogram and the dose distribution. A robust plan analysis tool was successfully implemented to compute both the expectation value and the standard deviation of dosimetric parameters of a treatment plan under the uncertainties. For 15 lung cancer patients, the proposed method was used to quantify the dosimetric difference between the nominal situation and its expected value under the uncertainties.
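The robust plan evaluation described in the second part, replacing the PTV surrogate with the expectation value and standard deviation of dosimetric quantities computed directly over uncertainty scenarios, can be illustrated with a minimal sketch. This is only an outline under the assumption that per-scenario dose distributions have already been recomputed; the function name, array shapes and numbers are hypothetical and not taken from the dissertation.

```python
import numpy as np

def robust_dose_statistics(scenario_doses, weights=None):
    """Expectation value and standard deviation of per-voxel dose over a set
    of setup/range uncertainty scenarios.

    scenario_doses: array of shape (n_scenarios, n_voxels), one recomputed
                    dose distribution per sampled uncertainty scenario.
    weights:        optional scenario probabilities (default: uniform).
    """
    d = np.asarray(scenario_doses, dtype=float)
    w = np.full(d.shape[0], 1.0 / d.shape[0]) if weights is None else np.asarray(weights, float)
    w = w / w.sum()

    mean_dose = w @ d                    # expected dose per voxel
    var_dose = w @ (d - mean_dose) ** 2  # variance per voxel
    return mean_dose, np.sqrt(var_dose)

# Hypothetical example: 5 scenarios x 3 voxels of a target volume (doses in Gy)
doses = np.array([[60.1, 59.8, 60.3],
                  [58.9, 60.0, 59.5],
                  [60.4, 60.2, 60.1],
                  [59.2, 59.9, 60.0],
                  [60.0, 60.1, 59.8]])
mean_dose, sd_dose = robust_dose_statistics(doses)
print("expected dose per voxel:", mean_dose.round(2))
print("std. dev. per voxel:    ", sd_dose.round(2))
```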
Abstract:
Radiation therapy for patients with intact cervical cancer is frequently delivered using primary external beam radiation therapy (EBRT) followed by two fractions of intracavitary brachytherapy (ICBT). Although the tumor is the primary radiation target, controlling microscopic disease in the lymph nodes is just as critical to patient treatment outcome. In patients in whom gross lymphadenopathy is discovered, an extra EBRT boost course is delivered between the two ICBT fractions. Since the nodal boost is an addendum to primary EBRT and ICBT, prescription and delivery must take previously delivered dose into account. This project aims to address the major issues of this complex process in order to improve treatment accuracy while increasing dose sparing of the surrounding normal tissues. Because external beam boosts to involved lymph nodes are given prior to the completion of ICBT, assumptions must be made about the dose to positive lymph nodes from future implants. The first aim of this project was to quantify differences in nodal dose contribution between independent ICBT fractions. We retrospectively evaluated differences in the ICBT dose contribution to positive pelvic nodes for ten patients who had previously received an external beam nodal boost. Our results indicate that the mean dose to the pelvic nodes differed by up to 1.9 Gy between independent ICBT fractions. The second aim was to develop and validate a volumetric method for summing dose to the normal tissues during prescription of the nodal boost. The traditional method of dose summation uses the maximum point dose from each modality, which represents only the worst-case scenario. However, the worst case is often an exaggeration when highly conformal therapy methods such as intensity modulated radiation therapy (IMRT) are used. We used deformable image registration algorithms to volumetrically sum dose for the bladder and rectum and created a voxel-by-voxel validation method. The mean errors in the deformable image registration results over all voxels within the bladder and rectum were 5 and 6 mm, respectively. Finally, the third aim explored the potential use of proton therapy to reduce normal tissue dose. A major physical advantage of protons over photons is that protons stop after delivering dose in the tumor. Although theoretically superior to photons, proton beams are more sensitive to uncertainties caused by interfractional anatomical variations, which must be accounted for during treatment planning to ensure complete target coverage. We have demonstrated a systematic approach to determine population-based anatomical margin requirements for proton therapy. The observed optimal treatment angles for common iliac nodes were 90° (left lateral) and 180° (posterior-anterior [PA]) with additional 0.8 cm and 0.9 cm margins, respectively. For external iliac nodes, lateral and PA beams required additional 0.4 cm and 0.9 cm margins, respectively. Through this project, we have provided radiation oncologists with additional information about potential differences in nodal dose between independent ICBT insertions and about the volumetric total dose distribution in the bladder and rectum. We have also determined the margins needed for safe delivery of proton therapy when delivering nodal boosts to patients with cervical cancer.
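The contrast drawn above between maximum-point-dose summation and volumetric summation can be made concrete with a small sketch (hypothetical Python, assuming the ICBT dose has already been deformed onto the EBRT anatomy; the dose values are invented for illustration):

```python
import numpy as np

def max_point_sum(dose_ebrt, dose_icbt):
    """Traditional summation: add the maximum point dose of each modality.
    Tends to overestimate (worst case) when the hot spots do not coincide."""
    return dose_ebrt.max() + dose_icbt.max()

def voxelwise_max(dose_ebrt, dose_icbt_deformed):
    """Voxel-by-voxel summation, assuming the ICBT dose has already been
    mapped onto the EBRT anatomy by deformable image registration."""
    total = dose_ebrt + dose_icbt_deformed
    return total.max()

# Hypothetical 1D 'organ' whose EBRT and ICBT hot spots sit at different voxels (Gy)
dose_ebrt = np.array([40.0, 45.0, 50.0, 42.0, 38.0])
dose_icbt = np.array([12.0, 10.0,  8.0, 15.0, 20.0])

print("sum of per-modality maxima:  ", max_point_sum(dose_ebrt, dose_icbt))   # 50 + 20 = 70 Gy
print("max of voxel-wise summed dose:", voxelwise_max(dose_ebrt, dose_icbt))  # 58 Gy
```

Because the EBRT and ICBT hot spots rarely coincide, the voxel-wise maximum of the summed dose (58 Gy here) is lower than the sum of the per-modality maxima (70 Gy), which is the sense in which the traditional method represents a worst case.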
Abstract:
Maximizing data quality may be especially difficult in trauma-related clinical research. Strategies are needed to improve data quality and assess the impact of data quality on clinical predictive models. This study had two objectives. The first was to compare missing data between two multi-center trauma transfusion studies: a retrospective study (RS) using medical chart data with minimal data quality review and the PRospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study with standardized quality assurance. The second objective was to assess the impact of missing data on clinical prediction algorithms by evaluating blood transfusion prediction models using PROMMTT data. RS (2005-06) and PROMMTT (2009-10) investigated trauma patients receiving ≥ 1 unit of red blood cells (RBC) at ten Level I trauma centers. Missing data were compared for 33 variables collected in both studies using mixed effects logistic regression (including random intercepts for study site). Massive transfusion (MT) patients received ≥ 10 RBC units within 24 h of admission. Correct classification percentages for three MT prediction models were evaluated using complete case analysis and multiple imputation based on the multivariate normal distribution. A sensitivity analysis for missing data was conducted to estimate the upper and lower bounds of correct classification using assumptions about missing data under best and worst case scenarios. Most variables (17/33 = 52%) had <1% missing data in RS and PROMMTT. Of the remaining variables, 50% demonstrated less missingness in PROMMTT, 25% had less missingness in RS, and 25% were similar between studies. Missing percentages for MT prediction variables in PROMMTT ranged from 2.2% (heart rate) to 45% (respiratory rate). For variables missing >1%, study site was associated with missingness (all p ≤ 0.021). Survival time predicted missingness for 50% of RS and 60% of PROMMTT variables. Complete case proportions for the MT models ranged from 41% to 88%. Complete case analysis and multiple imputation demonstrated similar correct classification results. Sensitivity analysis upper-lower bound ranges for the three MT models were 59-63%, 36-46%, and 46-58%. Prospective collection of ten-fold more variables with data quality assurance reduced overall missing data. Study site and patient survival were associated with missingness, suggesting that data were not missing completely at random and that complete case analysis may lead to biased results. Evaluating clinical prediction model accuracy may be misleading in the presence of missing data, especially with many predictor variables. The proposed sensitivity analysis estimating correct classification under upper (best case scenario) and lower (worst case scenario) bounds may be more informative than multiple imputation, which provided results similar to complete case analysis.
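The best-case/worst-case sensitivity analysis described above can be sketched in a few lines: patients whose prediction is unavailable because of missing predictors are counted as all correctly classified (upper bound) or all incorrectly classified (lower bound). The sketch below is one simple way to compute such bounds and uses invented data; the study's exact handling of partially missing predictors may differ.

```python
import numpy as np

def classification_bounds(y_true, y_pred):
    """Upper and lower bounds on correct classification under missing data.

    y_pred holds the model's prediction where all predictors were observed and
    np.nan where no prediction could be made because of missing data.
    Best case: every unpredictable patient would have been classified correctly.
    Worst case: every unpredictable patient would have been classified wrongly.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    observed = ~np.isnan(y_pred)

    correct_observed = np.sum(y_true[observed] == y_pred[observed])
    n_missing = np.sum(~observed)
    n_total = y_true.size

    lower = correct_observed / n_total                # worst case: all missing wrong
    upper = (correct_observed + n_missing) / n_total  # best case: all missing correct
    return float(lower), float(upper)

# Hypothetical example: 10 patients, 3 with missing predictions
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, np.nan, 1, 0, np.nan, 0, 0, np.nan, 0]
print(classification_bounds(y_true, y_pred))  # (0.6, 0.9)
```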
Abstract:
The impact of global climate change on coral reefs is expected to be most profound at the sea surface, where the fertilization and embryonic development of broadcast-spawning corals take place. We examined the effect of increased temperature and elevated CO2 levels on the in vitro fertilization success and initial embryonic development of broadcast-spawning corals using a single male:female cross of three different species from mid- and high-latitude locations: Lyudao, Taiwan (22° N) and Kochi, Japan (32° N). Eggs were fertilized under ambient conditions (27 °C and 500 µatm CO2) and under conditions predicted for 2100 (IPCC worst-case scenario, 31 °C and 1000 µatm CO2). Fertilization success, abnormal development and early developmental success were determined for each sample. Increased temperature had a more profound influence than elevated CO2. In most cases, near-future warming caused a significant drop in early developmental success as a result of decreased fertilization success and/or increased abnormal development. The embryonic development of the male:female cross of A. hyacinthus from the high-latitude location was more sensitive to the increased temperature (+4 °C) than that of the male:female cross from the mid-latitude location. The response to the elevated CO2 level was small and highly variable, ranging from positive to negative responses. These results suggest that global warming is a more significant and universal stressor than ocean acidification for the early embryonic development of corals from mid- and high-latitude locations.
Abstract:
Anthropogenically modulated reductions in pH, termed ocean acidification, could pose a major threat to the physiological performance, stocks, and biodiversity of calcifiers and may devalue their ecosystem services. Recent debate has focussed on the need to develop approaches to arrest the potential negative impacts of ocean acidification on ecosystems dominated by calcareous organisms. In this study, we demonstrate the role of a discrete (i.e. diffusion) boundary layer (DBL), formed at the surface of some calcifying species under slow flows, in buffering them from the corrosive effects of low pH seawater. The coralline macroalga Arthrocardia corymbosa was grown in a multifactorial experiment with two mean pH levels (8.05, 'ambient', and 7.65, a worst-case 'ocean acidification' scenario projected for 2100), each with two levels of seawater flow (fast and slow, i.e. DBL thin or thick). Coralline algae grown under slow flows with thick DBLs (i.e. unstirred, with regular replenishment of seawater to their surface) maintained net growth and calcification at pH 7.65, whereas those in higher flows with thin DBLs showed net dissolution. Growth under ambient seawater pH (8.05) was not significantly different between thin and thick DBL treatments. No other measured diagnostic (recruit sizes and numbers, photosynthetic metrics, %C, %N, %MgCO3) responded to the effects of reduced seawater pH. Thus, flow conditions that promote the formation of thick DBLs may enhance the subsistence of calcifiers by creating localised hydrodynamic conditions in which metabolic activity ameliorates the negative impacts of ocean acidification.
Abstract:
The sustained absorption of anthropogenically released atmospheric CO2 by the oceans is modifying seawater carbonate chemistry, a process termed ocean acidification (OA). By the year 2100, the worst case scenario is a decline in the average oceanic surface seawater pH by 0.3 units to 7.75. The changing seawater carbonate chemistry is predicted to negatively affect many marine species, particularly calcifying organisms such as coralline algae, while species such as diatoms and fleshy seaweed are predicted to be little affected or may even benefit from OA. It has been hypothesized in previous work that the direct negative effects imposed on coralline algae, and the direct positive effects on fleshy seaweeds and diatoms under a future high CO2 ocean could result in a reduced ability of corallines to compete with diatoms and fleshy seaweed for space in the future. In a 6-week laboratory experiment, we examined the effect of pH 7.60 (pH predicted to occur due to ocean acidification just beyond the year 2100) compared to pH 8.05 (present day) on the lateral growth rates of an early successional, cold-temperate species assemblage dominated by crustose coralline algae and benthic diatoms. Crustose coralline algae and benthic diatoms maintained positive growth rates in both pH treatments. The growth rates of coralline algae were three times lower at pH 7.60, and a non-significant decline in diatom growth meant that proportions of the two functional groups remained similar over the course of the experiment. Our results do not support our hypothesis that benthic diatoms will outcompete crustose coralline algae under future pH conditions. However, while crustose coralline algae were able to maintain their presence in this benthic rocky reef species assemblage, the reduced growth rates suggest that they will be less capable of recolonizing after disturbance events, which could result in reduced coralline cover under OA conditions.
Abstract:
The purpose of this study is to determine the critical wear levels of the contact wire of the catenary on metropolitan lines. The study focuses on the zones of the contact wire where localised wear occurs, normally associated with the appearance of electric arcs. To this end, a finite element model has been developed to study the dynamics of pantograph-catenary interaction. The model includes a zone of localised wear and a singularity in the contact wire in order to simulate the worst-case scenario from the point of view of stresses. In order to consider the different stages of the wire wear process, different depths and widths of the localised wear zone were defined. The dynamic simulations performed for each stage of wear allow the minimum resistant cross-sectional area of the contact wire, at which stresses exceed the allowable stress, to be determined. The maximum tensile stress reached in the contact wire shows a clear sensitivity to the size of the local wear zone, defined by its width and depth. In this way, if the wear measurements taken with an overhead line recording vehicle are analysed, it will be possible to calculate the potential breakage risk of the wire. A strong dependence on the tensile force applied to the contact wire has also been observed. These results will allow priorities to be set for replacing the most critical sections of wire, thereby making maintenance much more efficient. The results obtained show that the wire replacement criteria currently applied have turned out to be appropriate, although in some wear scenarios these criteria could be adjusted further, thereby prolonging the life cycle of the contact wire.
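The breakage criterion underlying the study rests on a simple relation: as localised wear removes material, the residual cross-section shrinks and the tensile stress for a given wire tension rises until it exceeds the allowable stress. The static part of that relation is sketched below with invented figures (tension, nominal section and allowable stress are placeholders, not values from the study, and the dynamic stress contribution from pantograph passage, which the finite element model captures, is ignored):

```python
def remaining_section_stress(tension_n, nominal_area_mm2, worn_area_mm2):
    """Mean tensile stress (MPa) in the residual cross-section of a worn contact wire.
    Purely static estimate; the dynamic contribution from pantograph passage is
    what the finite element simulations in the study quantify."""
    residual_area = nominal_area_mm2 - worn_area_mm2
    return tension_n / residual_area  # N/mm^2 == MPa

# Hypothetical figures (not taken from the study)
TENSION_N = 15_000           # mechanical tension of the contact wire
NOMINAL_AREA_MM2 = 107.0     # nominal cross-section of a new wire
ALLOWABLE_STRESS_MPA = 350.0 # allowable stress for the wire material

for worn in (0.0, 25.0, 50.0, 70.0):
    stress = remaining_section_stress(TENSION_N, NOMINAL_AREA_MM2, worn)
    flag = "REPLACE" if stress > ALLOWABLE_STRESS_MPA else "ok"
    print(f"worn area {worn:5.1f} mm^2 -> {stress:6.1f} MPa  [{flag}]")
```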
Abstract:
Modern FPGAs featuring Dynamic and Partial Reconfiguration (DPR) allow the implementation of complex, yet flexible, hardware systems. Combining this flexibility with evolvable hardware techniques, truly adaptive systems, able to reconfigure themselves according to environmental changes, can be envisaged. In this paper, a highly regular and modular architecture combined with a fast reconfiguration mechanism is proposed, allowing the introduction of dynamic and partial reconfiguration into the evolvable hardware loop. Results and a use case show that, following this approach, evolvable processing IP cores can be built, providing intensive data processing capabilities and reducing data and delay overheads with respect to previous proposals. Results also show that, in the worst case (maximum mutation rate), the average reconfiguration time is five times lower than the evaluation time.
Abstract:
Systems relying on fixed hardware components with a static level of parallelism can suffer from an underuse of logical resources, since they have to be designed for the worst-case scenario. This problem is especially important in video applications due to the emergence of new flexible standards, such as Scalable Video Coding (SVC), which offer several levels of scalability. In this paper, Dynamic and Partial Reconfiguration (DPR) of modern FPGAs is used to achieve run-time variable parallelism, by using scalable architectures whose size can be adapted at run-time. Based on this proposal, a scalable Deblocking Filter (DF) core, compliant with the H.264/AVC and SVC standards, has been designed. This scalable DF allows run-time addition or removal of computational units working in parallel. Scalability is offered together with a scalable parallelization strategy at the macroblock (MB) level, such that when the size of the architecture changes, the MB filtering order is modified accordingly.
Abstract:
We study the problem of efficient, scalable set-sharing analysis of logic programs. We use the idea of representing sharing information as a pair of abstract substitutions, one of which is a worst-case sharing representation called a clique set, which was previously proposed for the case of inferring pair-sharing. We use the clique-set representation for (1) inferring actual set-sharing information, and (2) analysis within a top-down framework. In particular, we define the new abstract functions required by standard top-down analyses, both for sharing alone and for the case of including freeness in addition to sharing. We use cliques both as an alternative representation and as a widening, defining several widening operators. Our experimental evaluation supports the conclusion that, for inferring set-sharing, as was the case for inferring pair-sharing, precision losses are limited, while useful efficiency gains are obtained. We also derive useful conclusions regarding the interactions between thresholds, precision, efficiency and the cost of widening. At the limit, the clique-set representation allowed analyzing some programs that exceeded memory capacity when using classical sharing representations.
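To give a feel for the clique-set idea, the sketch below shows one simplistic widening in Python: when the explicit set-sharing component grows beyond a threshold, it is collapsed into a single clique, i.e. a compact worst-case representation standing for all non-empty subsets of the variables involved. This is only an illustration under that reading of the abstract; the paper defines several widening operators and the full abstract functions, none of which are reproduced here.

```python
def widen_sharing(sharing_groups, threshold=8):
    """One simplistic clique widening: if the explicit set-sharing component
    grows beyond `threshold` groups, collapse it into a single clique over all
    variables involved. A clique over a variable set C stands for every
    non-empty subset of C, so the result over-approximates the original sharing.

    Returns a pair (cliques, sharing) mirroring the clique/sharing split
    described in the abstract."""
    sharing = {frozenset(g) for g in sharing_groups}
    if len(sharing) <= threshold:
        return set(), sharing             # precise representation kept
    clique = frozenset().union(*sharing)  # all variables occurring in any group
    return {clique}, set()                # worst case: any subset may share

# Hypothetical sharing groups over program variables X, Y, Z, W
groups = [{"X"}, {"Y"}, {"X", "Y"}, {"X", "Z"}, {"Y", "Z"}, {"X", "Y", "Z"}, {"W"}]
print(widen_sharing(groups, threshold=4))
# -> one clique over {X, Y, Z, W} and an empty explicit sharing component
```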