798 results for "Multi-scale hierarchical framework"
Abstract:
Currently, photon Monte Carlo treatment planning (MCTP) for a patient stored in the patient database of a treatment planning system (TPS) can usually only be performed via a cumbersome multi-step procedure requiring many user interactions; automation is therefore needed for use in clinical routine. In addition, because of the long computing times in MCTP, optimization of the MC calculations is essential. For these purposes a new graphical user interface (GUI)-based photon MC environment has been developed, resulting in a very flexible framework in which appropriate MC transport methods are assigned to different geometric regions while still benefiting from the features included in the TPS. To provide a flexible MC environment, the MC particle transport has been divided into three parts: the source, the beam modifiers and the patient. The source part includes the phase-space source, source models and full MC transport through the treatment head. The beam-modifier part consists of one module per beam modifier. To simulate the radiation transport through each individual beam modifier, one of three full MC transport codes can be selected independently; additionally, for each beam modifier a simple or an exact geometry can be chosen. Thereby, different complexity levels of radiation transport are applied during the simulation. For the patient dose calculation, two different MC codes are available. The developed MC GUI is started from a dedicated plug-in in Eclipse that provides all necessary information via DICOM streams. The implementation separates the MC transport from the geometry, and the modules pass particles in memory; hence, no files are used as the interface. The implementation is realized for the 6 and 15 MV beams of a Varian Clinac 2300 C/D. Several applications demonstrate the usefulness of the framework. Apart from applications dealing with the beam modifiers, two patient cases are shown.
For these cases, comparisons are performed between MC-calculated dose distributions and those calculated by a pencil beam or the AAA algorithm. Interfacing this flexible and efficient MC environment with Eclipse allows widespread use for all kinds of investigations, from timing and benchmarking studies to clinical patient studies. Additionally, further modules can be added, keeping the system highly flexible and efficient.
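The in-memory module chaining described above (source, one module per beam modifier, patient, with no intermediate files) can be sketched as a simple pipeline. The particle fields, the attenuator stage and all names below are illustrative stand-ins, not the framework's actual interfaces:

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical minimal particle record; a real MC code also tracks
# direction, charge, statistical history, etc.
@dataclass
class Particle:
    x: float
    y: float
    z: float
    energy_mev: float
    weight: float = 1.0

# Each stage (source, one module per beam modifier, patient dose engine)
# consumes and produces particles in memory -- no phase-space files.
Stage = Callable[[List[Particle]], List[Particle]]

def run_pipeline(stages: List[Stage], particles: List[Particle]) -> List[Particle]:
    for stage in stages:
        particles = stage(particles)
    return particles

# Toy "beam modifier": attenuates particle weight and drops particles
# that fall below a cutoff (a stand-in for real transport physics).
def make_attenuator(factor: float, cutoff: float = 1e-3) -> Stage:
    def stage(parts: List[Particle]) -> List[Particle]:
        out = []
        for p in parts:
            p.weight *= factor
            if p.weight >= cutoff:
                out.append(p)
        return out
    return stage
```

Swapping a simple geometry for an exact one, or one transport code for another, then amounts to substituting a different `Stage` at the same position in the chain.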
Abstract:
In this dissertation, the problem of creating effective control algorithms for large-scale Adaptive Optics (AO) systems for the new generation of giant optical telescopes is addressed. The effectiveness of AO control algorithms is evaluated in several respects, such as computational complexity, compensation error rejection and robustness, i.e., reasonable insensitivity to system imperfections. The results of this research are summarized as follows: 1. Robustness study of the Sparse Minimum Variance Pseudo Open Loop Controller (POLC) for multi-conjugate adaptive optics (MCAO). An AO system model that accounts for various system errors has been developed and applied to check the stability and performance of the POLC algorithm, one of the most promising approaches for future AO systems control. Numerous simulations show that, despite the initial assumption that exact system knowledge is necessary for the POLC algorithm to work, it is highly robust against various system errors. 2. Predictive Kalman Filter (KF) and Minimum Variance (MV) control algorithms for MCAO. The limiting performance of the non-dynamic MV and the dynamic KF-based phase estimation algorithms for MCAO has been evaluated via Monte Carlo simulations. The validity of a simple near-Markov autoregressive phase-dynamics model has been tested, and its ability to predict the turbulence phase has been demonstrated for both single- and multi-conjugate AO. It has also been shown that the more complicated KF approach yields no performance improvement over the much simpler MV algorithm in the case of MCAO. 3. Sparse predictive Minimum Variance control algorithm for MCAO. A temporal prediction stage has been added to the non-dynamic MV control algorithm in such a way that no additional computational burden is introduced.
Simulations confirm that phase prediction makes it possible to significantly reduce the system sampling rate, and thus the overall computational complexity, while keeping the system stable and effectively compensating for the measurement and control latencies.
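The key point of item 3 (a prediction stage at no extra cost) can be illustrated with a toy static MV reconstructor under an autoregressive near-Markov phase model: one-step prediction folds into the existing reconstructor as a scalar rescaling. All matrices and the coefficient `a` below are illustrative stand-ins, not the dissertation's actual system:

```python
import numpy as np

n = 8                      # number of phase modes (toy size)
a = 0.95                   # AR(1) / near-Markov coefficient (assumed known)
C_phi = np.eye(n)          # phase covariance (identity for the sketch)
H = np.eye(n)              # measurement matrix (identity for the sketch)
C_noise = 0.1 * np.eye(n)  # measurement noise covariance

# Static MV estimator: phi_hat = C_phi H^T (H C_phi H^T + C_noise)^-1 y
R = C_phi @ H.T @ np.linalg.inv(H @ C_phi @ H.T + C_noise)

# Under phi_{t+1} = a * phi_t + noise, the one-step-ahead predictive
# reconstructor is just a rescaling of R, so applying prediction adds
# no computational burden at runtime:
R_pred = a * R
```

With these toy identity matrices, `R` reduces to `I / 1.1`, and the predictive reconstructor is the same matrix scaled by `a`; the sparse structure of the real reconstructor is likewise preserved under such a rescaling.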
Abstract:
This dissertation investigates high-performance cooperative localization in wireless environments based on multi-node time-of-arrival (TOA) and direction-of-arrival (DOA) estimation in line-of-sight (LOS) and non-LOS (NLOS) scenarios. Two categories of nodes are assumed: base nodes (BNs) and target nodes (TNs). BNs are equipped with antenna arrays and are capable of estimating TOA (range) and DOA (angle). TNs are equipped with omnidirectional antennas and communicate with BNs so that the BNs can localize them; thus, the proposed localization is maintained through BN-TN cooperation. First, a LOS localization method is proposed, based on semi-distributed multi-node TOA-DOA fusion. The proposed technique is applicable to mobile ad-hoc networks (MANETs), and LOS is assumed to be available between BNs and TNs. One BN is selected as the reference BN, and the other nodes are localized in the coordinates of the reference BN. Each BN can localize the TNs located in its coverage area independently; in addition, a TN might be localized by multiple BNs. High-performance localization is attainable via multi-node TOA-DOA fusion. The complexity of the semi-distributed multi-node TOA-DOA fusion is low because the total computational load is distributed across all BNs. To evaluate the localization accuracy of the proposed method, we compare it with global positioning system (GPS)-aided TOA (DOA) fusion, which is also applicable to MANETs. The comparison criterion is the localization circular error probability (CEP). The results confirm that the proposed method is suitable for moderate-scale MANETs, while GPS-aided TOA fusion is suitable for large-scale MANETs. Usually, the TOA and DOA of TNs are periodically estimated by BNs; thus, a Kalman filter (KF) is integrated with multi-node TOA-DOA fusion to further improve its performance.
The integration of KF and multi-node TOA-DOA fusion is compared with an extended KF (EKF) applied to the multiple TOA-DOA estimations made by multiple BNs. The comparison shows that the proposed integration is stable (no divergence takes place) and that its accuracy is only slightly lower than that of the EKF when the EKF converges. However, the EKF may diverge while the integration of KF and multi-node TOA-DOA fusion does not; thus, the reliability of the proposed method is higher. In addition, its computational complexity is much lower than that of the EKF. In wireless environments, the LOS might be obstructed, which degrades localization reliability. The antenna array installed at each BN is therefore used to allow each BN to identify NLOS scenarios independently. Here, a single BN measures the phase difference across two antenna elements using a synchronized bi-receiver system and maps it to the wireless channel's K-factor; the larger K is, the more likely the channel is LOS. The K-factor is then used to identify NLOS scenarios. The performance of this system is characterized in terms of the probability of LOS and NLOS identification, and the latency of the method is small. Finally, a multi-node NLOS identification and localization method is proposed to improve localization reliability. In this case, multiple BNs engage in NLOS identification, shared-reflector determination and localization, and NLOS TN localization. In NLOS scenarios with three or more shared reflectors, the reflectors are localized via DOA fusion, and a TN is then localized via TOA fusion based on the localization of the shared reflectors.
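The core geometric step (one BN turning a range and an angle into a position estimate) and a simple multi-node fusion step might be sketched as follows. The inverse-variance weighting shown here is a common choice and stands in for the dissertation's actual fusion rule; all names are illustrative:

```python
import math

def locate(bn_xy, rng_m, doa_rad):
    """One BN's estimate of a TN position from its range (TOA) and
    angle (DOA), in the reference BN's coordinate frame."""
    return (bn_xy[0] + rng_m * math.cos(doa_rad),
            bn_xy[1] + rng_m * math.sin(doa_rad))

def fuse(estimates, variances):
    """Combine per-BN position estimates of the same TN, weighting each
    estimate by the inverse of its (assumed known) error variance."""
    w = [1.0 / v for v in variances]
    s = sum(w)
    x = sum(wi * e[0] for wi, e in zip(w, estimates)) / s
    y = sum(wi * e[1] for wi, e in zip(w, estimates)) / s
    return (x, y)
```

Because each BN computes its own `locate` independently and only the lightweight `fuse` step combines results, the computational load stays distributed across BNs, which is the semi-distributed property described above.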
Abstract:
This dissertation presents competitive control methodologies for small-scale power systems (SSPS). An SSPS is a collection of sources and loads sharing a common network which can be isolated during terrestrial disturbances. Micro-grids, naval ship electric power systems (NSEPS), aircraft power systems and telecommunication power systems are typical examples of SSPS. The analysis and development of control systems for an SSPS is complicated by the lack of a defined slack bus; in addition, a change of a load or source will influence the real-time parameters of the system. The control system should therefore provide the required flexibility to ensure operation as a single aggregated system. In most SSPS the sources and loads must be equipped with power electronic interfaces, which can be modeled as dynamic controllable quantities. The mathematical formulation of the micro-grid is carried out with the help of game theory, optimal control and the fundamental theory of electrical power systems; the micro-grid can then be viewed as a dynamical multi-objective optimization problem with nonlinear objectives and variables. Detailed analysis was carried out of optimal solutions with regard to start-up transient modeling, bus selection modeling and the level of communication within the micro-grid, and in each approach a detailed mathematical model was formed to observe the system response. A differential game-theoretic approach was also used for the modeling and optimization of start-up transients. The start-up transient controller was implemented with open-loop, PI and feedback control methodologies, and a hardware implementation was carried out to validate the theoretical results. The proposed game-theoretic controller shows higher performance than the traditional PI controller during start-up. In addition, the optimal transient surface is necessary when implementing the feedback controller for the start-up transient.
Further, the experimental results are in agreement with the theoretical simulations. Bus selection and team communication were modeled with discrete and continuous game-theory models. Although the players have multiple choices, the controller is capable of choosing the optimum bus, and the team communication structures are able to optimize the players' Nash equilibrium point. All mathematical models are based on local information of the load or source; as a result, these models are the keys to developing accurate distributed controllers.
Abstract:
Many methodologies dealing with the prediction or simulation of soft tissue deformations on medical image data require preprocessing of the data in order to produce a different shape representation that complies with standard methodologies, such as mass-spring networks or the finite element method (FEM). On the other hand, methodologies working directly in image space normally do not take into account the mechanical behavior of tissues and tend to lack the physical foundations driving soft tissue deformations. This chapter presents a method to simulate soft tissue deformations based on coupled concepts from image analysis and mechanics theory. The proposed methodology is based on a robust stochastic approach that takes into account material properties retrieved directly from the image, concepts from continuum mechanics and FEM. The optimization framework is solved within a hierarchical Markov random field (HMRF), which is implemented on the graphics processing unit (GPU).
Abstract:
Since 2008 the German Federal Ministry of Education and Research (BMBF) has been funding ten transdisciplinary research projects within the thematic focus 'From Knowledge to Action - New Paths towards Sustainable Consumption'. A particular challenge faced by the programme is to build a bridge between individual activities and ecological and social framework conditions. Environmental psychologists are involved in half of the ten transdisciplinary projects. The symposium gives an insight into the new thematic focus and the variety of psychological contributions. The presentations will focus on the specific competence of psychology within the broader research focus of the transdisciplinary projects. An invited discussant will reflect on the role of psychology within the field of sustainable consumption and on the challenges of transdisciplinary research in general.
Abstract:
Balancing the frequently conflicting priorities of conservation and economic development poses a challenge to management of the Swiss Alps Jungfrau-Aletsch World Heritage Site (WHS). This is a complex societal problem that calls for a knowledge-based solution. This in turn requires a transdisciplinary research framework in which problems are defined and solved cooperatively by actors from the scientific community and the life-world. In this article we re-examine studies carried out in the region of the Swiss Alps Jungfrau-Aletsch WHS, covering three key issues prevalent in transdisciplinary settings: integration of stakeholders into participatory processes; perceptions and positions; and negotiability and implementation. In the case of the Swiss Alps Jungfrau-Aletsch WHS the transdisciplinary setting created a situation of mutual learning among stakeholders from different levels and backgrounds. However, the studies showed that the benefits of such processes of mutual learning are continuously at risk of being diminished by the power play inherent in participatory approaches.
Abstract:
We present in this paper several contributions on collision detection optimization centered on hardware performance. We focus on the broad phase, the first step of the collision detection process, and propose three new ways of parallelizing the well-known Sweep and Prune algorithm. We first developed a multi-core model that takes into account the number of available cores. The multi-core architecture enables us to distribute the geometric computations using multi-threading; critical writing sections and thread idling have been minimized by introducing new data structures for each thread. Programming with directives, as in OpenMP, appears to be a good compromise for code portability. We then propose a new GPU-based algorithm, also based on Sweep and Prune, that has been adapted to multi-GPU architectures. This technique uses a spatial subdivision method to distribute computations among GPUs. Results show that a significant speed-up can be obtained by moving from one to four GPUs in a large-scale environment.
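The serial broad phase that the paper parallelizes can be sketched in its single-axis form: sort interval endpoints along one axis and sweep, reporting pairs whose intervals overlap. A real implementation also tests the remaining axes and exploits frame-to-frame coherence; parallel variants split the sweep across threads or GPUs:

```python
def sweep_and_prune(boxes):
    """Single-axis Sweep and Prune broad phase.
    boxes: dict id -> (min_x, max_x). Returns the set of id pairs whose
    x-intervals overlap (candidates for the narrow phase)."""
    events = []
    for bid, (lo, hi) in boxes.items():
        events.append((lo, 0, bid))   # 0 = open; sorts before close at same x
        events.append((hi, 1, bid))   # 1 = close
    events.sort()
    active, pairs = set(), set()
    for _, kind, bid in events:
        if kind == 0:
            # every currently-open interval overlaps the one being opened
            for other in active:
                pairs.add(tuple(sorted((bid, other))))
            active.add(bid)
        else:
            active.discard(bid)
    return pairs
```

In the multi-GPU variant described above, the spatial subdivision assigns disjoint regions of the axis to different devices, each of which runs a sweep like this one over its own subset of boxes.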
Abstract:
On the basis of a multi-proxy approach and a strategy combining lacustrine and marine records along a north–south transect, data collected in the central Mediterranean within the framework of a collaborative project have led to the reconstruction of high-resolution and well-dated palaeohydrological records and to an assessment of their spatial and temporal coherency. Contrasting patterns of palaeohydrological change have been evidenced in the central Mediterranean: south (north) of around 40° N latitude, the middle part of the Holocene was characterised by lake-level maxima (minima), during an interval dated to ca. 10 300–4500 cal BP in the south and 9000–4500 cal BP in the north. Available data suggest that these contrasting palaeohydrological patterns operated throughout the Holocene, on both millennial and centennial scales. Regarding precipitation seasonality, maximum humidity in the central Mediterranean during the middle part of the Holocene was characterised by humid winters and dry summers north of ca. 40° N, and humid winters and summers south of ca. 40° N. This may explain an apparent conflict between palaeoclimatic records depending on the proxies used for reconstruction, as well as the synchronous expansion of tree taxa with contrasting climatic requirements. In addition, south of ca. 40° N, the first millennium of the Holocene was characterised by very dry climatic conditions not only in the eastern but also in the central and western Mediterranean zones, as reflected by low lake levels and delayed reforestation. These results suggest that, in addition to the influence of the Nile discharge reinforced by the African monsoon, the deposition of Sapropel 1 was favoured (1) by an increase in winter precipitation in the northern Mediterranean borderlands, and (2) by an increase in winter and summer precipitation in the southern Mediterranean area.
The climate reversal following the Holocene climate optimum appears to have been punctuated by two major climate changes around 7500 and 4500 cal BP. In the central Mediterranean, the Holocene palaeohydrological changes developed in response to a combination of orbital, ice-sheet and solar forcing factors. The maximum humidity interval in the south-central Mediterranean started ca. 10 300 cal BP, in correlation with the decline (1) of the possible blocking effects of the North Atlantic anticyclone linked to maximum insolation, and/or (2) of the influence of the remnant ice sheets and freshwater forcing in the North Atlantic Ocean. In the north-central Mediterranean, the lake-level minimum interval began only around 9000 cal BP, when the Fennoscandian ice sheet disappeared and a prevailing positive NAO (North Atlantic Oscillation)-type circulation developed in the North Atlantic area. The major palaeohydrological oscillation around 4500–4000 cal BP may be a non-linear response to the gradual decrease in insolation, with additional key seasonal and interhemispheric changes. On a centennial scale, the successive climatic events which punctuated the entire Holocene in the central Mediterranean coincided with cooling events associated with deglacial outbursts in the North Atlantic area and decreases in solar activity during the interval 11 700–7000 cal BP, and with a possible combination of NAO-type circulation and solar forcing from ca. 7000 cal BP onwards. Thus, regarding the centennial-scale climatic oscillations, the Mediterranean Basin appears to have been strongly linked to the North Atlantic area and affected by solar activity over the entire Holocene.
In addition to model experiments, a better understanding of the forcing factors and past atmospheric circulation patterns behind the Holocene palaeohydrological changes in the Mediterranean area will require further investigation to establish additional high-resolution and well-dated records in selected locations around the Mediterranean Basin and in adjacent regions. Special attention should be paid to greater precision in the reconstruction, on millennial and centennial timescales, of changes in the latitudinal location of the limit between the northern and southern palaeohydrological Mediterranean sectors, depending on (1) the intensity and/or characteristics of climatic periods/oscillations (e.g. the Holocene thermal maximum versus the Neoglacial, as well as, for instance, the 8.2 ka event versus the 4 ka event or the Little Ice Age); and (2) varying geographical conditions from the western to the eastern Mediterranean areas (longitudinal gradients). Finally, on the basis of projects using strategically located study sites, there is a need to explore the possible influences of general atmospheric circulation patterns other than the NAO, such as the East Atlantic–West Russian or North Sea–Caspian patterns, in explaining the apparent complexity of palaeoclimatic (palaeohydrological) Holocene records from the Mediterranean area.
Abstract:
Past global climate changes had strong regional expression. To elucidate their spatio-temporal pattern, we reconstructed past temperatures for seven continental-scale regions during the past one to two millennia. The most coherent feature in nearly all of the regional temperature reconstructions is a long-term cooling trend, which ended late in the nineteenth century. At multi-decadal to centennial scales, temperature variability shows distinctly different regional patterns, with more similarity within each hemisphere than between them. There were no globally synchronous multi-decadal warm or cold intervals that define a worldwide Medieval Warm Period or Little Ice Age, but all reconstructions show generally cold conditions between ad 1580 and 1880, punctuated in some regions by warm decades during the eighteenth century. The transition to these colder conditions occurred earlier in the Arctic, Europe and Asia than in North America or the Southern Hemisphere regions. Recent warming reversed the long-term cooling; during the period ad 1971–2000, the area-weighted average reconstructed temperature was higher than any other time in nearly 1,400 years.
Abstract:
The early phase of psychotherapy has been regarded as a sensitive period in the unfolding of psychotherapy leading to positive outcomes. However, there is disagreement about the degree to which early (especially relationship-related) session experiences predict outcome over and above initial levels of distress and early response to treatment. The goal of the present study was to simultaneously examine outcome at post-treatment as a function of (a) intake symptom and interpersonal distress as well as early change in well-being and symptoms, (b) the patient's early session experiences, (c) the therapist's early session experiences/interventions, and (d) their interactions. The data of 430 psychotherapy completers treated by 151 therapists were analyzed using hierarchical linear models. Results indicate that early positive intra- and interpersonal session experiences, as reported by patients and therapists after the sessions, explained 58% of the variance of a composite outcome measure, taking intake distress and early response into account. All predictors (other than problem-activating therapist interventions) contributed to later treatment outcomes when entered as single predictors. However, the multi-predictor analyses indicated that interpersonal distress at intake as well as the early interpersonal session experiences of patients and therapists remained robust predictors of outcome. The findings underscore that, early in therapy, therapists (and their supervisors) need to understand and monitor multiple interconnected components simultaneously.
Abstract:
Modern cloud-based applications and infrastructures may include resources and services (components) from multiple cloud providers; they are heterogeneous by nature and require adjustment, composition and integration. Specific application requirements are difficult to meet with the current static, predefined cloud integration architectures and models. In this paper, we propose the Intercloud Operations and Management Framework (ICOMF) as part of the more general Intercloud Architecture Framework (ICAF), which provides a basis for building and operating a dynamically manageable multi-provider cloud ecosystem. The proposed ICOMF enables dynamic resource composition and decomposition, with a main focus on translating business models and objectives into ensembles of cloud services. Our model is user-centric and focuses on the specific application execution requirements by leveraging incubating virtualization techniques. From a cloud provider perspective, the ecosystem provides more insight into how best to customize the offerings of virtualized resources.
Abstract:
Cost-efficient operation while satisfying the performance and availability guarantees in Service Level Agreements (SLAs) is a challenge for cloud computing, as these are potentially conflicting objectives. We present a framework for SLA management based on multi-objective optimization. The framework features a forecasting model for determining the best virtual machine-to-host allocation given the need to minimize SLA violations, energy consumption and resource wasting. A comprehensive SLA management solution is proposed that uses event processing for monitoring and enables dynamic provisioning of virtual machines onto the physical infrastructure. We validated our implementation against several standard heuristics and were able to show that our approach performs significantly better.
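A minimal sketch of scoring candidate allocations against the three stated objectives, assuming a simple additive weighted cost (the paper's actual model is forecast-based and multi-objective; the weights and field names here are invented for illustration):

```python
def allocation_cost(alloc, w_sla=1.0, w_energy=0.5, w_waste=0.25):
    """Weighted scalar cost of one candidate VM-to-host allocation.
    alloc: dict with predicted 'sla_violations', 'energy_kwh',
    'wasted_cores' for that allocation (hypothetical field names)."""
    return (w_sla * alloc["sla_violations"]
            + w_energy * alloc["energy_kwh"]
            + w_waste * alloc["wasted_cores"])

def pick_best(candidates, **weights):
    """Choose the candidate allocation with the lowest weighted cost."""
    return min(candidates, key=lambda a: allocation_cost(a, **weights))
```

Scalarizing with weights is only one way to trade off conflicting objectives; the weights encode how strongly SLA violations are penalized relative to energy use and idle capacity.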
Abstract:
In this paper, we propose a fully automatic, robust approach for segmenting the proximal femur in conventional X-ray images. Our method is based on hierarchical landmark detection by random forest regression: the detection results of 22 global landmarks are used for spatial normalization, and the detection results of 59 local landmarks serve as the image cue for the instantiation of a statistical shape model of the proximal femur. To detect landmarks at both levels, we use multi-resolution histogram of oriented gradients (HoG) features, which achieve better accuracy and robustness. The efficacy of the present method is demonstrated by experiments conducted on 150 clinical X-ray images. The method achieves an average point-to-curve error of 2.0 mm and is robust to low image contrast, noise and occlusions caused by implants.
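The multi-resolution HoG feature can be sketched as gradient-orientation histograms accumulated at several downsampled scales and concatenated. Bin count, scales and normalisation below are illustrative choices, not the paper's exact parameters (a full HoG also divides the patch into cells and blocks):

```python
import numpy as np

def hog_descriptor(img, n_bins=9):
    """Magnitude-weighted histogram of unsigned gradient orientations,
    L2-normalised (a simplified, single-cell HoG)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, np.pi), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-9)

def multires_hog(img, scales=(1, 2, 4), n_bins=9):
    """Concatenate descriptors computed at several downsampled scales."""
    feats = [hog_descriptor(img[::s, ::s], n_bins) for s in scales]
    return np.concatenate(feats)
```

Such a feature vector, extracted around a candidate location, is what a random forest regressor would map to a landmark displacement; the coarser scales add robustness to noise and low contrast at the cost of spatial precision.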
Abstract:
The north-eastern escarpment of Madagascar has been labelled a global biodiversity hotspot due to its extremely high rate of endemic species, which are heavily threatened by accelerated deforestation and landscape change. The traditional practice of shifting cultivation, or "tavy", used by the majority of land users in this area to produce subsistence rice, is commonly blamed for these threats. A wide range of stakeholders, from conservation to development agencies and from the private to the public sector, has therefore been involved in trying to find solutions to protect the remaining forest fragments and to increase agricultural production. Consequently, the provisioning, regulating and socio-cultural services of this forest-mosaic landscape are fundamentally altered, leading to trade-offs between them and consequently to new winners and losers amongst the stakeholders at different scales. However, despite a growing amount of evidence from case studies analysing local changes, the regional dynamics of the landscape and their contribution to such trade-offs remain poorly understood. This study therefore aims to use generalised landscape units as a basis for the assessment of multi-level stakeholder claims on ecosystem services, in order to inform negotiation, planning and decision making at the meso-scale. The study applies a mixed-method approach combining remote sensing, GIS and socio-economic methods to reveal current landscape dynamics, their change over time and the corresponding ecosystem service trade-offs induced by diverse stakeholder claims at the regional level. In a first step, a new regional land cover classification for three points in time (1995, 2005 and 2011) was conducted, including agricultural classes characteristic of shifting cultivation systems. Secondly, a novel GIS approach termed the "landscape mosaics approach", originally developed to assess the dynamics of shifting cultivation landscapes in Laos, was applied.
Through this approach, generalised landscape mosaics were generated, allowing for a better understanding of changes in land-use intensities rather than land cover. As a next step, we will use these landscape units as proxies to map provisioning and regulating ecosystem services throughout the region. Through overlay with other regional background data, such as accessibility and population density, and with information from a region-wide stakeholder analysis, multi-scale trade-offs between different services will be highlighted. The trade-offs observed at the regional scale will then be validated through socio-economic ground-truthing within selected sites at the local scale. We propose that such meso-scale knowledge is required by all stakeholders involved in decision making towards the sustainable development of north-eastern Madagascar.