907 results for MDA (Model Driven Architecture)
Abstract:
Fixed and wireless networks are increasingly converging towards common connectivity with IP-based core networks. Providing effective end-to-end resource and QoS management in such complex heterogeneous converged network scenarios requires unified, adaptive and scalable solutions to integrate and co-ordinate diverse QoS mechanisms of different access technologies with IP-based QoS. Policy-Based Network Management (PBNM) is one approach that could be employed to address this challenge. Hence, a policy-based framework for end-to-end QoS management in converged networks, CNQF (Converged Networks QoS Management Framework), has been proposed within our project. In this paper, the CNQF architecture, a Java implementation of its prototype and experimental validation of key elements are discussed. We then present a fuzzy-based CNQF resource management approach and study the performance of our implementation with real traffic flows on an experimental testbed. The results demonstrate the efficacy of our resource-adaptive approach for practical PBNM systems.
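The fuzzy-based resource management idea can be illustrated with a minimal sketch in Python. This is not the CNQF implementation; the membership functions, rule base and adjustment values below are illustrative assumptions showing how a measured link utilisation could be mapped to a bandwidth-adjustment decision.

# Illustrative fuzzy resource-management sketch (not the CNQF code): map link
# utilisation in [0, 1] to a bandwidth-adjustment factor via hand-picked
# triangular membership functions and three simple rules.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_adjustment(utilisation):
    # Fuzzify: degrees to which utilisation is "low", "medium" and "high".
    low    = tri(utilisation, -0.4, 0.0, 0.5)
    medium = tri(utilisation,  0.2, 0.5, 0.8)
    high   = tri(utilisation,  0.5, 1.0, 1.4)

    # Assumed rule base: low -> grant more bandwidth, medium -> hold,
    # high -> throttle. Outputs are relative adjustment factors.
    rules = [(low, +0.2), (medium, 0.0), (high, -0.3)]

    # Weighted-average defuzzification.
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else 0.0

for u in (0.2, 0.5, 0.9):
    print(f"utilisation={u:.1f} -> adjustment={fuzzy_adjustment(u):+.2f}")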
Abstract:
This paper presents the design and implementation of a measurement-based QoS and resource management framework, CNQF (Converged Networks’ QoS Management Framework). CNQF is designed to provide unified, scalable QoS control and resource management through the use of a policy-based network management paradigm. It achieves this via distributed functional entities that are deployed to co-ordinate the resources of the transport network through centralized policy-driven decisions supported by a measurement-based control architecture. We present the CNQF architecture, the implementation of the prototype and the validation of various inbuilt QoS control mechanisms using real traffic flows on a Linux-based experimental testbed.
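As a hedged illustration of the measurement-based control idea (not the CNQF prototype itself), the sketch below shows a decision point that nudges an admitted rate toward a target loss level using periodic loss measurements; the gain, target and floor values are assumptions.

# Illustrative measurement-based control loop: a monitor reports packet loss
# per interval and the controller proportionally adjusts the admitted rate.

TARGET_LOSS_PCT = 0.5        # assumed loss target
GAIN_MBPS_PER_PCT = 2.0      # assumed proportional gain

def adjust_rate(current_rate_mbps, measured_loss_pct):
    """Shrink the admitted rate when loss exceeds the target, grow it otherwise."""
    error = measured_loss_pct - TARGET_LOSS_PCT
    return max(1.0, current_rate_mbps - GAIN_MBPS_PER_PCT * error)  # 1 Mbps floor

rate = 20.0
for loss in [0.2, 1.5, 3.0, 0.4]:        # simulated per-interval loss reports
    rate = adjust_rate(rate, loss)
    print(f"loss={loss:.1f}%  ->  admitted rate={rate:.1f} Mbps")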
Abstract:
Policy-based management is considered an effective approach to address the challenges of resource management in large, complex networks. Within the IU-ATC QoS Frameworks project, a policy-based network management framework, CNQF (Converged Networks QoS Framework), is being developed, aimed at providing context-aware, end-to-end QoS control and resource management in converged next generation networks. CNQF is designed to provide homogeneous, transparent QoS control over heterogeneous access technologies by means of distributed functional entities that co-ordinate the resources of the transport network through policy-driven decisions. In this paper, we present a measurement-based evaluation of policy-driven QoS management based on the CNQF architecture, with real traffic flows on an experimental testbed. A Java-based implementation of the CNQF Resource Management Subsystem is deployed on the testbed, and the results of the experiments validate the framework operation for policy-based QoS management of real traffic flows.
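To make the policy-based decision step concrete, a minimal sketch is given below. It is not the Java Resource Management Subsystem described above; policies are reduced to condition/action pairs evaluated against per-flow measurements, and the traffic classes, thresholds and action names are invented for illustration.

# Minimal policy-decision sketch: each policy is a condition over measurements
# plus a symbolic action that would be handed to an enforcement point.

from dataclasses import dataclass
from typing import Callable, Dict, List

Measurement = Dict[str, float]

@dataclass
class Policy:
    name: str
    condition: Callable[[Measurement], bool]
    action: str

def decide(policies: List[Policy], m: Measurement) -> List[str]:
    """Return the actions of every policy whose condition matches the measurement."""
    return [p.action for p in policies if p.condition(m)]

policies = [
    Policy("protect-voice",
           lambda m: m["traffic_class"] == 1 and m["loss_pct"] > 1.0,
           "raise-scheduling-priority"),
    Policy("throttle-bulk",
           lambda m: m["traffic_class"] == 3 and m["rate_mbps"] > 8.0,
           "apply-token-bucket-8Mbps"),
]

print(decide(policies, {"traffic_class": 1, "loss_pct": 2.5, "rate_mbps": 0.2}))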
Abstract:
This paper presents a framework for context-driven policy-based QoS control and end-to-end resource management in converged next generation networks. The Converged Networks QoS Framework (CNQF) is being developed within the IU-ATC project, and comprises distributed functional entities whose instances co-ordinate the converged network infrastructure to facilitate scalable and efficient end-to-end QoS management. The CNQF design leverages aspects of TISPAN, IETF and 3GPP policy-based management architectures whilst also introducing important innovative extensions to support context-aware QoS control in converged networks. The framework architecture is presented and its functionalities and operation in specific application scenarios are described.
Abstract:
Organotypic models may provide mechanistic insight into colorectal cancer (CRC) morphology. Three-dimensional (3D) colorectal gland formation is regulated by phosphatase and tensin homologue deleted on chromosome 10 (PTEN) coupling of cell division cycle 42 (cdc42) to atypical protein kinase C (aPKC). This study investigated PTEN phosphatase-dependent and phosphatase-independent morphogenic functions in 3D models and assessed translational relevance in human studies. Isogenic PTEN-expressing or PTEN-deficient 3D colorectal cultures were used. In translational studies, apical aPKC activity readout was assessed against apical membrane (AM) orientation and gland morphology in 3D models and human CRC. We found that catalytically active or inactive PTEN constructs containing an intact C2 domain enhanced cdc42 activity, whereas mutants of the C2 domain calcium binding region 3 membrane-binding loop (M-CBR3) were ineffective. The isolated PTEN C2 domain (C2) accumulated in membrane fractions, but C2 M-CBR3 remained in cytosol. Transfection of C2 but not C2 M-CBR3 rescued defective AM orientation and 3D morphogenesis of PTEN-deficient Caco-2 cultures. The signal intensity of apical phospho-aPKC correlated with that of Na/H exchanger regulatory factor-1 (NHERF-1) in the 3D model. Apical NHERF-1 intensity thus provided readout of apical aPKC activity and associated with glandular morphology in the model system and human colon. Low apical NHERF-1 intensity in CRC associated with disruption of glandular architecture, high cancer grade, and metastatic dissemination. We conclude that the membrane-binding function of the catalytically inert PTEN C2 domain influences cdc42/aPKC-dependent AM dynamics and gland formation in a highly relevant 3D CRC morphogenesis model system.
Abstract:
Background: Molecular characteristics of cancer vary between individuals. In future, most trials will require assessment of biomarkers to allocate patients into enriched populations in which targeted therapies are more likely to be effective. The MRC FOCUS3 trial is a feasibility study to assess key elements in the planning of such studies.
Patients and methods: Patients with advanced colorectal cancer were registered from 24 centres between February 2010 and April 2011. With their consent, patients' tumour samples were analysed for KRAS/BRAF oncogene mutation status and topoisomerase 1 (topo-1) immunohistochemistry. Patients were then classified into one of four molecular strata; within each stratum, patients were randomised to one of two hypothesis-driven experimental therapies or a common control arm (FOLFIRI chemotherapy). A 4-stage suite of patient information sheets (PISs) was developed to avoid patient overload.
Results: A total of 332 patients were registered and 244 randomised. Among randomised patients, biomarker results were provided within 10 working days (w.d.) for 71%, within 15 w.d. for 91% and within 20 w.d. for 99%. DNA mutation analysis was 100% concordant between two laboratories. Over 90% of participants reported excellent understanding of all aspects of the trial. In this randomised phase II setting, omission of irinotecan in the low topo-1 group was associated with an increased response rate, and addition of cetuximab in the KRAS, BRAF wild-type cohort was associated with longer progression-free survival.
Conclusions: Patient samples can be collected and analysed within workable time frames and with reproducible mutation results. Complex multi-arm designs are acceptable to patients when supported by good PISs. Randomisation within each cohort provides outcome data that can inform clinical practice.
Abstract:
Physical modelling of musical instruments involves studying nonlinear interactions between parts of the instrument. These can pose several difficulties concerning the accuracy and stability of numerical algorithms. In particular, when the underlying forces are non-analytic functions of the phase-space variables, a stability proof can only be obtained in limited cases. An approach has been recently presented by the authors, leading to unconditionally stable simulations for lumped collision models. In that study, discretisation of Hamilton’s equations instead of the usual Newton’s equation of motion yields a numerical scheme that can be proven to be energy conserving. In this paper, the above approach is extended to collisions of distributed objects. Namely, the interaction of an ideal string with a flat barrier is considered. The problem is formulated within the Hamiltonian framework and subsequently discretised. The resulting nonlinear matrix equation can be shown to possess a unique solution, which enables the update of the algorithm. Energy conservation and thus numerical stability follows in a way similar to the lumped collision model. The existence of an analytic description of this interaction allows the validation of the model’s accuracy. The proposed methodology can be used in sound synthesis applications involving musical instruments where collisions occur either in a confined region (e.g. hammer-string interaction, mallet impact) or in a distributed region (e.g. string-bridge or reed-mouthpiece interaction).
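The flavour of the energy-conserving construction can be shown for the lumped case; this is a hedged illustration in the spirit of the abstract, not the authors' exact scheme, and the one-sided collision potential below is an assumed example. With Hamiltonian

H(q, p) = \frac{p^2}{2m} + V(q), \qquad V(q) = \frac{K}{\alpha + 1}\,[q]_+^{\alpha + 1}, \quad [q]_+ = \max(q, 0),

a discrete-gradient time-stepping of Hamilton's equations with step k reads

\frac{q^{n+1} - q^n}{k} = \frac{p^{n+1} + p^n}{2m}, \qquad \frac{p^{n+1} - p^n}{k} = -\,\frac{V(q^{n+1}) - V(q^n)}{q^{n+1} - q^n}.

Multiplying the two relations gives \frac{(p^{n+1})^2 - (p^n)^2}{2m} + V(q^{n+1}) - V(q^n) = 0, so the discrete energy H(q^{n+1}, p^{n+1}) = H(q^n, p^n) is conserved exactly for any step size, which is what underpins the unconditional stability; the distributed string-barrier case extends this idea to a nonlinear matrix update per time step.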
Abstract:
Accretion disk winds are thought to produce many of the characteristic features seen in the spectra of active galactic nuclei (AGNs) and quasi-stellar objects (QSOs). These outflows also represent a natural form of feedback between the central supermassive black hole and its host galaxy. The mechanism for driving this mass loss remains unknown, although radiation pressure mediated by spectral lines is a leading candidate. Here, we calculate the ionization state of, and emergent spectra for, the hydrodynamic simulation of a line-driven disk wind previously presented by Proga & Kallman. To achieve this, we carry out a comprehensive Monte Carlo simulation of the radiative transfer through, and energy exchange within, the predicted outflow. We find that the wind is much more ionized than originally estimated. This is in part because it is much more difficult to shield any wind regions effectively when the outflow itself is allowed to reprocess and redirect ionizing photons. As a result, the calculated spectrum that would be observed from this particular outflow solution would not contain the ultraviolet spectral lines that are observed in many AGN/QSOs. Furthermore, the wind is so highly ionized that line driving would not actually be efficient. This does not necessarily mean that line-driven winds are not viable. However, our work does illustrate that in order to arrive at a self-consistent model of line-driven disk winds in AGN/QSO, it will be critical to include a more detailed treatment of radiative transfer and ionization in the next generation of hydrodynamic simulations.
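A toy sketch of the Monte Carlo machinery involved is given below: photon packets are propagated through a uniform, isotropically scattering slab by sampling optical depths. It only illustrates the sampling approach; the slab geometry, grey opacity and isotropic scattering are simplifying assumptions far removed from the full ionization and radiative-transfer calculation described above.

# Toy Monte Carlo radiative transfer: fraction of photon packets escaping a
# uniform slab of total optical depth tau_max with isotropic scattering.
import math, random

def escape_fraction(tau_max, n_packets=20000, max_scatters=100000):
    escaped = 0
    for _ in range(n_packets):
        tau_pos, mu = 0.0, 1.0                    # optical-depth coordinate, direction cosine
        for _ in range(max_scatters):
            step = -math.log(1.0 - random.random())   # sample a free path
            tau_pos += step * mu
            if tau_pos >= tau_max:                # escaped through the top of the slab
                escaped += 1
                break
            if tau_pos < 0.0:                     # fell back to the source plane: re-emit
                tau_pos, mu = 0.0, 1.0
                continue
            mu = 2.0 * random.random() - 1.0      # isotropic scattering: new direction
    return escaped / n_packets

print(escape_fraction(tau_max=5.0))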
Abstract:
An intralaminar damage model (IDM), based on continuum damage mechanics, was developed for the simulation of composite structures subjected to damaging loads. This model can capture the complex intralaminar damage mechanisms, accounting for mode interactions, as well as delaminations. Its development is driven by a requirement for reliable crush simulations to design composite structures with a high specific energy absorption. This IDM was implemented as a user subroutine within the commercial finite element package Abaqus/Explicit [1]. In this paper, the validation of the IDM is presented using two test cases. Firstly, the IDM is benchmarked against published data for a blunt notched specimen under uniaxial tensile loading, comparing the failure strength as well as showing the damage. Secondly, the crush response of a set of tulip-triggered composite cylinders was obtained experimentally. The crush loading and the associated energy of the specimen are compared with the FE model prediction. These test cases show that the developed IDM is able to capture the structural response with satisfactory accuracy.
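The basic mechanics of a continuum damage model can be sketched in a few lines. The sketch below is purely illustrative and unrelated to the actual Abaqus/Explicit user subroutine: a scalar damage variable grows once the strain passes an assumed initiation value and degrades the stress to zero at an assumed failure strain.

# Illustrative 1D continuum-damage sketch with linear softening. The modulus
# and strain thresholds are assumed values, and the damage variable is taken
# as a function of the current strain (a real model would store its history).

E = 60e9                     # assumed elastic modulus [Pa]
EPS0, EPSF = 0.01, 0.03      # assumed damage-initiation and failure strains

def damage(eps):
    if eps <= EPS0:
        return 0.0
    if eps >= EPSF:
        return 1.0
    # Linear-softening form: d = 0 at EPS0, d = 1 (zero stress) at EPSF.
    return (EPSF / eps) * (eps - EPS0) / (EPSF - EPS0)

def stress(eps):
    return (1.0 - damage(eps)) * E * eps

for eps in (0.005, 0.015, 0.025, 0.035):
    print(f"strain={eps:.3f}  d={damage(eps):.2f}  stress={stress(eps)/1e6:.0f} MPa")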
Abstract:
In the digital age, the hyperspace of virtual reality systems stands out as a new spatial concept creating a parallel realm to "real" space. Virtual reality influences one’s experience of and interaction with architectural space. This "otherworld" brings up the criticism of the existing conception of space, time and body. Hyperspaces are relatively new to designers but not to filmmakers. Their cinematic representations help the comprehension of the outcomes of these new spaces. Visualisation of futuristic ideas on the big screen turns film into a medium for spatial experimentation. Creating a possible future, The Matrix (Andy and Larry Wachowski, 1999) takes the concept of hyperspace to a level not yet realised but imagined. With a critical gaze at the existing norms of architecture, the film creates new horizons in terms of space. In this context, this study introduces science fiction cinema as a discussion medium to understand the potentials of virtual reality systems for the architecture of the twenty-first century. As a "role model", cinema helps to better understand technological and spatial shifts. It acts as a vehicle for going beyond the spatial theories and designs of the twentieth century, and defining the conception of space in contemporary architecture.
Abstract:
In this paper we present a design methodology for algorithm/architecture co-design of a voltage-scalable, process variation aware motion estimator based on significance driven computation. The fundamental premise of our approach lies in the fact that all computations are not equally significant in shaping the output response of video systems. We use a statistical technique to intelligently identify these significant/not-so-significant computations at the algorithmic level and subsequently change the underlying architecture such that the significant computations are computed in an error free manner under voltage over-scaling. Furthermore, our design includes an adaptive quality compensation (AQC) block which "tunes" the algorithm and architecture depending on the magnitude of voltage over-scaling and severity of process variations. Simulation results show average power savings of ~33% for the proposed architecture when compared to a conventional implementation in 90 nm CMOS technology. The maximum output quality loss in terms of Peak Signal to Noise Ratio (PSNR) was ~1 dB without incurring any throughput penalty.
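Only the statistical ranking idea behind significance-driven computation is illustrated below; this is a software toy, not the voltage-scalable motion-estimator architecture of the paper. Pixel positions of a block are ranked by their average contribution to the sum of absolute differences (SAD) over past data, the top fraction is computed exactly, and the remainder is extrapolated.

# Toy significance ranking for an approximate SAD (illustrative only).
import random

random.seed(0)
BLOCK = 16 * 16

# "Training" phase (assumed): per-position average |difference| over past blocks.
history = [[random.randint(0, 30) for _ in range(BLOCK)] for _ in range(50)]
avg_contrib = [sum(frame[i] for frame in history) / len(history) for i in range(BLOCK)]
ranked = sorted(range(BLOCK), key=lambda i: avg_contrib[i], reverse=True)

def approx_sad(cur, ref, fraction=0.5):
    """Exact SAD over the most significant positions, scaled to the full block."""
    k = int(BLOCK * fraction)
    partial = sum(abs(cur[i] - ref[i]) for i in ranked[:k])
    return partial * BLOCK / k

cur = [random.randint(0, 255) for _ in range(BLOCK)]
ref = [random.randint(0, 255) for _ in range(BLOCK)]
exact = sum(abs(c - r) for c, r in zip(cur, ref))
print("exact SAD:", exact, "  approx SAD:", round(approx_sad(cur, ref)))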
Abstract:
Hardware designers and engineers typically need to explore a multi-parametric design space in order to find the best configuration for their designs using simulations that can take weeks to months to complete. For example, designers of special purpose chips need to explore parameters such as the optimal bitwidth and data representation. This is the case for the development of complex algorithms such as Low-Density Parity-Check (LDPC) decoders used in modern communication systems. Currently, high-performance computing offers a wide set of acceleration options that range from multicore CPUs to graphics processing units (GPUs) and FPGAs. Depending on the simulation requirements, the ideal architecture to use can vary. In this paper we propose a new design flow based on OpenCL, a unified multiplatform programming model, which accelerates LDPC decoding simulations, thereby significantly reducing architectural exploration and design time. OpenCL-based parallel kernels are used without modifications or code tuning on multicore CPUs, GPUs and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL for mapping the simulations into FPGAs. To the best of our knowledge, this is the first time that a single, unmodified OpenCL code is used to target those three different platforms. We show that, depending on the design parameters to be explored in the simulation, and on the dimension and phase of the design, the GPU or the FPGA may suit different purposes more conveniently, providing different acceleration factors. For example, although simulations can typically execute more than 3x faster on FPGAs than on GPUs, the overhead of circuit synthesis often outweighs the benefits of FPGA-accelerated execution.
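To ground what such a simulation sweep actually exercises, the sketch below shows the min-sum check-node update at the heart of an LDPC decoder, written in plain Python rather than OpenCL; bit widths, data layout and the SOpenCL tool flow are exactly the kind of parameters the exploration above would vary.

# Min-sum LDPC check-node update: for each incoming log-likelihood ratio,
# the extrinsic message is the sign product and minimum magnitude of the others.

def check_node_update(llrs):
    out = []
    for i in range(len(llrs)):
        others = llrs[:i] + llrs[i + 1:]
        sign = -1 if sum(1 for v in others if v < 0) % 2 else 1
        out.append(sign * min(abs(v) for v in others))
    return out

print(check_node_update([+2.5, -0.8, +1.3, -4.0]))   # -> [0.8, -1.3, 0.8, -0.8]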
Abstract:
Here, we report results of an experiment creating a transient, highly correlated carbon state using a combination of optical and x-ray lasers. Scattered x-rays reveal a highly ordered state with an electrostatic energy significantly exceeding the thermal energy of the ions. Strong Coulomb forces are predicted to induce nucleation into a crystalline ion structure within a few picoseconds. However, we observe no evidence of such phase transition after several tens of picoseconds but strong indications for an over-correlated fluid state. The experiment suggests a much slower nucleation and points to an intermediate glassy state where the ions are frozen close to their original positions in the fluid.
Abstract:
Approximate execution is a viable technique for energy-constrained environments, provided that applications have the mechanisms to produce outputs of the highest possible quality within the given energy budget.
We introduce a framework for energy-constrained execution with controlled and graceful quality loss. A simple programming model allows users to express the relative importance of computations for the quality of the end result, as well as minimum quality requirements. The significance-aware runtime system uses an application-specific analytical energy model to identify the degree of concurrency and approximation that maximizes quality while meeting user-specified energy constraints. Evaluation on a dual-socket 8-core server shows that the proposed framework predicts the optimal configuration with high accuracy, enabling energy-constrained executions that result in significantly higher quality compared to loop perforation, a compiler approximation technique.
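A toy version of such a significance-aware programming model is sketched below. The API, significance values and energy costs are assumptions, not the paper's runtime: tasks declare their importance and an estimated energy cost, and a budgeted scheduler executes the most significant ones first, dropping the rest as an approximation.

# Toy significance-aware scheduler under an energy budget (illustrative only).
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Task:
    fn: Callable[[], float]
    significance: float          # user-declared importance for output quality
    energy_cost: float           # assumed per-task energy estimate (arbitrary units)

def run_within_budget(tasks: List[Task], budget: float) -> float:
    """Run tasks in decreasing significance until the energy budget is exhausted."""
    result, spent = 0.0, 0.0
    for t in sorted(tasks, key=lambda t: t.significance, reverse=True):
        if spent + t.energy_cost > budget:
            continue                      # dropped task: tolerated quality loss
        result += t.fn()
        spent += t.energy_cost
    return result

# Example: summing a series whose early terms matter most for the result.
tasks = [Task(fn=(lambda k=k: 1.0 / (k * k)), significance=1.0 / k, energy_cost=1.0)
         for k in range(1, 101)]
print(run_within_budget(tasks, budget=20))   # close to pi^2/6 using 20 of 100 terms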
Abstract:
In this paper, we present a hybrid BDI-PGM framework, in which PGMs (Probabilistic Graphical Models) are incorporated into a BDI (belief-desire-intention) architecture. This work is motivated by the need to address the scalability and noisy sensing issues in SCADA (Supervisory Control And Data Acquisition) systems. Our approach uses the incorporated PGMs to model the uncertainty reasoning and decision making processes of agents situated in a stochastic environment. In particular, we use Bayesian networks to reason about an agent’s beliefs about the environment based on its sensory observations, and select optimal plans according to the utilities of actions defined in influence diagrams. This approach takes advantage of the scalability of the BDI architecture and the uncertainty reasoning capability of PGMs. We present a prototype of the proposed approach using a transit scenario to validate its effectiveness.
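A hand-rolled two-state belief update gives a feel for the probabilistic part; this is not the BDI-PGM prototype, and the sensor probabilities below are assumed. An agent tracks P(fault) for one monitored component and revises it with each binary alarm observation; a plan-selection step would then pick the action with the highest expected utility under that belief, mirroring the influence-diagram idea above.

# Bayes update of P(fault) from noisy alarm observations (assumed numbers).

P_FAULT_PRIOR = 0.05
P_ALARM_GIVEN_FAULT = 0.90     # assumed sensor sensitivity
P_ALARM_GIVEN_OK = 0.10        # assumed false-alarm rate

def update(belief, alarm):
    """One Bayes update of P(fault) given an alarm / no-alarm observation."""
    like_fault = P_ALARM_GIVEN_FAULT if alarm else 1 - P_ALARM_GIVEN_FAULT
    like_ok = P_ALARM_GIVEN_OK if alarm else 1 - P_ALARM_GIVEN_OK
    num = like_fault * belief
    return num / (num + like_ok * (1 - belief))

belief = P_FAULT_PRIOR
for alarm in [True, True, False, True]:     # a noisy observation stream
    belief = update(belief, alarm)
    print(f"alarm={str(alarm):5}  P(fault)={belief:.3f}")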