928 results for 3-LEVEL SYSTEMS
Abstract:
Service-based systems that are dynamically composed at run time to provide complex, adaptive functionality are currently one of the main development paradigms in software engineering. However, the Quality of Service (QoS) delivered by these systems remains an important concern, and needs to be managed in an equally adaptive and predictable way. To address this need, we introduce a novel, tool-supported framework for the development of adaptive service-based systems called QoSMOS (QoS Management and Optimisation of Service-based systems). QoSMOS can be used to develop service-based systems that achieve their QoS requirements through dynamically adapting to changes in the system state, environment and workload. QoSMOS service-based systems translate high-level QoS requirements specified by their administrators into probabilistic temporal logic formulae, which are then formally and automatically analysed to identify and enforce optimal system configurations. The QoSMOS self-adaptation mechanism can handle reliability- and performance-related QoS requirements, and can be integrated into newly developed solutions or legacy systems. The effectiveness and scalability of the approach are validated using simulations and a set of experiments based on an implementation of an adaptive service-based system for remote medical assistance.
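As an illustration of the translation step the abstract describes, the following is a hedged sketch in a PCTL-style probabilistic temporal logic (assumed here for illustration; the abstract does not name the exact formalism): a requirement such as "the probability that a service invocation eventually fails must not exceed 2%" could be rendered as

```latex
% Hypothetical reliability requirement: "the probability that a
% service invocation eventually fails must not exceed 2%"
P_{\le 0.02}\,[\,\mathrm{F}\ \mathit{fail}\,]
% Hypothetical performance requirement: "with probability at least
% 0.98, a response arrives within 5 time steps"
P_{\ge 0.98}\,[\,\mathrm{F}^{\le 5}\ \mathit{response}\,]
```

Formulae of this shape can then be checked automatically against a probabilistic model of each candidate configuration, which is what makes the analysis both formal and mechanisable.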
Abstract:
Research has examined single human resource management (HRM) practices rather than configurations of practices as influences on creativity, so it is not yet clear how these practices synergistically facilitate creativity and organisational performance. I address this significant but unanswered question in a three-part study. In Study 1, I develop a high performance work system (HPWS) for creativity scale. I use the Study 2 sample to test the validity of the new scale. In Study 3, I test a multilevel model of the intervening processes through which branch HPWS for creativity influences creativity and branch performance. Specifically, at the branch level, I draw on social context theory and hypothesise that branch HPWS for creativity relates to climate for creativity which, in turn, leads to creativity and, ultimately, to profit. Furthermore, I hypothesise environmental dynamism as a boundary condition of the creativity-profit relationship. At the individual level, I hypothesise a cross-level effect of branch HPWS for creativity on employee-perceived HPWS. I draw on self-determination theory and argue that perceived HPWS for creativity relates to need satisfaction and the psychological pathways of intrinsic motivation and creative process engagement to predict creativity. I also hypothesise climate for creativity as a cross-level moderator of the intrinsic motivation-creativity and creative process engagement-creativity relationships. Results of hierarchical linear modeling (HLM) indicate that ten of the fifteen hypotheses were supported. The findings respond to calls for HPWS to be designed around a strategic focus by developing, and providing initial validity evidence for, an HPWS for creativity scale. The results reveal the underlying mechanisms through which HPWS for creativity simultaneously influences individual and branch creativity, leading to profit. Lastly, the results indicate environmental dynamism to be an important boundary condition of the creativity-profit relationship and climate for creativity to be a cross-level moderator of the creative process engagement-creativity relationship.
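For readers unfamiliar with cross-level moderation in HLM, a generic two-level specification of this kind of hypothesis (a standard textbook form, not the study's exact model) looks like:

```latex
% Level 1 (employee i in branch j): creativity Y predicted by
% intrinsic motivation (IM)
Y_{ij} = \beta_{0j} + \beta_{1j}\,\mathrm{IM}_{ij} + r_{ij}
% Level 2 (branch j): climate for creativity (CC) predicts the
% intercept and moderates the level-1 slope; gamma_11 carries the
% cross-level interaction
\beta_{0j} = \gamma_{00} + \gamma_{01}\,\mathrm{CC}_{j} + u_{0j}
\beta_{1j} = \gamma_{10} + \gamma_{11}\,\mathrm{CC}_{j} + u_{1j}
```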
Abstract:
We present a comparative study of the influence of dispersion-induced phase noise in n-level PSK systems. From the analysis, we conclude that the phase noise influence for classical homodyne/heterodyne PSK systems is entirely determined by the modulation complexity (expressed in terms of the constellation diagram) and the analogue demodulation format. On the other hand, the use of digital signal processing (DSP) in homodyne/intradyne systems introduces a fiber length dependence originating from the generation of equalization-enhanced phase noise. Future high capacity systems must use higher-order constellations in order to lower the symbol rate to practically manageable speeds, and this fact places severe requirements on the signal and local oscillator (LO) linewidths. Our results for the bit-error-rate (BER) floor caused by the phase noise influence in QPSK, 16PSK and 64PSK systems outline tolerance limitations for the LO performance: 5 MHz linewidth (at the 3-dB level) for 100 Gbit/s QPSK; 1 MHz for 400 Gbit/s QPSK; 0.1 MHz for 400 Gbit/s 16PSK and 1 Tbit/s 64PSK systems. This defines design constraints on the phase noise of distributed-feedback (DFB) or distributed-Bragg-reflector (DBR) semiconductor lasers that would allow moving the system capacity from 100 Gbit/s to 400 Gbit/s in 3 years (and to 1 Tbit/s in 5 years). It is imperative at the same time to increase the analogue-to-digital conversion (ADC) speed such that the single-quadrature symbol rate goes from today's 25 GS/s to 100 GS/s (using two samples per symbol).
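A rough feel for why higher-order constellations tighten the linewidth tolerances can be had from the standard Wiener phase-noise model, in which the phase drift per symbol has variance 2πΔνTs. The sketch below uses that textbook scaling, not the paper's exact model; the bit-to-symbol-rate mapping assumes polarization multiplexing (2·log2(M) bits per symbol), which is an assumption on my part:

```python
import math

def phase_noise_std(linewidth_hz: float, symbol_rate_baud: float) -> float:
    """Std. dev. (radians) of laser phase drift over one symbol,
    assuming the Wiener model: sigma^2 = 2*pi*linewidth*Ts."""
    return math.sqrt(2 * math.pi * linewidth_hz / symbol_rate_baud)

# Configurations cited in the abstract (symbol rates are my estimates):
cases = [
    ("100 Gbit/s QPSK,  5 MHz LO",   5e6, 100e9 / 4),   # 25 GBaud
    ("400 Gbit/s QPSK,  1 MHz LO",   1e6, 400e9 / 4),   # 100 GBaud
    ("400 Gbit/s 16PSK, 0.1 MHz LO", 1e5, 400e9 / 8),   # 50 GBaud
    ("1 Tbit/s 64PSK,   0.1 MHz LO", 1e5, 1e12 / 12),   # ~83 GBaud
]
for label, dv, rs in cases:
    print(f"{label}: sigma = {phase_noise_std(dv, rs):.4f} rad")
```

The point of the exercise: the tolerable phase drift shrinks with constellation density because decision regions narrow, so the cited linewidth limits keep sigma in a comparable, small range across formats.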
Abstract:
This letter proposes introducing discrete modal crosstalk (XT) through fiber splices to improve the distance reach (DR) of mode division multiplexed (MDM) transmission systems over few-mode fibers (FMFs). The proposed method increases the DR by reducing the time spread of the FMFs' impulse response. Its effectiveness is assessed through simulation of 3 × 136-Gbit/s MDM coherently-detected polarization-multiplexed quadrature-phase-shift-keying ultra-long-haul transmission systems employing either inherently low differential mode delay (DMD) FMFs or DMD-compensated FMFs. A maximum DR increase factor of 1.9 is obtained for the optimum number of splices per span and optimum splice XT level.
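The qualitative mechanism is that distributed mode coupling changes how differential mode delay accumulates: roughly linearly in distance without coupling, but closer to square-root in distance under strong coupling. The snippet below illustrates that textbook scaling with invented numbers; it is not the letter's simulation model:

```python
import math

def delay_spread_ps(dmd_ps_per_km: float, span_km: float,
                    coupling_length_km: float | None) -> float:
    """Accumulated group-delay spread over a link. Without mode
    coupling it grows linearly with distance; with strong distributed
    coupling (e.g. splice-induced XT) it grows ~sqrt(L * Lc)."""
    if coupling_length_km is None:            # uncoupled propagation
        return dmd_ps_per_km * span_km
    return dmd_ps_per_km * math.sqrt(span_km * coupling_length_km)

# Hypothetical numbers for illustration only:
print(delay_spread_ps(0.1, 1000, None))   # uncoupled: 100 ps
print(delay_spread_ps(0.1, 1000, 50))     # strongly coupled: ~22 ps
```

A shorter impulse response eases the equalizer memory requirement, which is what ultimately extends the reach.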
Abstract:
Forward error correction (FEC) plays a vital role in coherent optical systems employing multi-level modulation. However, much of coding theory assumes that additive white Gaussian noise (AWGN) is dominant, whereas coherent optical systems have significant phase noise (PN) in addition to AWGN. This changes the error statistics and impacts FEC performance. In this paper, we propose a novel semianalytical method for dimensioning binary Bose-Chaudhuri-Hocquenghem (BCH) codes for systems with PN. Our method involves extracting statistics from pre-FEC bit error rate (BER) simulations. We use these statistics to parameterize a bivariate binomial model that describes the distribution of bit errors. In this way, we relate pre-FEC statistics to post-FEC BER and BCH codes. Our method is applicable to pre-FEC BER around 10⁻³ and any post-FEC BER. Using numerical simulations, we evaluate the accuracy of our approach for a target post-FEC BER of 10⁻⁵. Codes dimensioned with our bivariate binomial model meet the target to within 0.2 dB of signal-to-noise ratio.
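As a baseline for what such dimensioning computes, the standard i.i.d. (AWGN-only) binomial estimate of post-FEC BER for a t-error-correcting BCH code is sketched below; the paper's contribution is a bivariate refinement of exactly this kind of model to capture phase-noise-induced error correlation, and that refinement is not reproduced here. Code parameters are illustrative:

```python
from math import lgamma, log, exp

def binom_pmf(n: int, i: int, p: float) -> float:
    """Binomial pmf computed in log space to avoid overflow for large n."""
    logpmf = (lgamma(n + 1) - lgamma(i + 1) - lgamma(n - i + 1)
              + i * log(p) + (n - i) * log(1 - p))
    return exp(logpmf)

def post_fec_ber(n: int, t: int, p: float) -> float:
    """Post-FEC BER of a t-error-correcting length-n BCH code under the
    i.i.d. bit-error assumption: decoding fails when more than t of the
    n code bits are in error, leaving roughly i residual errors."""
    return sum(i * binom_pmf(n, i, p) for i in range(t + 1, n + 1)) / n

# Hypothetical code, pre-FEC BER around 1e-3 as in the paper:
print(post_fec_ber(n=1023, t=10, p=1e-3))
```

Under correlated (bursty) errors caused by PN, this i.i.d. estimate becomes optimistic, which is why the paper parameterizes a bivariate model from simulated pre-FEC statistics instead.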
Abstract:
ACM Computing Classification System (1998): K.3.1, K.3.2.
Abstract:
Tsvetomir Tsachev - This report surveys some results in the field of optimal control of continuous heterogeneous systems published in the periodical scientific literature in recent years. A dynamical system is called heterogeneous if each of its elements has its own dynamics. Here we consider the optimal control of systems whose heterogeneity is described by a one- or two-dimensional parameter, with each value of the parameter corresponding to an element of the system. Heterogeneous dynamical systems are used to model processes in economics, epidemiology, biology, public security (limiting drug use), and other fields. We consider a model of optimal investment in education at the macroeconomic level [11], of limiting the consequences of the spread of AIDS [9], of a market for carbon emission rights [3, 4], and of optimal macroeconomic growth under a rising level of high technology [1]. Keywords: optimal control, continuous heterogeneous dynamical systems, applications in economics and epidemiology
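In the notation common to this literature, a one-dimensional-parameter heterogeneous optimal control problem takes a form like the following generic sketch (an assumed canonical form, not any specific model from [1], [3, 4], [9], or [11]):

```latex
% omega in Omega indexes the heterogeneous elements; x(omega, t) is the
% state of element omega and u(omega, t) its control.
\max_{u(\cdot)} \int_0^T \!\!\int_\Omega
    L\big(x(\omega,t),\, u(\omega,t),\, \omega,\, t\big)\, d\omega\, dt
\quad \text{s.t.} \quad
\frac{\partial x(\omega,t)}{\partial t}
    = f\big(x(\omega,t),\, u(\omega,t),\, \omega,\, t\big),
\qquad x(\omega, 0) = x_0(\omega).
```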
Abstract:
Adaptability of distributed object-oriented enterprise frameworks in multimedia technology is critical for system evolution. Today, building adaptive services is a complex task due to the lack of adequate framework support in distributed computing systems. In this paper, we propose a Metalevel Component-Based Framework which uses distributed computing design patterns as components to develop an adaptable pattern-oriented framework for distributed computing applications. We describe our approach of combining a meta-architecture with a pattern-oriented framework, resulting in an adaptable framework which provides a mechanism to facilitate system evolution. This approach resolves the problem of dynamic adaptation in the framework, which is encountered in most distributed multimedia applications. The proposed architecture of the pattern-oriented framework is able to dynamically adopt new design patterns that address issues in the domain of distributed computing, and these patterns can be woven together to shape the framework in the future.
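One way to picture the meta-level mechanism is a registry in which pattern implementations are components that can be registered and swapped at run time. The Python sketch below is purely illustrative; the class and method names are my assumptions, not the paper's API:

```python
class MetaLevelFramework:
    """Hypothetical meta-architecture: design-pattern components are
    registered under a role name and can be replaced at run time, so
    the base-level application adapts without being rebuilt."""

    def __init__(self):
        self._patterns = {}

    def register(self, role: str, component) -> None:
        self._patterns[role] = component       # weave in a new pattern

    def dispatch(self, role: str, *args):
        return self._patterns[role](*args)     # delegate to current pattern

# Illustrative use: swap a broker-style dispatcher for a load-balancing
# one while the system keeps running.
fw = MetaLevelFramework()
fw.register("request-routing", lambda req: f"broker handles {req}")
print(fw.dispatch("request-routing", "frame-1"))
fw.register("request-routing", lambda req: f"balancer handles {req}")
print(fw.dispatch("request-routing", "frame-2"))
```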
Abstract:
We conducted a low-level phosphorus (P) enrichment study in two oligotrophic freshwater wetland communities (wet prairies [WP] and sawgrass marsh [SAW]) of the neotropical Florida Everglades. The experiment included three P addition levels (0, 3.33, and 33.3 mg P m⁻² month⁻¹), added over 2 years, and used in situ mesocosms located in northeastern Everglades National Park, Fla., USA. The calcareous periphyton mat in both communities degraded quickly and was replaced by green algae. In the WP community, we observed significant increases in net aboveground primary production (NAPP) and belowground biomass. Aboveground live standing crop (ALSC) did not show a treatment effect, though, because stem turnover rates of Eleocharis spp., the dominant emergent macrophyte in this community, increased significantly. Eleocharis spp. leaf tissue P content decreased with P additions, causing higher C:P and N:P ratios in enriched versus unenriched plots. In the SAW community, NAPP, ALSC, and belowground biomass all increased significantly in response to P additions. Cladium jamaicense leaf turnover rates and tissue nutrient content did not show treatment effects. The two oligotrophic communities responded differentially to P enrichment. Periphyton, which was more abundant in the WP community, appeared to act as a P buffer that delayed the response of other ecosystem components until after the periphyton mat had disappeared. Periphyton played a smaller role in controlling ecosystem dynamics and community structure in the SAW community. Our data suggested a reduced reliance on internal stores of P by emergent macrophytes in the WP that were exposed to P enrichment. Eleocharis spp. rapidly recycled P through more rapid aboveground turnover. In contrast, C. jamaicense stored added P by initially investing in belowground biomass, then shifting growth allocation to aboveground tissue without increasing leaf turnover rates. Our results suggest that calcareous wetland systems throughout the Caribbean, and oligotrophic ecosystems in general, respond rapidly to low-level additions of their limiting nutrient.
Abstract:
This dissertation presents and evaluates a methodology for scheduling medical application workloads in virtualized computing environments. Such environments are being widely adopted by providers of "cloud computing" services. In the context of provisioning resources for medical applications, such environments allow users to deploy applications on distributed computing resources while keeping their data secure. Furthermore, higher level services that further abstract the infrastructure-related issues can be built on top of such infrastructures. For example, a medical imaging service can allow medical professionals to process their data in the cloud, relieving them of the burden of deploying and managing these resources themselves. In this work, we focus on issues related to scheduling scientific workloads on virtualized environments. We build upon the knowledge base of traditional parallel job scheduling to address the specific case of medical applications while harnessing the benefits afforded by virtualization technology. To this end, we provide the following contributions: (1) An in-depth analysis of the execution characteristics of the target applications when run in virtualized environments. (2) A performance prediction methodology applicable to the target environment. (3) A scheduling algorithm that harnesses application knowledge and virtualization-related benefits to provide strong scheduling performance and quality of service guarantees. In the process of addressing these pertinent issues for our target user base (i.e., medical professionals and researchers), we provide insight that benefits a large community of scientific application users in industry and academia. Our execution time prediction and scheduling methodologies are implemented and evaluated on a real system running popular scientific applications. We find that we are able to predict the execution time of a number of these applications with an average error of 15%. Our scheduling methodology, which is tested with medical image processing workloads, is compared to that of two baseline scheduling solutions and we find that it outperforms them in terms of both the number of jobs processed and resource utilization by 20–30%, without violating any deadlines. We conclude that our solution is a viable approach to supporting the computational needs of medical users, even if the cloud computing paradigm is not widely adopted in its current form.
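As a sketch of how predicted execution times can feed a deadline-aware scheduler, here is a minimal earliest-deadline-first loop that pads each prediction by the reported ~15% average error. All names, numbers, and the admission rule are illustrative assumptions, not the dissertation's algorithm:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    deadline: float                                  # sort key for EDF
    name: str = field(compare=False)
    predicted_runtime: float = field(compare=False)

def schedule_edf(jobs: list[Job], error_margin: float = 0.15) -> list[str]:
    """Earliest-deadline-first on one virtualized slot: admit a job only
    if its padded runtime estimate still fits before its deadline."""
    heap = list(jobs)
    heapq.heapify(heap)                              # order by deadline
    t, order = 0.0, []
    while heap:
        job = heapq.heappop(heap)
        padded = job.predicted_runtime * (1 + error_margin)
        if t + padded <= job.deadline:               # admission test
            t += padded
            order.append(job.name)
    return order

print(schedule_edf([Job(100.0, "mri-segmentation", 40.0),
                    Job(60.0, "ct-registration", 30.0)]))
```

Padding predictions by the expected error is one simple way to turn a statistical prediction into a conservative quality-of-service guarantee.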
Abstract:
Modern System-on-a-Chip (SoC) systems have grown rapidly in processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware, so the process of improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified by using profiling tools. Hardware acceleration can deliver significant performance improvement for highly mathematical calculations or repeated functions. The performance of SoC systems can then be improved if the hardware acceleration method is used to accelerate the element that incurs performance overheads. The concepts mentioned in this study can be easily applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core. The hotspot function of the target application is identified by using critical attributes such as cycles per loop, loop rounds, etc. (2) A hardware acceleration method based on Field-Programmable Gate Arrays (FPGAs) is used to resolve system bottlenecks and improve system performance. The identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform. Two types of hardware acceleration methods, central bus design and co-processor design, are implemented for comparison in the proposed architecture. (3) System specifications, such as performance, energy consumption, and resource costs, are measured and analyzed, and the trade-off among these three factors is compared and balanced. Different hardware accelerators are implemented and evaluated based on system requirements. (4) The system verification platform is designed based on the Integrated Circuit (IC) workflow, and hardware optimization techniques are used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique: the system reaches a 2.8X performance improvement and saves 31.84% energy consumption with the Bus-IP design, while the co-processor design achieves 7.9X performance and saves 75.85% energy consumption.
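The profile-then-accelerate workflow is, at heart, an application of Amdahl's law: the attainable system speedup is bounded by the fraction of runtime spent in the accelerated hotspot. A worked sketch with invented numbers (not measurements from the dissertation):

```python
def amdahl_speedup(hotspot_fraction: float, accel_factor: float) -> float:
    """Overall speedup when a hotspot taking `hotspot_fraction` of the
    runtime is accelerated by `accel_factor` (Amdahl's law)."""
    return 1.0 / ((1 - hotspot_fraction) + hotspot_fraction / accel_factor)

# Illustration: if profiling shows a CODEC hotspot at 70% of runtime and
# an FPGA accelerator runs it 10x faster, the system-level gain is ~2.7x,
# in the ballpark of the reported 2.8X Bus-IP result.
print(amdahl_speedup(0.70, 10.0))
```

This is also why profiling comes first in the workflow: accelerating anything outside the dominant hotspot yields little system-level benefit.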
Abstract:
Glycogen Synthase Kinase 3 (GSK3), a serine/threonine kinase initially characterized in the context of glycogen metabolism, has repeatedly been shown to be a multitasking protein that can regulate numerous cellular events in both metazoa and protozoa. I recently found that GSK3 plays a role in regulating chemotaxis, a guided cell movement in response to an external chemical gradient, in one of the best studied model systems for chemotaxis, Dictyostelium discoideum. It was initially found that, compared to wild-type cells, gsk3− cells showed aberrant chemotaxis with a significant decrease in both speed and chemotactic indices. In Dictyostelium, phosphatidylinositol 3,4,5-trisphosphate (PIP3) signaling is one of the best characterized pathways that regulate chemotaxis. Molecular analysis uncovered that gsk3− cells suffer from a high basal level of PIP3, the product of PI3K. Upon stimulation with the chemoattractant cAMP, wild-type cells displayed a transient increase in the level of PIP3. In contrast, gsk3− cells exhibited neither a significant increase nor adaptation. On the other hand, no aberrant dynamics of phosphatase and tensin homolog (PTEN), which antagonizes PI3K function, were observed. Upon membrane localization, PI3K becomes activated by Ras, which in turn further facilitates membrane localization of PI3K in an F-actin dependent manner. gsk3− cells treated with the F-actin inhibitor Latrunculin-A showed no significant difference in the PIP3 level. I also showed that GSK3 affects the phosphorylation level of the localization domain of PI3K1 (PI3K1-LD). PI3K1-LD proteins from gsk3− cells displayed less phosphorylation on serine residues than those from wild-type cells. When the potential GSK3 phosphorylation sites of PI3K1-LD were substituted with aspartic acid (phosphomimetic substitution), its membrane localization was suppressed in gsk3− cells. When these serine residues of PI3K1-LD were substituted with alanine, an aberrantly high level of membrane localization of the PI3K1-LD was observed in wild-type cells. Wild-type, phosphomimetic, and alanine substitutions of PI3K1-LD fused with GFP also displayed identical localization behavior, as suggested by cell fractionation studies. Lastly, I identified that all three potential GSK3 phosphorylation sites on PI3K1-LD could be phosphorylated in vitro by GSK3.
Abstract:
Many systems and applications continuously produce events. These events record the status of the system and trace its behaviors. By examining these events, system administrators can check for potential problems, and if the temporal dynamics of the systems are further investigated, the underlying patterns can be discovered. The uncovered knowledge can be leveraged to predict future system behaviors or to mitigate potential risks. Moreover, system administrators can utilize the temporal patterns to set up event management rules that make the system more intelligent. With the popularity of data mining techniques in recent years, these events have gradually become more and more useful. Despite recent advances in data mining techniques, their application to system event mining is still in a rudimentary stage. Most work still focuses on episode mining or frequent pattern discovery. These methods are unable to provide a brief yet comprehensible summary that reveals the valuable information from a high-level perspective. Moreover, they provide little actionable knowledge to help system administrators better manage the systems. To make better use of the recorded events, more practical techniques are required. From the perspective of data mining, three correlated directions are considered helpful for system management: (1) provide concise yet comprehensive summaries of the running status of the systems; (2) make the systems more intelligent and autonomous; (3) effectively detect abnormal system behaviors. Owing to the richness of the event logs, all these directions can be pursued in a data-driven manner; in this way, the robustness of the systems can be enhanced and the goal of autonomous management can be approached. This dissertation focuses on the foregoing directions, leveraging temporal mining techniques to facilitate system management. More specifically, three concrete topics are discussed: event summarization, resource demand prediction, and streaming anomaly detection. Besides the theoretical contributions, experimental evaluations are presented to demonstrate the effectiveness and efficiency of the corresponding solutions.
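Of the three directions, streaming anomaly detection is the easiest to illustrate compactly. Below is a minimal sliding-window z-score detector over per-interval event counts, a generic baseline assumed for illustration rather than the dissertation's method:

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(counts, window: int = 20, threshold: float = 3.0):
    """Flag intervals whose event count deviates from the trailing
    window's mean by more than `threshold` standard deviations."""
    history = deque(maxlen=window)
    flagged = []
    for i, c in enumerate(counts):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(c - mu) > threshold * sigma:
                flagged.append(i)
        history.append(c)
    return flagged

# Synthetic log: steady event rate with a burst at interval 30.
counts = [10, 11, 9, 10, 12, 10, 9, 11] * 4
counts[30] = 60
print(detect_anomalies(counts))   # -> [30]
```

Detectors of this streaming form keep only a bounded window of history, which is what makes them viable on continuously produced event logs.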