892 results for Parallel computing, Virtual machine, Composition, Determinism, Abstraction
Abstract:
This descriptive study presents a theoretical review of 68 articles from 11 Latin American countries in order to portray the organizational landscape of the region with respect to organizational culture and leadership, and how it has evolved over time. The methodology was based on a frequency count using Bass and Avolio's model of leadership and organizational culture (Bass, 1999), which allowed the information found in the literature review to be organized into three leadership styles, each associated with its own beliefs, the latter treated as variables of organizational culture. Different trends were found in the types of leadership implemented in organizations and in the organizational culture they adopt. The study argues for deeper, empirical research on this topic, so that the transformations taking place in the organizational context become known and their impact is greater in the near future.
Abstract:
The final goal of this thesis was to carry out a series of attacks, some of them entirely original, against the protocols of the Time-Sensitive Networking (TSN) family through the development of a virtualized infrastructure. The infrastructure was designed and built using virtual machines, with Quick EMUlator (QEMU) as the virtualization layer, accelerated through Kernel-based Virtual Machine (KVM). The project was conceived as Infrastructure as Code (IaC), using Ansible and a number of shell scripts as the glue between the various parts of the project.
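As a minimal illustration of the virtualization layer this abstract describes (the image path, memory size, and bridge device below are assumptions for the sketch, not details from the thesis), a KVM-accelerated QEMU guest can be launched from Python like this:

```python
import subprocess

# Minimal sketch: boot a KVM-accelerated QEMU guest that could host one
# node of a virtualized TSN testbed. Image path, memory size, and the
# bridged network device are illustrative assumptions.
qemu_cmd = [
    "qemu-system-x86_64",
    "-enable-kvm",              # use KVM acceleration instead of pure emulation
    "-m", "2048",               # 2 GiB of guest RAM
    "-smp", "2",                # two virtual CPUs
    "-drive", "file=tsn-node.qcow2,format=qcow2",  # hypothetical disk image
    "-nic", "bridge,br=br0,model=virtio-net-pci",  # attach to a host bridge
    "-nographic",               # serial console only
]
subprocess.run(qemu_cmd, check=True)
```

In an IaC setup like the one described, a command along these lines would typically be templated and invoked by Ansible rather than by hand.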
Abstract:
Aims: We aimed to evaluate whether the co-localisation of calcium and necrosis in intravascular ultrasound virtual histology (IVUS-VH) is due to artefact, and whether this effect can be mathematically estimated. Methods and results: We hypothesised that, if calcium induces an artefactual coding of necrosis, any addition of calcium content would generate an artificial increment in the necrotic tissue. Stent struts were used to simulate the "added calcium". The change in the amount and in the spatial localisation of necrotic tissue was evaluated before and after stenting (n=17 coronary lesions) by means of specially developed imaging software. The area of "calcium" increased from a median of 0.04 mm² at baseline to 0.76 mm² after stenting (p<0.01). In parallel, the median necrotic content increased from 0.19 mm² to 0.59 mm² (p<0.01). The "added" calcium strongly predicted a proportional increase in necrosis-coded tissue in the areas surrounding the calcium-like spots (model R²=0.70; p<0.001). Conclusions: Artificial addition of calcium-like elements to the atherosclerotic plaque led to an increase in necrotic tissue in virtual histology that is probably artefactual. The overestimation of necrotic tissue by calcium strictly followed a linear pattern, indicating that it may be amenable to mathematical correction.
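Since the abstract reports a linear relationship (R²=0.70) between added calcium and necrosis-coded area, the mathematical correction it hints at could take a simple linear form. The sketch below fits such a model on hypothetical paired measurements; the data and variable names are illustrative, not taken from the study:

```python
import numpy as np

# Hypothetical paired measurements (mm²): change in calcium-coded area
# after stenting, and the corresponding change in necrosis-coded area.
delta_calcium = np.array([0.2, 0.5, 0.7, 0.9, 1.2])
delta_necrosis = np.array([0.1, 0.3, 0.4, 0.55, 0.7])

# Least-squares fit of the linear artefact model:
#   delta_necrosis ≈ slope * delta_calcium + intercept
slope, intercept = np.polyfit(delta_calcium, delta_necrosis, 1)

def corrected_necrosis(measured_necrosis_mm2, calcium_mm2):
    """Subtract the estimated calcium-induced, artefactual necrosis."""
    return measured_necrosis_mm2 - (slope * calcium_mm2 + intercept)

print(f"slope={slope:.2f}, intercept={intercept:.2f}")
print(corrected_necrosis(0.59, 0.76))
```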
Abstract:
The furious pace of Moore's Law is driving computer architecture into a realm where the speed of light is the dominant factor in system latencies. The number of clock cycles needed to span a chip is increasing, while the number of bits that can be accessed within a clock cycle is decreasing. Hence, it is becoming more difficult to hide latency. One alternative is to reduce latency by migrating threads and data, but the overhead of existing implementations has so far made migration an impractical solution. I present an architecture, implementation, and mechanisms that reduce the overhead of migration to the point where migration is a viable supplement to other latency-hiding mechanisms, such as multithreading. The architecture is abstract, and presents programmers with a simple, uniform, fine-grained multithreaded parallel programming model with implicit memory management. In other words, the spatial nature and implementation details (such as the number of processors) of a parallel machine are entirely hidden from the programmer. Compiler writers are encouraged to devise programming languages for the machine that guide a programmer to express their ideas in terms of objects, since objects exhibit an inherent physical locality of data and code. The machine implementation can then leverage this locality to automatically distribute data and threads across the physical machine by using a set of high-performance migration mechanisms. An implementation of this architecture could migrate a null thread in 66 cycles -- over a factor of 1000 improvement over previous work. Performance also scales well; the time required to move a typical thread is only 4 to 5 times that of a null thread. Data migration performance is similar, and scales linearly with data block size. Since the performance of the migration mechanism is on par with that of an L2 cache, the implementation simulated in my work has no data caches and relies instead on multithreading and the migration mechanism to hide and reduce access latencies.
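The reported figures imply a simple first-order cost model for migration latency. The sketch below encodes it; only the 66-cycle null-thread cost and the 4-5x typical-thread ratio come from the abstract, while the per-word data cost is an assumption chosen purely for illustration:

```python
# First-order migration cost model built from the numbers in the abstract.
NULL_THREAD_CYCLES = 66        # measured cost of migrating a null thread
TYPICAL_THREAD_FACTOR = 4.5    # a typical thread costs 4-5x a null thread

# Assumed, illustrative per-word cost for data migration; the abstract
# only states that data migration scales linearly with block size.
CYCLES_PER_WORD = 2

def thread_migration_cycles(typical=True):
    factor = TYPICAL_THREAD_FACTOR if typical else 1.0
    return NULL_THREAD_CYCLES * factor

def data_migration_cycles(block_words):
    # Linear in block size, with the null-thread cost as fixed overhead.
    return NULL_THREAD_CYCLES + CYCLES_PER_WORD * block_words

print(thread_migration_cycles())   # ~297 cycles for a typical thread
print(data_migration_cycles(64))   # cost of moving a 64-word block
```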
Abstract:
The Java language first came to public attention in 1995. Within a year, it was being speculated that Java might be a good language for parallel and distributed computing. Its core features, including being object oriented and platform independent, as well as having built-in network support and threads, have encouraged this view. Today, Java is being used in almost every type of computer-based system, ranging from sensor networks to high-performance computing platforms, and from enterprise applications through to complex research-based simulations. In this paper the key features that make Java a good language for parallel and distributed computing are first discussed. Two Java-based middleware systems, namely MPJ Express, an MPI-like Java messaging system, and Tycho, a wide-area asynchronous messaging framework with an integrated virtual registry, are then discussed. The paper concludes by highlighting the advantages of using Java as middleware to support distributed applications.
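MPJ Express exposes an MPI-style, rank-based send/receive API in Java. Purely as an analogue, and keeping to one language for the sketches in this listing, the same pattern looks like this in Python with mpi4py (this is not the MPJ Express API itself):

```python
from mpi4py import MPI  # run with e.g.: mpiexec -n 2 python demo.py

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    # Rank 0 sends a message, mirroring the MPI-like messaging style
    # that MPJ Express provides for Java programs.
    comm.send({"greeting": "hello from rank 0"}, dest=1, tag=11)
elif rank == 1:
    msg = comm.recv(source=0, tag=11)
    print(msg)
```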
Abstract:
Analogue computers provide actual rather than virtual representations of model systems. They are powerful and engaging computing machines that are cheap and simple to build. This two-part Retronics article helps you build (and understand!) your own analogue computer to simulate the Lorenz butterfly that has become iconic for chaos theory.
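For reference, the Lorenz system such an analogue machine patches together is the ODE set dx/dt = σ(y−x), dy/dt = x(ρ−z)−y, dz/dt = xy−βz. A digital cross-check with the classic chaotic parameter values (σ=10, ρ=28, β=8/3) can be sketched as:

```python
# Euler integration of the Lorenz system, usable as a digital cross-check
# for the analogue computer's output. Classic chaotic parameter values.
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

def lorenz_step(x, y, z, dt=0.001):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

x, y, z = 1.0, 1.0, 1.0   # arbitrary initial condition
trajectory = []
for _ in range(50_000):
    x, y, z = lorenz_step(x, y, z)
    trajectory.append((x, y, z))
print(trajectory[-1])      # a point on the butterfly attractor
```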
Abstract:
We argue that in order to exploit both independent and-parallelism and or-parallelism in Prolog programs there is advantage in recomputing some of the independent goals, as opposed to reusing all their solutions. We present an abstract model, called the Composition-Tree, for representing and-or parallelism in Prolog programs. The Composition-Tree closely mirrors sequential Prolog execution by recomputing some independent goals rather than fully reusing them. We also outline two environment representation techniques for and-or parallel execution of full Prolog based on the Composition-Tree model abstraction. We argue that these techniques have advantages over earlier proposals for exploiting and-or parallelism in Prolog.
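To make the recomputation-versus-reuse trade-off concrete (a toy model, not the paper's machinery): given two independent goals, reuse materialises each goal's solution list once and combines them as a cross-product, while recomputation re-runs the second goal for every solution of the first, as a sequential Prolog engine would:

```python
from itertools import product

# Toy stand-ins for two independent Prolog goals: each "goal" is a
# generator of its solutions.
def goal_p():
    yield from ("p1", "p2")

def goal_q():
    yield from ("q1", "q2", "q3")

# Reuse: compute each solution set once, then combine (cross-product).
reused = list(product(list(goal_p()), list(goal_q())))

# Recomputation: re-run goal_q for every solution of goal_p, mirroring
# sequential Prolog backtracking order.
recomputed = [(p, q) for p in goal_p() for q in goal_q()]

assert reused == recomputed  # same answers; different execution strategy
```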
Abstract:
The term "Logic Programming" refers to a variety of computer languages and execution models which are based on the traditional concept of Symbolic Logic. The expressive power of these languages offers promise to be of great assistance in facing the programming challenges of present and future symbolic processing applications in Artificial Intelligence, Knowledge-based systems, and many other areas of computing. The sequential execution speed of logic programs has been greatly improved since the advent of the first interpreters. However, higher inference speeds are still required in order to meet the demands of applications such as those contemplated for next generation computer systems. The execution of logic programs in parallel is currently considered a promising strategy for attaining such inference speeds. Logic Programming in turn appears as a suitable programming paradigm for parallel architectures because of the many opportunities for parallel execution present in the implementation of logic programs. This dissertation presents an efficient parallel execution model for logic programs. The model is described from the source language level down to an "Abstract Machine" level suitable for direct implementation on existing parallel systems or for the design of special purpose parallel architectures. Few assumptions are made at the source language level and therefore the techniques developed and the general Abstract Machine design are applicable to a variety of logic (and also functional) languages. These techniques offer efficient solutions to several areas of parallel Logic Programming implementation previously considered problematic or a source of considerable overhead, such as the detection and handling of variable binding conflicts in AND-Parallelism, the specification of control and management of the execution tree, the treatment of distributed backtracking, and goal scheduling and memory management issues, etc. A parallel Abstract Machine design is offered, specifying data areas, operation, and a suitable instruction set. This design is based on extending to a parallel environment the techniques introduced by the Warren Abstract Machine, which have already made very fast and space efficient sequential systems a reality. Therefore, the model herein presented is capable of retaining sequential execution speed similar to that of high performance sequential systems, while extracting additional gains in speed by efficiently implementing parallel execution. These claims are supported by simulations of the Abstract Machine on sample programs.
Abstract:
Dissertation presented in fulfilment of the requirements for the degree of Doctor in Informatics at Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.
Abstract:
Empowered by virtualisation technology, cloud infrastructures enable the construction of flexible and elastic computing environments, providing an opportunity for energy and resource cost optimisation while enhancing system availability and achieving high performance. A crucial requirement for effective consolidation is the ability to efficiently utilise system resources for high-availability computing and energy-efficiency optimisation to reduce operational costs and carbon footprints in the environment. Additionally, failures in highly networked computing systems can negatively impact system performance substantially, prohibiting the system from achieving its initial objectives. In this paper, we propose algorithms to dynamically construct and readjust virtual clusters to enable the execution of users' jobs. Allied with an energy optimising mechanism to detect and mitigate energy inefficiencies, our decision-making algorithms leverage virtualisation tools to provide proactive fault-tolerance and energy-efficiency to virtual clusters. We conducted simulations by injecting random synthetic jobs and jobs using the latest version of the Google cloud tracelogs. The results indicate that our strategy improves the work per Joule ratio by approximately 12.9% and the working efficiency by almost 15.9% compared with other state-of-the-art algorithms.
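The headline metric, work per Joule, is simply completed useful work divided by the energy drawn. A minimal accounting sketch (the job and host fields are hypothetical, not the paper's simulator) could be:

```python
# Minimal work-per-Joule accounting for a simulated virtual cluster.
# Job and host attributes are illustrative assumptions.
jobs = [
    {"cpu_seconds": 1200.0, "completed": True},
    {"cpu_seconds": 800.0,  "completed": True},
    {"cpu_seconds": 500.0,  "completed": False},  # failed jobs add no work
]
hosts = [
    {"avg_power_watts": 180.0, "active_seconds": 1500.0},
    {"avg_power_watts": 150.0, "active_seconds": 1100.0},
]

useful_work = sum(j["cpu_seconds"] for j in jobs if j["completed"])
energy_joules = sum(h["avg_power_watts"] * h["active_seconds"] for h in hosts)

print(f"work per Joule: {useful_work / energy_joules:.4f}")
```

A consolidation strategy like the one proposed raises this ratio by packing work onto fewer, better-utilised hosts, shrinking the energy denominator for the same useful work.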
Abstract:
Dissertation presented in fulfilment of the requirements for the degree of Master in Informatics Engineering.
Abstract:
Purpose: The purpose of this study was to compare plaque morphology between coronary and peripheral arteries using intravascular ultrasound (IVUS). Methods: IVUS was performed in 68 patients with coronary and 93 with peripheral artery lesions (29 carotid, 50 renal, and 14 iliac). Plaques were classified as fibroatheroma (VH-FA) (further subclassified as thin-capped [VH-TCFA] and thick-capped [VH-ThCFA]), fibrocalcific plaque (VH-FC), and pathological intimal thickening (VH-PIT). Results: Plaque rupture (13% of coronary, 7% of carotid, 6% of renal, and 7% of iliac arteries; P=NS) and VH-TCFA (37% of coronary, 24% of carotid, 16% of renal, and 7% of iliac arteries; P=0.02) were observed in all arteries. Compared to coronary arteries, VH-FA was less frequently observed in renal (P<0.001) and iliac arteries (P<0.006), while VH-PIT and VH-FC were prevalent in both of these peripheral arteries. Lesions with positive remodeling demonstrated more characteristics of VH-FA in coronary, carotid, and renal arteries compared to those with intermediate/negative remodeling (all P<0.01). There was a positive relationship between the remodeling index (RI) and percent necrotic core area in all four arteries. Conclusions: Atherosclerotic plaque phenotypes were heterogeneous among the four different arteries. In contrast, the associations of remodeling mode with plaque phenotype and composition were similar among the various arterial beds.
Abstract:
For the execution of scientific applications, different methods have been proposed to dynamically provide execution environments that hide the complexity of the underlying distributed and heterogeneous infrastructures. Recently, virtualization has emerged as a promising technology for providing such environments. Virtualization abstracts away the details of physical hardware and provides virtualized resources for high-level scientific applications. It offers a cost-effective and flexible way to use and manage computing resources, an abstraction that is appealing in Grid computing and Cloud computing for better matching jobs (applications) to computational resources. This work applies the virtualization concept to the Condor dynamic resource management system, using the Condor Virtual Universe to harvest existing virtual computing resources to their maximum utility. It allows existing computing resources to be dynamically provisioned at run-time by users based on application requirements, instead of statically at design-time, thereby laying the basis for efficient use of the available resources.
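HTCondor's VM universe (what the abstract calls the Condor Virtual Universe) is driven by a submit description handed to the scheduler. A sketch of preparing and submitting one from Python follows; the image name, memory size, and disk specification are illustrative assumptions, not values from this work:

```python
import subprocess
import textwrap

# Sketch: drive HTCondor's VM universe from Python. The submit keywords
# follow HTCondor's vm universe; the concrete values are hypothetical.
submit_description = textwrap.dedent("""\
    universe   = vm
    vm_type    = kvm
    vm_memory  = 1024
    vm_disk    = scientific-app.qcow2:vda:w
    executable = scientific-app-vm
    log        = vm_job.log
    queue
""")

with open("vm_job.submit", "w") as f:
    f.write(submit_description)

# condor_submit is HTCondor's standard submission CLI.
subprocess.run(["condor_submit", "vm_job.submit"], check=True)
```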