Abstract:
Multiprocessing is a promising solution to meet the requirements of near-future applications. To get full benefit from parallel processing, a many-core system needs an efficient on-chip communication architecture. Network-on-Chip (NoC) is a general-purpose communication concept that offers high throughput and reduced power consumption, and keeps complexity in check through a regular composition of basic building blocks. This thesis presents power-efficient communication approaches for networked many-core systems. We address a range of issues important for designing power-efficient many-core systems at two different levels: the network level and the router level. From the network-level point of view, exploiting state-of-the-art concepts such as Globally Asynchronous Locally Synchronous (GALS), Voltage/Frequency Island (VFI), and 3D Networks-on-Chip approaches may be a solution to the excessive power consumption of today's and future many-core systems. To this end, a low-cost 3D NoC architecture, based on high-speed GALS-based vertical channels, is proposed to mitigate the high peak temperatures, power densities, and area footprints of vertical interconnects in 3D ICs. To further exploit the beneficial feature of a negligible inter-layer distance in 3D ICs, we propose a novel hybridization scheme for inter-layer communication. In addition, an efficient adaptive routing algorithm is presented which enables congestion-aware and reliable communication for the hybridized NoC architecture. An integrated monitoring and management platform on top of this architecture is also developed in order to implement more scalable power optimization techniques. From the router-level perspective, four design styles for implementing power-efficient reconfigurable interfaces in VFI-based NoC systems are proposed. To enhance the utilization of virtual channel buffers and to manage their power consumption, a partial virtual channel sharing method for NoC routers is devised and implemented. Extensive experiments with synthetic and real benchmarks show significant power savings and mitigated hotspots with similar performance compared to the latest NoC architectures. The thesis concludes that carefully co-designed elements at different network levels enable considerable power savings for many-core systems.
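As one illustration of the partial virtual-channel sharing idea mentioned above, the following minimal Python sketch models an input port whose private virtual channels are backed by a small buffer pool shared between ports; the class and parameter names are hypothetical and do not reproduce the thesis design.

from collections import deque

class SharedVCPool:
    """A small pool of buffers that several input ports may borrow from."""
    def __init__(self, num_shared, depth):
        self.free = [deque(maxlen=depth) for _ in range(num_shared)]

    def acquire(self):
        # Hand out a shared buffer if one is available, else None.
        return self.free.pop() if self.free else None

    def release(self, buf):
        buf.clear()
        self.free.append(buf)

class InputPort:
    """Input port with private VCs; falls back to the shared pool when full."""
    def __init__(self, num_private, depth, pool):
        self.free_private = [deque(maxlen=depth) for _ in range(num_private)]
        self.pool = pool

    def allocate_vc(self):
        if self.free_private:
            return self.free_private.pop()
        return self.pool.acquire()   # may be None if the pool is exhausted

pool = SharedVCPool(num_shared=2, depth=4)
ports = [InputPort(num_private=2, depth=4, pool=pool) for _ in range(5)]
vc = ports[0].allocate_vc()          # a private buffer, a shared one, or None

The intent of such partial sharing is that lightly loaded ports do not monopolize buffers while heavily loaded ports can temporarily claim extra capacity.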
Abstract:
OBJECTIVE: To determine the degree of underestimation of image-guided core biopsy of non-palpable breast lesions subsequently submitted to surgical excision. METHODS: We retrospectively reviewed 352 cases with core-needle biopsies that underwent surgery between February 2000 and December 2005 and whose histopathological reports were recorded in the internal information system. The results were compared with those of surgery, and the underestimation rate was calculated by dividing the number of in situ and/or invasive carcinomas found at surgery by the number of high-risk lesions or in situ carcinomas submitted to surgery. The degree of agreement between the results was assessed by the percentage agreement and by Cohen's kappa coefficient. The association of the studied variables with diagnostic underestimation was verified with Fisher's exact χ² test, ANOVA, and the Mann-Whitney U test. The risk of underestimation was measured by the relative risk with the respective 95% confidence intervals (95%CI). RESULTS: Core biopsy was inconclusive in 15.6% of cases. The histopathological report was benign in 26.4%, suggestive of a high-risk lesion in 12.8%, and malignant in 45.2%. Agreement between core biopsy and surgery was 82.1% (kappa = 0.75). The false-negative rate was 5.4%, and the lesion was completely removed in 3.4%. The underestimation rate was 9.1% and was associated with BI-RADS® category 5 (p = 0.01), microcalcifications (p < 0.001), and stereotaxy (p = 0.002). All underestimated cases had a diameter smaller than 20 mm, and at least five fragments were retrieved in all of them. The underestimation rate was 31.1% for high-risk lesions, 41.2% for atypical ductal hyperplasia, 31.2% for papillary lesions, 16.7% for phyllodes tumor, and 41.9% for ductal carcinoma in situ. CONCLUSIONS: Image-guided core biopsy is a reliable procedure; however, the recommendation of surgical resection of high-risk lesions detected at core-needle biopsy remains, since it was not possible to establish clinical, imaging, procedural, or pathological characteristics that could predict underestimation and avoid surgery. Representative samples of the lesion are more important than the number of fragments.
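For reference, the underestimation rate described in the Methods and the kappa statistic can be written symbolically as follows; the notation is introduced here only for clarity and does not appear in the original report.

\[
\text{underestimation rate} \;=\; \frac{N_{\text{in situ or invasive carcinoma at surgery}}}{N_{\text{high-risk lesion or carcinoma in situ at core biopsy taken to surgery}}},
\qquad
\kappa \;=\; \frac{p_o - p_e}{1 - p_e},
\]

where \(p_o\) is the observed agreement between core biopsy and surgery (82.1% in this series) and \(p_e\) is the agreement expected by chance; the series reports \(\kappa = 0.75\).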
Abstract:
Through advances in technology, System-on-Chip design is moving towards integrating tens to hundreds of intellectual property blocks into a single chip. In such a many-core system, on-chip communication becomes a performance bottleneck for high-performance designs. Network-on-Chip (NoC) has emerged as a viable solution to the communication challenges in highly complex chips. The NoC architecture paradigm, based on a modular packet-switched mechanism, can address many of the on-chip communication challenges such as wiring complexity, communication latency, and bandwidth. Furthermore, the combined benefits of 3D IC and NoC schemes provide the possibility of designing a high-performance system in a limited chip area. The major advantages of 3D NoCs are the considerable reductions in average latency and power consumption. Several factors degrade the performance of NoCs. In this thesis, we investigate three main performance-limiting factors: network congestion, faults, and the lack of efficient multicast support. We address these issues by means of routing algorithms. Congestion of data packets may lead to increased network latency and power consumption. Thus, we propose three different approaches for alleviating such congestion in the network. The first approach is based on measuring the congestion information in different regions of the network, distributing the information over the network, and utilizing this information when making a routing decision. The second approach employs a learning method to dynamically find the less congested routes according to the underlying traffic. The third approach is based on a fuzzy-logic technique to make better routing decisions when traffic information for different routes is available. Faults affect performance significantly, as packets must take longer paths in order to be routed around the faults, which in turn increases congestion around the faulty regions. We propose four methods to tolerate faults at the link and switch level by using only the shortest paths as long as such a path exists. The unique characteristic of these methods is that they tolerate faults while also maintaining the performance of NoCs. To the best of our knowledge, these algorithms are the first approaches to bypass faults before reaching them while avoiding unnecessary misrouting of packets. Current implementations of multicast communication result in a significant performance loss for unicast traffic, because the routing rules of multicast packets limit the adaptivity of unicast packets. We present an approach in which both unicast and multicast packets can be efficiently routed within the network. While providing more efficient multicast support, the proposed approach does not affect the performance of unicast routing at all. In addition, in order to reduce the overall path length of multicast packets, we present several partitioning methods along with their analytical models for latency measurement. This approach is discussed in the context of 3D mesh networks.
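To make the general idea of congestion-aware adaptive routing concrete, the following minimal Python sketch picks, among the minimal-path output directions of a 2D mesh router, the one with the lowest congestion estimate. It is illustrative only and is not the regional, learning-based, or fuzzy-logic scheme proposed in the thesis; all names are hypothetical.

def route(current, destination, congestion):
    """current/destination: (x, y) coordinates; congestion: {direction: load estimate}."""
    cx, cy = current
    dx, dy = destination
    candidates = []                       # minimal-path output directions
    if dx > cx: candidates.append("EAST")
    if dx < cx: candidates.append("WEST")
    if dy > cy: candidates.append("NORTH")
    if dy < cy: candidates.append("SOUTH")
    if not candidates:
        return "LOCAL"                    # packet has arrived
    # Adaptivity: among the minimal directions, prefer the least congested one.
    return min(candidates, key=lambda d: congestion.get(d, 0.0))

# Example: congestion estimates gathered by the router's monitoring logic.
print(route((1, 1), (3, 2), {"EAST": 0.7, "NORTH": 0.2}))   # -> "NORTH"

Restricting the choice to minimal directions keeps routes shortest-path while the congestion estimates steer traffic away from loaded regions.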
Abstract:
An experimental apparatus for the study of core annular flows of heavy oil and water at room temperature has been set up and tested at laboratory scale. The test section consists of a 2.75 cm ID galvanized steel pipe. Tap water and a heavy oil (17.6 Pa.s; 963 kg/m³) were used. Pressure drop in a vertical upward test section was accurately measured for oil flow rates in the range 0.297 - 1.045 l/s and water flow rates ranging from 0.063 to 0.315 l/s. The oil-water input ratio was in the range 1-14. The measured pressure drop comprises gravitational and frictional parts. The gravitational pressure drop was expressed in terms of the volumetric fraction of the core, which was determined from a correlation developed by Bannwart (1998b). The existence of an optimum water-oil input ratio for each oil flow rate was observed in the range 0.07 - 0.5. The frictional pressure drop was modeled to account for both hydrodynamic and net buoyancy effects on the core. The model was adjusted to fit our data and shows excellent agreement with data from another source (Bai, 1995).
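The decomposition of the measured pressure gradient described above can be summarized as follows; the exact closure used for the frictional term and the holdup correlation of Bannwart (1998b) are not reproduced here, so the expressions below are only a sketch of the balance for vertical upward flow.

\[
-\frac{dP}{dz} \;=\; \left(\frac{dP}{dz}\right)_{\!\text{grav}} + \left(\frac{dP}{dz}\right)_{\!\text{fric}},
\qquad
\left(\frac{dP}{dz}\right)_{\!\text{grav}} \;=\; \bigl[\varepsilon_o \rho_o + (1-\varepsilon_o)\rho_w\bigr]\, g,
\]

where \(\varepsilon_o\) is the in-situ volumetric fraction of the oil core (obtained from the correlation of Bannwart, 1998b), \(\rho_o\) and \(\rho_w\) are the oil and water densities, and \(g\) is the gravitational acceleration.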
Abstract:
The superconducting gap is a basic characteristic of a superconductor. While the cuprates and conventional phonon-mediated superconductors are characterized by distinct d- and s-wave pairing symmetries with nodal and nodeless gap distributions, respectively, the superconducting gap distributions in iron-based superconductors are rather diverse. While nodeless gap distributions have been directly observed in Ba1–xKxFe2As2, BaFe2–xCoxAs2, LiFeAs, KxFe2–ySe2, and FeTe1–xSex, signatures of a nodal superconducting gap have been reported in LaOFeP, LiFeP, FeSe, KFe2As2, BaFe2–xRuxAs2, and BaFe2(As1–xPx)2. Due to the multiplicity of the Fermi surface in these compounds, s± and d pairing states can be either nodeless or nodal. A nontrivial orbital structure of the order parameter, in particular the presence of gap nodes, leads to disorder effects that are much richer in dx2–y2-wave superconductors than in conventional materials. In contrast to the s-wave case, the Anderson theorem does not apply, and nonmagnetic impurities exhibit a strong pair-breaking influence. In addition, a finite concentration of disorder produces a nonzero density of quasiparticle states at zero energy, which results in a considerable modification of the thermodynamic and transport properties at low temperatures. The influence of order-parameter symmetry on the vortex core structure in iron-based pnictide and chalcogenide superconductors has been investigated in the framework of the quasiclassical Eilenberger equations. The main results of the thesis are as follows. The vortex core characteristics, such as the cutoff parameter ξh and the core size ξ2, defined as the distance at which the density of the vortex supercurrent reaches its maximum, are calculated over wide ranges of temperature, impurity scattering rate, and magnetic field. The cutoff parameter ξh(B; T; Г) determines the form factor of the flux-line lattice, which can be obtained in μSR, NMR, and SANS experiments. A comparison among the considered pairing symmetries is made. In contrast to s-wave systems, in dx2–y2-wave superconductors ξh/ξc2 always increases with the scattering rate Г. The field dependence of the cutoff parameter strongly affects the second moment of the magnetic field distribution, resulting in a significant difference from the nonlocal London theory. It is found that the normalized ξ2/ξc2(B/Bc2) dependence increases with pair-breaking impurity scattering (interband scattering for s±-wave and intraband impurity scattering for d-wave superconductors). Here, ξc2 is the Ginzburg-Landau coherence length determined from the upper critical field Bc2 = Φ0/(2πξc2²), where Φ0 is the flux quantum. Two types of ξ2/ξc2 magnetic field dependences are obtained for s± superconductors. The first has a minimum at low temperatures and weak impurity scattering, transforming into a monotonically decreasing function at strong scattering and high temperatures. The second kind of dependence has also been found for d-wave superconductors at intermediate and high temperatures. In contrast, impurity scattering results in a decreasing ξ2/ξc2(B/Bc2) dependence in s++ superconductors. A reasonable agreement between the calculated ξh/ξc2 values and those obtained experimentally for nonstoichiometric BaFe2–xCoxAs2 (μSR) and stoichiometric LiFeAs (SANS) was found. The values of ξh/ξc2 are much less than one for the first compound and much greater than one for the second. This is explained by the different influence of two factors: the impurity scattering rate and the pairing symmetry.
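For orientation, the role of the cutoff parameter mentioned above is commonly expressed through the modified London form factor of the flux-line lattice; the parametrization below is quoted as standard background and is an assumption about notation rather than a formula taken from the thesis.

\[
F(\mathbf{q}) \;\simeq\; \bar{B}\,\frac{\exp\!\bigl(-q^{2}\xi_h^{2}/2\bigr)}{1+q^{2}\lambda^{2}},
\qquad
B_{c2} \;=\; \frac{\Phi_0}{2\pi\xi_{c2}^{2}},
\]

where \(\mathbf{q}\) runs over the reciprocal-lattice vectors of the vortex lattice, \(\bar{B}\) is the average induction, \(\lambda\) the penetration depth, \(\xi_h\) the cutoff parameter, and \(\xi_{c2}\) the Ginzburg-Landau coherence length extracted from the upper critical field.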
Abstract:
Instrumentation: voice (tenor), orchestra.
Abstract:
Advancements in IC processing technology have driven the innovation and growth in the consumer electronics sector and the evolution of the IT infrastructure supporting this exponential growth. One of the most difficult obstacles to this growth is the removal of the large amount of heat generated by the processing and communicating nodes of the system. The scaling down of technology and the increase in power density have a direct and consequential effect on the rise in temperature. This has resulted in increased cooling budgets and affects both the lifetime reliability and the performance of the system. Hence, reducing on-chip temperatures has become a major design concern for modern microprocessors. This dissertation addresses the thermal challenges at different levels for both 2D planar and 3D stacked systems. It proposes a self-timed thermal monitoring strategy based on the liberal use of on-chip thermal sensors, which makes use of noise-variation-tolerant, leakage-current-based thermal sensing for monitoring purposes. In order to study thermal management issues from the early design stages, accurate thermal modeling and analysis at design time is essential. In this regard, the spatial temperature profile of global Cu nanowires for on-chip interconnects has been analyzed. The dissertation presents a 3D thermal model of a multicore system in order to investigate the effects of hotspots and of the placement of silicon die layers on the thermal performance of a modern flip-chip package. For a 3D stacked system, the primary design goal is to maximise performance within the given power and thermal envelopes. Hence, a thermally efficient routing strategy for 3D NoC-Bus hybrid architectures has been proposed to mitigate on-chip temperatures by herding most of the switching activity to the die closest to the heat sink. Finally, an exploration of various thermal-aware placement approaches for both 2D and 3D stacked systems is presented. Various thermal models have been developed and thermal control metrics have been extracted. An efficient thermal-aware application mapping algorithm for a 2D NoC is presented. It is shown that the proposed mapping algorithm reduces the effective chip area subjected to high temperatures when compared to the state of the art.
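As a minimal sketch of what thermal-aware application mapping means in practice, the greedy Python heuristic below places the most communication- and switching-intensive tasks on the coolest available cores. The names and the cost model are hypothetical; this is not the algorithm proposed in the dissertation.

def thermal_aware_map(task_loads, core_temps):
    """task_loads: {task: estimated switching activity};
    core_temps: {core: steady-state temperature estimate, e.g. from a compact
    thermal model such as HotSpot}. Returns a task -> core mapping."""
    tasks = sorted(task_loads, key=task_loads.get, reverse=True)   # heaviest tasks first
    cores = sorted(core_temps, key=core_temps.get)                 # coolest cores first
    return dict(zip(tasks, cores))

# Example: four tasks onto a 2x2 mesh whose corner core sits nearest the heat sink.
mapping = thermal_aware_map(
    {"fft": 0.9, "viterbi": 0.6, "dma": 0.3, "ui": 0.1},
    {(0, 0): 48.0, (0, 1): 52.0, (1, 0): 55.0, (1, 1): 60.0},
)
print(mapping)   # the heaviest task "fft" lands on the coolest core (0, 0)

A real mapper would also account for inter-task communication distances and the thermal coupling between neighbouring cores rather than treating cores independently.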
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Loose map
Abstract:
We evaluated the accuracy of a second-generation ELISA to detect Helicobacter pylori infection in adults from a developing country in view of the variations in sensitivity and specificity reported for different populations. We studied 97 non-consecutive patients who underwent endoscopy for evaluation of dyspeptic symptoms. The presence of H. pylori was determined in antral biopsy specimens by culture, by the preformed urease test, and in carbolfuchsin-stained smears. Patients were considered H. pylori positive if at least two of the three tests gave a positive result or if the culture was positive, and negative if all three tests were negative. Sixty-five adults (31 with peptic ulcer) were H. pylori positive and 32 adults were H. pylori negative. Antibodies were detected by the Cobas Core anti-H. pylori EIA in 62 of the 65 H. pylori-positive adults and in none of the negative adults. The sensitivity, specificity and positive and negative predictive values of the test were 95.4, 100, 100 and 91.4%, respectively. The Cobas Core anti-H. pylori EIA showed high sensitivity and specificity when applied to a population in Brazil, permitting the use of the test both to confirm the clinical diagnosis and to perform epidemiologic surveys.
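The reported accuracy figures follow directly from the standard definitions; the counts below are those given in the abstract (antibodies detected in 62 of 65 H. pylori-positive adults and in 0 of 32 H. pylori-negative adults).

\[
\text{sensitivity} = \frac{TP}{TP+FN} = \frac{62}{65} \approx 95.4\%,
\qquad
\text{specificity} = \frac{TN}{TN+FP} = \frac{32}{32} = 100\%,
\]
\[
\text{PPV} = \frac{TP}{TP+FP} = \frac{62}{62} = 100\%,
\qquad
\text{NPV} = \frac{TN}{TN+FN} = \frac{32}{35} \approx 91.4\%.
\]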
Abstract:
The map belongs to the A. E. Nordenskiöld collection
Abstract:
Due to various advantages such as flexibility, scalability and updatability, software-intensive systems are increasingly embedded in everyday life. The constantly growing number of functions executed by these systems requires a high level of performance from the underlying platform. The main approach to increasing performance has been to raise the operating frequency of the chip. However, this has led to the problem of power dissipation, which has shifted the focus of research to parallel and distributed computing. Parallel many-core platforms can provide the required level of computational power along with low power consumption. On the one hand, this enables parallel execution of highly intensive applications. With their computational power, these platforms are likely to be used in various application domains: from home-use electronics (e.g., video processing) to complex critical control systems. On the other hand, the utilization of the resources has to be efficient in terms of performance and power consumption. However, the high level of on-chip integration results in an increased probability of various faults and in the creation of hotspots, leading to thermal problems. Additionally, radiation, which is frequent in space but is also becoming an issue at ground level, can cause transient faults. This can eventually induce faulty execution of applications. Therefore, it is crucial to develop methods that enable efficient as well as resilient execution of applications. The main objective of the thesis is to propose an approach to designing agent-based systems for many-core platforms in a rigorous manner. When designing such a system, we explore and integrate various dynamic reconfiguration mechanisms into the agents' functionality. The use of these mechanisms enhances the resilience of the underlying platform whilst maintaining performance at an acceptable level. The design of the system proceeds according to a formal refinement approach, which allows us to ensure correct behaviour of the system with respect to postulated properties. To enable analysis of the proposed system in terms of area overhead as well as performance, we explore an approach where the developed rigorous models are transformed into a high-level implementation language. Specifically, we investigate methods for deriving fault-free implementations from these models in, e.g., a hardware description language, namely VHDL.