46 results for Core Domain
Abstract:
A small carbonatite dyke swarm has been identified at Naantali, southwest Finland. Several swarms of shoshonitic lamprophyres are also known along the Archean-Proterozoic boundary in eastern Finland and northwest Russia. These intrusions, along with the carbonatite intrusion at Halpanen, eastern Finland, represent a stage of widespread low-volume mantle-sourced alkaline magmatism in the Svecofennian Domain. Using trace element and isotope geochemistry coupled with precise geochronology from these rocks, a model is presented for the Proterozoic metasomatic evolution of the Fennoscandian subcontinental lithospheric mantle. At ~2.2-2.06 Ga, increased biological production in shallow seas, linked to continental rifting, resulted in increased burial rates of organic carbon. Subduction between ~1.93-1.88 Ga returned organic carbon-enriched sediments of mixed Archean and Proterozoic provenance to the mantle. Dehydration reactions supplied water to the mantle wedge, driving arc volcanism, while mica, amphibole and carbonate were brought deeper into the mantle with the subducting slab. The cold subducted slab was heated conductively by the surrounding warm mantle, while pressures continued to increase gradually as a result of crustal thickening. The sediments began to melt in a two-stage process, first producing a hydrous alkaline silicate melt, which infiltrated the mantle wedge and crystallised as metasomatic veins. At higher temperatures, carbonatite melt was produced, which preferentially infiltrated the pre-existing metasomatic vein network. At the onset of post-collisional extension, deep fault structures formed, providing conduits for mantle melts to reach the upper crust. Low-volume partial melting of the enriched mantle at depths of at least 110 km led to the formation first of carbonatitic magma and subsequently of lamprophyric magma. Carbonatite was emplaced in the upper crust at Naantali at 1795.7 ± 6.8 Ma; lamprophyres along the Archean-Proterozoic boundary were emplaced between 1790.1 ± 3.3 Ma and 1781 ± 20 Ma.
Abstract:
The aim of the study was to create an easily upgradable product costing model for laser-welded hollow core steel panels to support pricing decisions. The theory section includes a literature review identifying the traditional and modern cost accounting methodologies used by manufacturing companies. It also presents the basics of steel panel structures, their manufacturing methods and their manufacturing costs based on previous research. Activity-based costing turned out to be the most appropriate methodology for the costing model because of the wide product variation. Activity analysis and the determination of cost drivers, based on observations and interviews, were the key steps in the creation of the model. The created model was used to test how panel parameters affect the costs caused by the main manufacturing stages and materials. By comparing cost structures, it was possible to find the panel types that are the most and the least economical to manufacture. A sensitivity analysis showed that the model gives sufficiently reliable cost information to support pricing decisions. More reliable cost information could be achieved by determining the cost drivers more accurately. Alternative methods for manufacturing the cores were also compared using the model. The comparison showed that roll forming can be more advantageous and flexible than press brake bending. However, further investigation showed that roll forming is possible only when the cores are designed to be manufactured by roll forming. Consequently, when new panels are designed, consideration should be given to the possibility of using roll forming.
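As a purely illustrative sketch of the activity-based costing logic summarized above (all activities, cost drivers and rates below are hypothetical, not figures from the thesis), the core calculation might look like this:

```python
# Hypothetical activity-based costing sketch for a welded hollow core panel:
# each activity has a rate per cost-driver unit, and a panel consumes a
# certain quantity of each driver. All numbers are illustrative only.
activity_rates = {            # EUR per driver unit
    "laser_welding": 1.20,    # per metre of weld
    "core_forming": 0.35,     # per metre of core profile
    "assembly": 25.0,         # per panel
}

def panel_cost(driver_quantities, material_cost):
    """driver_quantities: {activity: consumed driver units}."""
    overhead = sum(activity_rates[a] * q for a, q in driver_quantities.items())
    return material_cost + overhead

print(panel_cost({"laser_welding": 48, "core_forming": 60, "assembly": 1},
                 material_cost=310.0))   # -> 413.6
```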
Abstract:
Lipopolysaccharide (LPS), present on the outer leaflet of Gram-negative bacteria, is important for the adaptation of the bacteria to the environment. Structurally, LPS can be divided into three parts: lipid A, core and O-polysaccharide (OPS). OPS is the outermost and also the most diverse moiety. When OPS is composed of identical sugar residues it is called homopolymeric, and when it is composed of repeating units of oligosaccharides it is called heteropolymeric. Bacteria synthesize LPS at the inner membrane via two separate pathways, lipid A-core via one and OPS via the other. These are ligated together in the periplasmic space and the completed LPS molecule is translocated to the surface of the bacteria. The genes directing OPS biosynthesis are often clustered, and the clusters directing the biosynthesis of heteropolymeric OPS often contain genes for i) the biosynthesis of the required NDP-sugar precursors, ii) the glycosyltransferases needed to build up the repeating unit, iii) translocation of the completed O-unit to the periplasmic side of the inner membrane (flippase) and iv) polymerization of the repeating units to complete the OPS. The aim of this thesis was to characterize the biosynthesis of the outer core (OC) of Yersinia enterocolitica serotype O:3 (YeO3). Y. enterocolitica is a member of the Gram-negative Yersinia genus and it causes diarrhea sometimes followed by reactive arthritis. The chemical structure of the OC and the nucleotide sequence of the gene cluster directing its biosynthesis were already known; however, no experimental evidence had been provided for the predicted functions of the gene products. The hypothesis was that OC biosynthesis would follow the pathway described for heteropolymeric OPS, i.e. the Wzy-dependent pathway. In this work the biochemical activities of two enzymes involved in NDP-sugar biosynthesis were established. Gne was determined to be a UDP-N-acetylglucosamine 4-epimerase catalyzing the conversion of UDP-GlcNAc to UDP-GalNAc, and WbcP was shown to be a UDP-GlcNAc 4,6-dehydratase catalyzing the reaction that converts UDP-GlcNAc to the rare UDP-2-acetamido-2,6-dideoxy-D-xylo-hex-4-ulopyranose (UDP-Sugp). The linkage specificities and the order in which the different glycosyltransferases build up the OC onto the lipid carrier were also investigated. In addition, using a site-directed mutagenesis approach, the catalytically important amino acids of Gne and of two of the characterized glycosyltransferases were identified. Evidence was also provided for the enzymes involved in the ligation of the OC and the OPS to the lipid A inner core. The importance of the OC to the physiology of Y. enterocolitica O:3 was defined by determining the minimum requirements for the OC to be recognized by a bacteriophage, a bacteriocin and a monoclonal antibody. The biological importance of the rare keto sugar (Sugp) was also shown. In conclusion, this work provides an extensive overview of the biosynthesis of the YeO3 OC, with substantial information on the stepwise and coordinated synthesis of the YeO3 OC hexasaccharide and detailed information on its properties as a receptor.
Abstract:
The background and inspiration for the present study is earlier research on applications of boundary identification in the metal industry. Efficient boundary identification allows smaller safety margins and longer service intervals for the equipment in industrial high-temperature processes, without an increased risk of equipment failure. Ideally, a boundary identification method would be based on monitoring some indirect variable that can be measured routinely or at low cost. One such variable for smelting furnaces is the temperature at different positions in the wall. This can be used as the input to a boundary identification method for monitoring the wall thickness of the furnace. We give the background and motivation for the choice of the geometrically one-dimensional dynamic model for boundary identification, discussed in the later part of the work, over a multidimensional geometric description. In the industrial applications in question, the dynamics and the advantages of a simple model structure are more important than an exact geometric description. Solution methods for the so-called sideways heat equation have much in common with boundary identification. We therefore study properties of the solutions to this equation, the influence of measurement errors and what is usually called contamination by measurement noise, regularization, and more general consequences of the ill-posedness of the sideways heat equation. We study a set of three different methods for boundary identification, of which the first two are developed from a strictly mathematical starting point and the third from a more applied one. The methods have different properties, with specific advantages and disadvantages. The purely mathematically based methods are characterized by good accuracy and low numerical cost, at the price of low flexibility in formulating the partial differential equation describing the model. The third, more applied, method is characterized by poorer accuracy caused by a higher degree of ill-posedness of the more flexible model. For this method an error estimate was also attempted, which was later observed to agree with practical computations using the method. The study can be regarded as a good starting point and mathematical basis for developing industrial applications of boundary identification, especially towards handling nonlinear and discontinuous material properties and sudden changes caused by wall material falling off. With the methods treated, it appears possible to achieve a robust, fast and sufficiently accurate boundary identification method of limited complexity.
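For context, a minimal formulation of the sideways heat equation referred to above might be sketched as follows; the specific symbols (wall domain 0 < x < L, sensor at x = L, unknown boundary at x = 0) are illustrative assumptions rather than the thesis's exact setup:

```latex
% Sketch of the sideways heat conduction problem for a 1-D furnace wall.
% The temperature u(x,t) is measured at an interior sensor position x = L,
% and the conditions at the unknown (eroding) boundary x = 0 are sought.
\begin{aligned}
  &u_t = \alpha\, u_{xx}, && 0 < x < L,\ t > 0,\\
  &u(L,t) = g(t), \quad u_x(L,t) = h(t), && \text{(measured data)}\\
  &\text{recover } u(0,t) \text{ and } u_x(0,t). &&
\end{aligned}
% The problem is ill-posed: high-frequency noise in g(t) and h(t) is
% amplified exponentially when the solution is continued from x = L
% towards x = 0, which is why regularization is needed.
```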
Abstract:
The use of domain-specific languages (DSLs) has been proposed as an approach to cost-effectively develop families of software systems in a restricted application domain. Domain-specific languages, in combination with the accumulated knowledge and experience of previous implementations, can in turn be used to generate new applications with unique sets of requirements. For this reason, DSLs are considered to be an important approach for software reuse. However, the toolset supporting a particular domain-specific language is also domain-specific and is by definition not reusable. Therefore, creating and maintaining a DSL requires additional resources that could be even larger than the savings associated with using them. As a solution, different tool frameworks have been proposed to simplify and reduce the cost of developing DSLs. Developers of tool support for DSLs need to instantiate, customize or configure the framework for a particular DSL. There are different approaches for this. One approach is to use an application programming interface (API) and to extend the basic framework using an imperative programming language. An example of a tool based on this approach is Eclipse GEF. Another approach is to configure the framework using declarative languages that are independent of the underlying framework implementation. We believe this second approach can bring important benefits, as it puts the focus on specifying what the tool should be like instead of writing a program specifying how the tool achieves this functionality. In this thesis we explore this second approach. We use graph transformation as the basic approach to customize a domain-specific modeling (DSM) tool framework. The contributions of this thesis include a comparison of different approaches for defining, representing and interchanging software modeling languages and models, and a tool architecture for an open domain-specific modeling framework that efficiently integrates several model transformation components and visual editors. We also present several specific algorithms and tool components for the DSM framework. These include an approach to graph queries based on region operators and the star operator, and an approach for reconciling models and diagrams after executing model transformation programs. We exemplify our approach with two case studies, MICAS and EFCO, in which we show how our experimental modeling tool framework has been used to define tool environments for domain-specific languages.
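As a hypothetical illustration of the kind of graph query mentioned above (the star operator understood as reachability over a model graph; the data and function names are invented and not taken from the framework):

```python
# Minimal sketch (not the thesis implementation) of a graph query in the
# spirit described above: a "star" operator computes the set of nodes
# reachable via zero or more edges in a model graph given as adjacency sets.
def star(edges, start):
    """Return all nodes reachable from `start` via zero or more edges."""
    seen, stack = {start}, [start]
    while stack:
        node = stack.pop()
        for nxt in edges.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# Hypothetical containment edges of a small DSM model.
containment = {"System": {"BlockA", "BlockB"},
               "BlockA": {"PortA1"},
               "BlockB": {"PortB1", "PortB2"}}
print(star(containment, "System"))   # prints the set of all six reachable elements
```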
Abstract:
The purpose of this thesis is to study organizational core values and their application in practice. With the help of the literature, the thesis discusses the implementation of core values and the benefits that companies can gain by doing it successfully. Ways in which companies can improve the application of their values in everyday work are also presented. The case company’s value implementation is evaluated through a survey conducted among its employees. The true power of values lies in their application, and therefore core values should be the basis for all organizational behavior, integrated into everything a company does. Applying values in practice is an ongoing process, and companies should continuously work towards creating a more value-based organizational culture. If a company does this effectively, it will most likely become more successful with stakeholders as well as financially. Companies looking to turn their values into actions should start with a self-assessment. Employee surveys are effective in assessing the current level of value implementation, since employees have valuable first-hand information about the situations and behaviors they face in their everyday work. After the self-assessment, factors such as management commitment, communication, training and support are key to successful value implementation.
Abstract:
Resonance energy transfer (RET) is a non-radiative transfer of excitation energy from an initially excited luminescent donor to an acceptor. The requirements for resonance energy transfer are: i) spectral overlap between the donor emission spectrum and the acceptor absorption spectrum, ii) close proximity of the donor and the acceptor, and iii) suitable relative orientations of the donor emission and acceptor absorption transition dipoles. As a result of the RET process the donor luminescence intensity and the donor lifetime are decreased. If the acceptor is luminescent, a sensitized acceptor emission appears. The rate of RET depends strongly on the donor–acceptor distance (r) and is inversely proportional to r⁶. The distance dependence of RET is utilized in binding assays. The proximity requirement and the selective detection of the RET-modified emission signal allow homogeneous separation-free assays. The term lanthanide-based RET is used when luminescent lanthanide compounds are used as donors. The long luminescence lifetimes, the large Stokes’ shifts and the intense, sharply spiked emission spectra of lanthanide donors offer advantages over conventional organic donor molecules. Both organic lanthanide chelates and inorganic up-converting phosphor (UCP) particles have been used as donor labels in RET-based binding assays. In the present work lanthanide luminescence and lanthanide-based resonance energy transfer phenomena were studied. Luminescence lifetime measurements had an essential role in the research. Modular frequency-domain and time-domain luminometers were assembled and used successfully in the lifetime measurements. The frequency-domain luminometer operated in the low frequency domain (below 100 kHz) and utilized a novel dual-phase lock-in detection of the luminescence. One of the studied phenomena was the recently discovered non-overlapping fluorescence resonance energy transfer (nFRET). The studied properties were the distance and temperature dependences of nFRET. The distance dependence was found to deviate from the Förster theory, and a clear temperature dependence was observed, whereas conventional RET was completely independent of temperature. Based on the experimental results, two thermally activated mechanisms were proposed for the nFRET process. The work with the UCP particles involved measuring the luminescence properties of UCP particles synthesized in our laboratory. The goal of the UCP particle research is to develop UCP donor labels for binding assays. In the present work the effect of the dopant concentrations and the core–shell structure on the total up-conversion luminescence intensity, the red–green emission ratio, and the luminescence lifetime was studied. The non-radiative nature of the energy transfer from the UCP particle donors to organic acceptors was also demonstrated for the first time in an aqueous environment and with a controlled donor–acceptor distance.
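For reference, the r⁶ distance dependence mentioned above is that of the conventional Förster formalism, which can be stated compactly as follows (τ_D is the donor lifetime and R_0 the Förster radius, the distance at which the transfer efficiency is 50 %):

```latex
% Conventional Förster-type RET: transfer rate and efficiency as functions
% of the donor–acceptor distance r.
k_{\mathrm{RET}}(r) = \frac{1}{\tau_D}\left(\frac{R_0}{r}\right)^{6},
\qquad
E = \frac{R_0^{6}}{R_0^{6} + r^{6}} = \frac{1}{1 + (r/R_0)^{6}}.
```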
Abstract:
This study focuses on the creation of work commitment on the rhetorical level, that is to say, the rhetorical and linguistic means that are used to construct or elicit worker commitment. The commitment of the worker is one of the most important objectives of all business communication. There is a strong demand for commitment, identification, or adherence to work in various walks of life, although the actual circumstances are often somewhat insecure and short-sighted. The analysis demonstrates that the actual object of commitment may vary from work itself or the work organization to one’s career or professional development. The ideal pattern for commitment appears comprehensive: it contains affective and rational as well as ideological dimensions. This thesis is a rhetorical discourse analysis, or rhetorical analysis with discourse-analytic influences. Primarily it is a rhetorical analysis in which discourses are observed mainly as tools of a rhetorician. The study also draws on various findings of the sociology of work and organizational studies. The research material consists of magazines from three companies and web pages from six different companies. This study explores repeated discourses in commitment rhetoric, mainly by pointing out core concepts and recurrent patterns of argumentation. In this analysis, a semantic and concept-analytic approach is also employed. Companies talk about ideas, values, feelings and attitudes, thus constructing a united and unanimous group and an ideal model of commitment. Probably the most important domain of commitment rhetoric is the construction of group and community. Collective identity is constructed through shared meanings, values and goals, and these rhetorical group constructs can be used and modified in various ways. Every now and then business communication also focuses on the individual, employing different speakers, positions and the discourses associated with them. Constructing and using these positions also paints a picture of the ideal worker and ideal work orientation. For example, the so-called entrepreneurship model is frequently used here. Commitment talk and the rhetorical situation it constructs are full of tensions and contradictions; the presence of seemingly contradictory values, goals or identities is constant. This study demonstrates tensions such as self-fulfilment and individuality versus conformity, and constant change and development versus dependable establishment, and analyses how they are used, processed and dealt with. An important dimension in commitment rhetoric is the way companies define themselves with respect to current social issues, how they define themselves as responsible social actors, and how they, in this sense, seek to appear as attractive workplaces. This point of view gives rise to problematic questions as companies process the tensions between, for example, rhetoric and action, or ethical ideals and business conditions. For its part, commitment talk also defines the meaning of waged work in human life. A changing society, changing working life, and changing business environments set new claims and standards for workers and the contents of work. From this point of view, this research contributes to the study of working life and takes part in the current public discussion concerning the meaning, role and future of waged work.
Abstract:
Multiprocessing is a promising solution to meet the requirements of near-future applications. To get the full benefit from parallel processing, a many-core system needs an efficient on-chip communication architecture. Network-on-Chip (NoC) is a general-purpose communication concept that offers high throughput and reduced power consumption, and keeps complexity in check through a regular composition of basic building blocks. This thesis presents power-efficient communication approaches for networked many-core systems. We address a range of issues important for designing power-efficient many-core systems at two different levels: the network level and the router level. From the network-level point of view, exploiting state-of-the-art concepts such as Globally Asynchronous Locally Synchronous (GALS), Voltage/Frequency Island (VFI), and 3D Network-on-Chip approaches may be a solution to the excessive power consumption demanded by today’s and future many-core systems. To this end, a low-cost 3D NoC architecture, based on high-speed GALS-based vertical channels, is proposed to mitigate the high peak temperatures, power densities, and area footprints of vertical interconnects in 3D ICs. To further exploit the beneficial feature of a negligible inter-layer distance in 3D ICs, we propose a novel hybridization scheme for inter-layer communication. In addition, an efficient adaptive routing algorithm is presented which enables congestion-aware and reliable communication for the hybridized NoC architecture. An integrated monitoring and management platform on top of this architecture is also developed in order to implement more scalable power optimization techniques. From the router-level perspective, four design styles for implementing power-efficient reconfigurable interfaces in VFI-based NoC systems are proposed. To enhance the utilization of virtual channel buffers and to manage their power consumption, a partial virtual channel sharing method for NoC routers is devised and implemented. Extensive experiments with synthetic and real benchmarks show significant power savings and mitigated hotspots with similar performance compared to the latest NoC architectures. The thesis concludes that carefully co-designed elements from different network levels enable considerable power savings for many-core systems.
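Purely as an illustrative sketch of the partial virtual-channel sharing idea mentioned above (not the router design of the thesis; all names and buffer sizes are hypothetical):

```python
# Illustrative sketch of partial virtual-channel (VC) sharing: each input
# port owns a few private VC buffers and may borrow from a small shared pool
# when its private VCs are exhausted. Names and sizes are hypothetical.
class VCAllocator:
    def __init__(self, ports, private_per_port=2, shared=4):
        self.private = {p: private_per_port for p in ports}  # free private VCs per port
        self.shared = shared                                  # free shared VCs

    def allocate(self, port):
        """Return 'private', 'shared', or None if no VC is available."""
        if self.private[port] > 0:
            self.private[port] -= 1
            return "private"
        if self.shared > 0:
            self.shared -= 1
            return "shared"
        return None

    def release(self, port, kind):
        if kind == "private":
            self.private[port] += 1
        else:
            self.shared += 1

alloc = VCAllocator(ports=["N", "S", "E", "W", "L"])
print(alloc.allocate("N"), alloc.allocate("N"), alloc.allocate("N"))  # private private shared
```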
Abstract:
Through advances in technology, System-on-Chip design is moving towards integrating tens to hundreds of intellectual property blocks into a single chip. In such a many-core system, on-chip communication becomes a performance bottleneck for high-performance designs. Network-on-Chip (NoC) has emerged as a viable solution to the communication challenges in highly complex chips. The NoC architecture paradigm, based on a modular packet-switched mechanism, can address many of the on-chip communication challenges such as wiring complexity, communication latency, and bandwidth. Furthermore, the combined benefits of 3D IC and NoC schemes provide the possibility of designing a high-performance system in a limited chip area. The major advantages of 3D NoCs are the considerable reductions in average latency and power consumption. There are several factors degrading the performance of NoCs. In this thesis, we investigate three main performance-limiting factors: network congestion, faults, and the lack of efficient multicast support. We address these issues by means of routing algorithms. Congestion of data packets may lead to increased network latency and power consumption. Thus, we propose three different approaches for alleviating such congestion in the network. The first approach is based on measuring the congestion information in different regions of the network, distributing the information over the network, and utilizing this information when making a routing decision. The second approach employs a learning method to dynamically find the less congested routes according to the underlying traffic. The third approach is based on a fuzzy-logic technique to make better routing decisions when traffic information for different routes is available. Faults affect performance significantly, as packets must then take longer paths in order to be routed around the faults, which in turn increases congestion around the faulty regions. We propose four methods to tolerate faults at the link and switch level by using only the shortest paths, as long as such a path exists. The unique characteristic of these methods is that they tolerate faults while also maintaining the performance of the NoC. To the best of our knowledge, these algorithms are the first approaches that bypass faults before reaching them while avoiding unnecessary misrouting of packets. Current implementations of multicast communication result in a significant performance loss for unicast traffic. This is because the routing rules of multicast packets limit the adaptivity of unicast packets. We present an approach in which both unicast and multicast packets can be efficiently routed within the network. While providing more efficient multicast support, the proposed approach does not affect the performance of unicast routing at all. In addition, in order to reduce the overall path length of multicast packets, we present several partitioning methods along with their analytical models for latency measurement. This approach is discussed in the context of 3D mesh networks.
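As a hypothetical sketch of congestion-aware minimal adaptive routing in the spirit of the first approach above (not the thesis algorithm; the coordinates, direction names and congestion metric are assumptions):

```python
# Illustrative sketch of congestion-aware minimal adaptive routing in a 2D
# mesh: when both the X and Y directions lead towards the destination, the
# less congested output port is preferred.
def route(cur, dst, congestion):
    """cur, dst: (x, y) tile coordinates; congestion: dict mapping direction
    ('E', 'W', 'N', 'S') to a congestion estimate such as buffer occupancy."""
    cx, cy = cur
    dx, dy = dst
    candidates = []
    if dx > cx: candidates.append("E")
    if dx < cx: candidates.append("W")
    if dy > cy: candidates.append("N")
    if dy < cy: candidates.append("S")
    if not candidates:
        return "LOCAL"                      # packet has arrived
    return min(candidates, key=lambda d: congestion.get(d, 0))

print(route((1, 1), (3, 2), {"E": 5, "N": 1}))  # -> 'N' (the less congested productive direction)
```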
Abstract:
Software plays an important role in our society and economy. Software development is an intricate process comprising many different tasks: gathering requirements, designing new solutions that fulfill these requirements, and implementing these designs in a programming language as a working system. As a consequence, the development of high-quality software is a core problem in software engineering. This thesis focuses on the validation of software designs. The analysis of designs is of great importance, since errors originating from designs may appear in the final system. It is considered economical to rectify problems as early in the software development process as possible. Practitioners often create and visualize designs using modeling languages, one of the more popular being the Unified Modeling Language (UML). The analysis of designs can be done manually, but for large systems mechanisms are needed that analyze these designs automatically. In this thesis, we propose an automatic approach to analyzing UML-based designs using logic reasoners. The approach first translates the UML-based designs into a language understandable by reasoners, in the form of logic facts, and then shows how to use logic reasoners to infer the logical consequences of these facts. We have implemented the proposed translations in the form of a tool that can be used with any standard-compliant UML modeling tool. Moreover, we evaluate the proposed approach by automatically validating hundreds of UML-based designs, consisting of thousands of model elements, available in an online model repository. The proposed approach is limited in scope, but is fully automatic and does not require any expertise in logic languages from the user. We exemplify the proposed approach with two applications: the validation of domain-specific languages and the validation of web service interfaces.
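As an illustrative sketch of the kind of translation described above (a toy UML fragment turned into Prolog-style logic facts; the model content and predicate names are hypothetical, not the thesis's actual mapping):

```python
# Minimal sketch: translate a tiny UML class diagram into Prolog-style
# logic facts that a reasoner could consume. Predicate names are invented.
model = {
    "classes": ["Order", "Customer"],
    "associations": [("Order", "Customer", "placedBy")],
    "generalizations": [],
}

def to_facts(m):
    facts = [f"class({c.lower()})." for c in m["classes"]]
    facts += [f"association({a.lower()}, {b.lower()}, {role})."
              for a, b, role in m["associations"]]
    facts += [f"generalization({sub.lower()}, {sup.lower()})."
              for sub, sup in m["generalizations"]]
    return facts

print("\n".join(to_facts(model)))
# class(order).
# class(customer).
# association(order, customer, placedBy).
```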
Abstract:
The superconducting gap is a basic characteristic of a superconductor. While the cuprates and conventional phonon-mediated superconductors are characterized by distinct d- and s-wave pairing symmetries with nodal and nodeless gap distributions, respectively, the superconducting gap distributions in iron-based superconductors are rather diversified. While nodeless gap distributions have been directly observed in Ba1–xKxFe2As2, BaFe2–xCoxAs2, LiFeAs, KxFe2–ySe2, and FeTe1–xSex, signatures of a nodal superconducting gap have been reported in LaOFeP, LiFeP, FeSe, KFe2As2, BaFe2–xRuxAs2, and BaFe2(As1–xPx)2. Due to the multiplicity of the Fermi surface in these compounds, s± and d pairing states can be either nodeless or nodal. A nontrivial orbital structure of the order parameter, in particular the presence of gap nodes, leads to disorder effects that are much richer in dx2–y2-wave superconductors than in conventional materials. In contrast to the s-wave case, the Anderson theorem does not apply, and nonmagnetic impurities exhibit a strong pair-breaking influence. In addition, a finite concentration of disorder produces a nonzero density of quasiparticle states at zero energy, which results in a considerable modification of the thermodynamic and transport properties at low temperatures. The influence of order parameter symmetry on the vortex core structure in iron-based pnictide and chalcogenide superconductors has been investigated in the framework of the quasiclassical Eilenberger equations. The main results of the thesis are as follows. The vortex core characteristics, such as the cutoff parameter, ξh, and the core size, ξ2, defined as the distance at which the density of the vortex supercurrent reaches its maximum, are calculated over wide ranges of temperature, impurity scattering rate, and magnetic field. The cutoff parameter, ξh(B, T, Γ), determines the form factor of the flux-line lattice, which can be obtained in μSR, NMR, and SANS experiments. A comparison among the applied pairing symmetries is made. In contrast to s-wave systems, in dx2–y2-wave superconductors ξh/ξc2 always increases with the scattering rate Γ. The field dependence of the cutoff parameter strongly affects the second moment of the magnetic field distribution, resulting in a significant difference from the nonlocal London theory. It is found that the normalized ξ2/ξc2(B/Bc2) dependence increases with pair-breaking impurity scattering (interband scattering for s±-wave and intraband impurity scattering for d-wave superconductors). Here, ξc2 is the Ginzburg-Landau coherence length determined from the upper critical field Bc2 = Φ0/(2πξc2²), where Φ0 is the flux quantum. Two types of ξ2/ξc2 magnetic field dependence are obtained for s± superconductors. The first has a minimum at low temperatures and small impurity scattering, transforming into a monotonically decreasing function at strong scattering and high temperatures. The second kind of dependence has also been found for d-wave superconductors at intermediate and high temperatures. In contrast, impurity scattering results in a decreasing ξ2/ξc2(B/Bc2) dependence in s++ superconductors. A reasonable agreement was found between the calculated ξh/ξc2 values and those obtained experimentally in nonstoichiometric BaFe2–xCoxAs2 (μSR) and stoichiometric LiFeAs (SANS). The values of ξh/ξc2 are much less than one in the case of the first compound and much greater than one for the other. This is explained by the different influence of two factors: the impurity scattering rate and the pairing symmetry.
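For reference, the relation between the upper critical field and the Ginzburg-Landau coherence length quoted above can be written out explicitly:

```latex
% Ginzburg–Landau coherence length extracted from the upper critical field:
B_{c2} = \frac{\Phi_0}{2\pi \xi_{c2}^{2}}
\quad\Longleftrightarrow\quad
\xi_{c2} = \sqrt{\frac{\Phi_0}{2\pi B_{c2}}},
\qquad \Phi_0 = \frac{h}{2e} \approx 2.07\times 10^{-15}\ \mathrm{Wb}.
```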
Abstract:
Instrumentation: voice (tenor), orchestra.
Abstract:
Advancements in IC processing technology have led to the innovation and growth in the consumer electronics sector and the evolution of the IT infrastructure supporting this exponential growth. One of the most difficult obstacles to this growth is the removal of the large amount of heat generated by the processing and communicating nodes on the system. The scaling down of technology and the increase in power density have a direct and consequential effect on the rise in temperature. This has resulted in increased cooling budgets and affects both the lifetime reliability and the performance of the system. Hence, reducing on-chip temperatures has become a major design concern for modern microprocessors. This dissertation addresses the thermal challenges at different levels for both 2D planar and 3D stacked systems. It proposes a self-timed thermal monitoring strategy based on the liberal use of on-chip thermal sensors, making use of noise-variation-tolerant, leakage-current-based thermal sensing for monitoring purposes. In order to study thermal management issues from early design stages, accurate thermal modeling and analysis at design time is essential. In this regard, the spatial temperature profile of global Cu nanowires for on-chip interconnects has been analyzed. A 3D thermal model of a multicore system is presented in order to investigate the effects of hotspots and the placement of silicon die layers on the thermal performance of a modern flip-chip package. For a 3D stacked system, the primary design goal is to maximise the performance within the given power and thermal envelopes. Hence, a thermally efficient routing strategy for 3D NoC-Bus hybrid architectures has been proposed to mitigate on-chip temperatures by herding most of the switching activity to the die closest to the heat sink. Finally, an exploration of various thermal-aware placement approaches for both 2D and 3D stacked systems has been presented. Various thermal models have been developed and thermal control metrics have been extracted. An efficient thermal-aware application mapping algorithm for a 2D NoC has been presented, and it has been shown that the proposed mapping algorithm reduces the effective area suffering from high temperatures when compared to the state of the art.
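As a hypothetical illustration of thermal-aware application mapping (a simple greedy heuristic, not the algorithm proposed in the dissertation; the task names, power values and tile temperatures are invented):

```python
# Illustrative sketch of greedy thermal-aware mapping: tasks with the highest
# estimated power are assigned to the mesh tiles with the lowest current
# temperature estimates, spreading heat across the chip.
def thermal_aware_map(task_power, tile_temp):
    """task_power: {task: estimated power}; tile_temp: {(x, y) tile: temperature}.
    Returns a dict mapping each task to a tile."""
    mapping = {}
    free_tiles = dict(tile_temp)
    for task in sorted(task_power, key=task_power.get, reverse=True):
        tile = min(free_tiles, key=free_tiles.get)      # coolest free tile
        mapping[task] = tile
        del free_tiles[tile]
    return mapping

tasks = {"dec": 0.9, "fft": 0.7, "io": 0.2}
tiles = {(0, 0): 72.0, (0, 1): 65.5, (1, 0): 68.0, (1, 1): 61.0}
print(thermal_aware_map(tasks, tiles))   # hottest task 'dec' -> coolest tile (1, 1)
```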