819 results for Emerging Challenges in offshoring
Abstract:
Since the Greater Mekong Sub-region (GMS) program began in 1992, its activities have expanded and flourished. The three economic corridors (East-West, North-South, and Southern) are the most important components of the flagship program. This article evaluates these economic corridors and their challenges in light of the regional distribution of population and income, the population pyramids of member countries, and the trade relations of member economies.
Abstract:
This paper presents evidence linking innovation and knowledge exchange in developing economies, working towards a comprehensive theory of new economic geography in the knowledge-based spatial economy. Firms that dispatched engineers to customers achieved more innovations than firms that did not. Mutual sharing of knowledge also stimulates innovation. A just-in-time relationship is effective for upgrading the production process, but such strong complementarities with partners are not effective for product innovation. This evidence supports the hypothesis that face-to-face communication and complementarities along production linkages play different roles in knowledge creation.
Abstract:
This chapter attempts to identify whether product differentiation or geographical differentiation is the main source of profit for firms in developing economies, employing a simple idea from the recently developed methods of empirical industrial organization. Theoretically, location choice and product choice have been treated as analogous forms of differentiation, but in the real world, which of these strategies is chosen makes an immense difference to firm behavior and to the development process of the industry. Advances in the techniques of empirical industrial organization enable us to identify market outcomes under endogeneity; a typical case is the market outcome under differentiation, where price or product choice is endogenously determined. Our original survey contains data on market location, differences in product types, and price. The results show that product differentiation, rather than geographical differentiation, mitigates the pressure of price competition, although 70 per cent of firms secure a geographical monopoly.
Abstract:
Evidence suggests that incumbent parties find it harder to be re-elected in emerging than in advanced democracies because of more serious economic problems in the former. Yet the pro-Islamic Justice and Development Party (AKP) has ruled Turkey since 2002. Does economic performance sufficiently account for the electoral strength of the AKP government? Reliance on economic performance alone to gain public support makes a government vulnerable to economic fluctuations. This study includes time-series regressions for the period 1950-2011 in Turkey and demonstrates that even among Turkey's long-lasting governments, the AKP has particular electoral strength that cannot be adequately explained by economic performance.
Abstract:
Developing countries are experiencing unprecedented levels of economic growth. As a result, they will be responsible for most of the future growth in energy demand and greenhouse gas (GHG) emissions. Curbing GHG emissions in developing countries has become one of the cornerstones of a future international agreement under the United Nations Framework Convention on Climate Change (UNFCCC). However, setting caps for developing countries' GHG emissions has encountered strong resistance in the current round of negotiations: continued economic growth that allows poverty eradication is still the main priority for most developing countries, and caps are perceived as a constraint on future growth prospects. The development, transfer and use of low-carbon technologies have more positive connotations and are seen as a potential path towards low-carbon development. So far, however, the success of the UNFCCC process in improving the levels of technology transfer (TT) to developing countries has been limited. This thesis analyses the causes of this limited success and seeks to improve understanding of what constitutes TT in the field of climate change, to establish the factors that enable it in developing countries, and to determine which policies could be implemented to reinforce those factors. Despite wide recognition of the importance of technology and knowledge transfer to developing countries in the climate change mitigation policy agenda, the issue has not received sufficient attention in academic research. Current definitions of climate change TT barely take into account the perspective of the actors involved in actual TT activities, while the corresponding measurements do not bear in mind the diversity of channels through which transfers happen or the outputs and effects that they convey.
Furthermore, the enabling factors for TT in non-BRIC (Brazil, Russia, India, China) developing countries have seldom been investigated, and policy recommendations to improve the level and quality of TT to developing countries have not been adapted to the specific needs of the highly heterogeneous countries commonly grouped under the label of "developing countries". This thesis contributes to enriching the climate change TT debate from the perspective of a smaller emerging economy (Chile) and by undertaking a quantitative analysis of enabling factors for TT in a large sample of developing countries. Two methodological approaches are used to study climate change TT: comparative case study analysis and quantitative analysis. The comparative case studies analyse TT processes in ten cases based in Chile, all of which share the same economic, technological and policy frameworks, thus enabling us to draw conclusions on the enabling factors and obstacles operating in TT processes. The quantitative analysis uses three methodologies (principal component analysis, multiple regression analysis and cluster analysis) to assess the performance of developing countries in a number of enabling factors and the relationship between these factors and indicators of TT, as well as to create groups of developing countries with similar performances. The findings of this thesis are structured to provide responses to four main research questions: What constitutes technology transfer and how does it happen? Is it possible to measure technology transfer, and what are the main challenges in doing so? Which factors enable climate change technology transfer to developing countries? And how do different developing countries perform in these enabling factors, and how can differentiated policy priorities be defined accordingly?
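The three multivariate methods named above (principal component analysis, multiple regression, and clustering) can be sketched on synthetic data. Everything below is illustrative: the indicator matrix, the coefficients, and the two-group split are assumptions for the sketch, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical enabling-factor indicators for 12 countries (columns might
# stand for R&D intensity, tertiary enrolment, FDI inflows, grid access).
X = rng.normal(size=(12, 4))
# Hypothetical technology-transfer indicator driven by those factors.
tt_indicator = X @ np.array([0.6, 0.3, 0.8, 0.1]) + rng.normal(scale=0.2, size=12)

# 1) Principal component analysis on standardized indicators (via SVD).
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / np.sum(s**2)   # variance share per component
scores = Z @ Vt.T                 # country scores on the components

# 2) Multiple regression of the TT indicator on the enabling factors.
A = np.column_stack([np.ones(len(Z)), Z])
coef, *_ = np.linalg.lstsq(A, tt_indicator, rcond=None)

# 3) Naive two-group split on the first component score, standing in
# for the cluster-analysis step that groups similar countries.
groups = (scores[:, 0] > np.median(scores[:, 0])).astype(int)

print(np.round(explained, 3), np.round(coef, 2), groups.sum())
```

A real analysis would of course use the thesis's country sample and a proper clustering algorithm; the point here is only the pipeline: reduce indicators, relate them to a TT measure, then group countries.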
Abstract:
This paper shows the role that foresight tools such as scenario design may play in exploring the future impacts of global challenges on contemporary society. Additionally, it provides some clues about how to reinforce scenario design so that it supports more in-depth analysis without losing its qualitative nature and communication advantages. Since its inception in the early seventies, scenario design has become one of the most popular foresight tools across several fields of knowledge. Nevertheless, its wide acceptance has not been matched in the academic and professional realm of urban planning. In some instances, scenario design is perceived as merely a storytelling technique that generates oversimplified future visions without the support of rigorous and sound analysis. In fact, the potential of scenario design to provide more in-depth analysis and to connect with quantitative methods has generally been missed, giving arguments away to its critics. Based on these premises, this document tries to prove the capability of scenario design to anticipate the impacts of complex global challenges in a more analytical way. These assumptions are tested through a scenario design exercise which explores the future evolution of the sustainable development (SD) paradigm and its implications for the Spanish urban development model. In order to reinforce the perception of scenario design as a useful, value-adding instrument for urban planners, three sets of implications (functional, parametric and spatial) are presented to provide substantial and in-depth information for policy makers. This study shows some major findings. First, it is feasible to set up a systematic approach that provides anticipatory intelligence about future disruptive events that may affect the natural environment and socioeconomic fabric of a given territory.
Second, there are opportunities for innovating in Spanish urban planning processes and city governance models. Third, as a foresight tool, scenario design can be substantially reinforced if proper efforts are made to display the functional, parametric and spatial implications generated by the scenarios. Fourth, the study confirms that foresight offers valuable opportunities to urban planners, such as anticipating changes, formulating visions, fostering participation and building networks.
Abstract:
One important issue emerging strongly in agriculture is the automation of tasks, in which optical sensors play an important role: they provide images that must be conveniently processed. The most relevant image processing procedures require the identification of green plants (in our experiments, barley and corn crops including weeds) so that actions such as site-specific treatment with chemical products or mechanical manipulation can be carried out. The identification of soil textures can also be useful for estimating variables such as humidity or smoothness. Finally, from the point of view of autonomous robot navigation, where the robot is equipped with the imaging system, it is sometimes convenient to know not only the soil information and the plants growing in it but also additional information supplied by global references based on specific areas. This implies that the images to be processed contain textures of three main types to be identified: green plants, soil and, if present, sky. This paper proposes a new automatic approach for segmenting these main textures and for refining the identification of sub-textures within them. Concerning green identification, we propose a new approach that exploits the performance of existing strategies by combining them; the combination weights the information provided by each strategy according to its intensity variability, which constitutes the first contribution. The combination of thresholding approaches for segmenting the soil and the sky makes the second contribution, and the adaptation of a supervised fuzzy clustering approach for identifying sub-textures automatically makes the third. The performance of the method verifies its viability for automatic image-based tasks in agriculture.
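As a rough illustration of the greenness-plus-thresholding idea described above (a minimal sketch, not the paper's actual combined strategy), a single widely used greenness index, Excess Green, followed by Otsu's threshold can already separate plant pixels from soil on a toy image:

```python
import numpy as np

def excess_green(rgb):
    """ExG = 2g - r - b on channel-normalized RGB; high values flag green plants."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=-1, keepdims=True) + 1e-9
    r, g, b = np.moveaxis(rgb / total, -1, 0)
    return 2 * g - r - b

def otsu_threshold(values, bins=256):
    """Pick the threshold maximizing between-class variance (Otsu's method)."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    w = np.cumsum(p)                       # class-0 weight per candidate
    mu = np.cumsum(p * edges[:-1])         # class-0 cumulative mean mass
    mu_t = mu[-1]
    between = (mu_t * w - mu) ** 2 / (w * (1 - w) + 1e-12)
    return edges[np.argmax(between)]

# Synthetic 4x4 image: left half green "plants", right half brown "soil".
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:, :2] = [40, 180, 40]    # green pixels
img[:, 2:] = [120, 90, 60]    # brown pixels
exg = excess_green(img)
mask = exg > otsu_threshold(exg.ravel())
print(mask.sum())  # number of pixels classified as plant
```

The paper's approach goes further by weighting several such greenness strategies by their intensity variability; this sketch only shows the common index/threshold backbone.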
Abstract:
We propose a number of challenges for future constraint programming systems, including improvements in implementation technology (using global-analysis-based optimization and parallelism), debugging facilities, and the extension of the application domain to distributed, global programming. We also briefly discuss how we are exploring techniques to meet these challenges in the context of the development of the CIAO constraint logic programming system.
Abstract:
There is no empirical evidence whatsoever to support most of the beliefs on which software construction is based. We do not yet know the adequacy, limits, qualities, costs and risks of the technologies used to develop software. Experimentation helps to check beliefs and opinions and convert them into facts. This research is concerned with replication, a key component for gathering empirical evidence on software development that can be used in industry to build better software more efficiently. Replication has not been easy in software engineering (SE) because the experimental paradigm applied to software development is still immature. Nowadays, a replication is mostly executed using a traditional replication package, but traditional replication packages do not appear to have been as effective as expected for transferring information among researchers in SE experimentation. The trouble spot appears to be the replication setup, caused by version management problems with materials, instruments, documents, etc. This has proved to be an obstacle to obtaining enough detail about an experiment to reproduce it as exactly as possible. We address the problem of information exchange among experimenters by developing a schema to characterize replications, and we adapt configuration management and product line ideas to support the experimentation process. This will enable researchers to make systematic decisions based on explicit knowledge rather than assumptions about replications. This research will produce a web environment for replication support that will not only archive but also manage experimental materials flexibly enough to allow both similar and differentiated replications, with massive experimental data storage. The platform should be accessible to several research groups working together on the same families of experiments.
Abstract:
Freeform surfaces are key to state-of-the-art nonimaging optics for solving the challenges in concentration photovoltaics. Different families (FK, XR, FRXI) will be presented, based on the SMS 3D design method and Köhler homogenization.
Abstract:
Reducing energy consumption is one of the main challenges in most countries. For example, European Member States agreed to reduce greenhouse gas (GHG) emissions by 20% by 2020 compared to 1990 levels (EC 2008). Considered as a sector, ICTs currently account for 2% of total carbon emissions, a percentage that will increase as the demand for communication services and applications steps up. At the same time, the expected evolution of ICT-based developments (smart buildings, smart grids and smart transportation systems, among others) could create energy-saving opportunities leading to global emission reductions (Labouze et al. 2008), although the amount of these savings is under debate (Falch 2010). The main development required in telecommunication networks, one of the three major blocks of energy consumption in ICTs together with data centers and consumer equipment (Sutherland 2009), is the evolution of existing infrastructures into ultra-broadband networks, the so-called Next Generation Networks (NGN). Fourth generation (4G) mobile communications are the technology of choice to complete, or supplement, the ubiquitous deployment of NGN. The risks and opportunities involved in NGN roll-out are currently at the forefront of the economic and policy debate. However, the role of energy consumption in 4G networks seems absent from this debate, despite the fact that the economic impact of energy consumption is a key element in the cost analysis of this type of network. The aim of this research is precisely to provide deeper insight into the energy consumed in the usage of a 4G network, its relationship with the network's main design features, and the general economic impact this has on the capital and operational expenditures related to network deployment and usage.
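Why energy enters the operational-expenditure side of such a cost analysis can be seen with a back-of-the-envelope estimate. Every figure below (site count, per-site power draw, electricity price) is an illustrative assumption, not a value from this research:

```python
# Toy annual energy-OPEX estimate for a hypothetical 4G deployment.
n_sites = 1000       # assumed number of macro base stations
power_kw = 1.5       # assumed average draw per site, kW
price_kwh = 0.12     # assumed electricity price, EUR/kWh
hours = 24 * 365     # hours of continuous operation per year

energy_mwh = n_sites * power_kw * hours / 1000   # annual consumption, MWh
opex_eur = energy_mwh * 1000 * price_kwh         # annual electricity bill, EUR
print(round(energy_mwh), round(opex_eur))
```

Even with these modest assumed figures the electricity bill reaches seven digits per year, which is why per-site power draw is a network design feature with direct economic weight.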
Abstract:
Systems used for localizing targets such as goods, individuals, or animals commonly rely on fully operational means to meet the demands of the final application. However, what would happen if some devices were powered up randomly by harvesting systems? And what if the devices not randomly powered had their duty cycles restricted? Under what conditions would such operation be tolerable in localization services? What if the references provided by nodes in a tracking problem were distorted? Moreover, there is an underlying topic common to these questions regarding the transfer of conceptual models to reality in field tests: what challenges are faced when deploying a localization network that integrates energy harvesting modules? The application scenario of the system studied is a traditional herding environment for semi-domesticated reindeer (Rangifer tarandus tarandus) in northern Scandinavia. In these conditions, information on the approximate locations of reindeer is as important as environmental preservation. Herders also need cost-effective devices capable of operating unattended in sometimes extreme weather conditions. The analyses developed are valuable not only for the specific application environment presented but also as an approach to the performance of navigation systems in the absence of reasonably accurate references such as those of the Global Positioning System (GPS). A number of energy-harvesting solutions, such as thermal and radio-frequency harvesting, do not commonly provide power beyond one milliwatt; when they do, battery buffers may be needed (as happens with solar energy), which may raise costs and make systems more dependent on environmental temperatures. In general, given our problem, a harvesting system is needed that is capable of providing energy bursts of at least some milliwatts.
Many works on localization assume that devices can determine unknown locations based on range-based techniques or fingerprinting, which cannot be assumed in the approach considered here. The system presented is akin to range-free techniques, but goes to the extent of considering very low node densities, so most range-free techniques are not applicable. Animal localization, in particular, is usually supported by accurate devices such as GPS collars, which deplete their batteries in a few days at most. Such short-lived solutions are not desirable in the framework considered. In tracking, the challenge most often addressed is attaining high precision through complex, reliable hardware and thorough processing techniques. One of the challenges in this Thesis is the use of equipment with only part of its facilities in permanent operation, which may yield high input noise levels in the form of distorted reference points. The solution presented integrates a kinetic harvesting module in some nodes, which are expected to be a majority in the network. These modules are capable of providing power bursts of some milliwatts, which suffice to meet node energy demands. The use of harvesting modules under the aforementioned conditions makes the system less dependent on environmental temperatures, as no batteries are used in nodes with harvesters; this may also be an advantage in economic terms. A second kind of node is battery powered (without kinetic energy harvesters) and is therefore dependent on temperature and battery replacements; in addition, its operation is constrained by duty cycles in order to extend node lifetime and, consequently, autonomy. There is, in turn, a third type of node (hotspots), which can be static or mobile. Hotspots are also battery-powered and are used to retrieve information from the network so that it can be presented to users.
The system's operational chain starts with the kinetic-powered nodes broadcasting their own identifiers. If an identifier is received at a battery-powered node, the latter stores it in its records. Later, when the recording node meets a hotspot, its full record of detections is transferred to the hotspot. Every detection record comprises, at least, a node identifier and the position read from the battery-operated node's GPS module prior to the detection. The characteristics of the system give this operation certain particularities, which are also studied. First, identifier transmissions are random, as they depend on movements of the kinetic modules (reindeer movements in our application); not every movement suffices, since it must overcome a certain energy threshold. Second, identifier transmissions may not be heard unless there is a battery-powered node in the surroundings. Third, battery-powered nodes do not poll their GPS modules continuously, so localization errors rise even further; recall that this behavior is tied to the aforementioned power-saving policies to extend node lifetime. Last, some time elapses between the instant a random identifier transmission is detected and the moment the user becomes aware of the detection: it takes some time to find a hotspot. Tracking is posed as a problem with a single kinetically-powered target and a population of battery-operated nodes with higher densities than in the localization problem. Since the latter provide their approximate positions as reference locations, the study again focuses on assessing the impact of such distorted references on performance. Unlike in localization, distance-estimation capabilities based on signal parameters are assumed in this problem. Three variants of the Kalman filter family are applied in this context: the regular Kalman filter, the alpha-beta filter, and the unscented Kalman filter.
The study comprises both field tests and simulations. Field tests were used mainly to assess the challenges related to power supply and operation in extreme conditions, as well as to model the nodes and aspects of their operation in the application scenario; these models are the basis of the simulations developed later. The overall system performance is analyzed according to three metrics: number of detections per kinetic node, accuracy, and latency. The links between these metrics and the operational conditions are also discussed and characterized statistically, and this statistical characterization is then used to forecast performance figures for given operational parameters. In tracking, also studied via simulations, nonlinear relationships are found between accuracy and the duty cycles and cluster sizes of battery-operated nodes. The solution presented may be more complex in terms of network structure than existing solutions based on GPS collars. However, its main gain lies in taking advantage of users' error tolerance to reduce costs and to become more environmentally friendly by diminishing the potential number of batteries that can be lost. Whether it is applicable or not ultimately depends on the conditions and requirements imposed by users' needs and operational environments, which is, as explained, one of the topics of this Thesis.
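Of the three filter variants named above, the alpha-beta filter is the simplest. A minimal sketch on synthetic data (the gains, target velocity, and noise level are illustrative assumptions, not the thesis's parameters) shows how distorted position references can still be smoothed into a usable track:

```python
import numpy as np

def alpha_beta_track(measurements, dt=1.0, alpha=0.5, beta=0.1):
    """Alpha-beta filter: constant-velocity prediction plus gain-weighted
    correction from each (possibly distorted) position reference."""
    x, v = measurements[0], 0.0
    estimates = []
    for z in measurements[1:]:
        x_pred = x + v * dt        # predict assuming constant velocity
        r = z - x_pred             # innovation from the noisy reference
        x = x_pred + alpha * r     # position correction
        v = v + (beta / dt) * r    # velocity correction
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(1)
true_pos = np.arange(200, dtype=float) * 2.0         # target moving at 2 m/s
refs = true_pos + rng.normal(scale=5.0, size=200)    # distorted references
est = alpha_beta_track(refs)
rmse_raw = float(np.sqrt(np.mean((refs[1:] - true_pos[1:]) ** 2)))
rmse_est = float(np.sqrt(np.mean((est - true_pos[1:]) ** 2)))
print(round(rmse_raw, 2), round(rmse_est, 2))
```

The filtered error is lower than the raw reference error here, but the choice of gains trades noise rejection against responsiveness, which is why the thesis compares this filter against the regular and unscented Kalman filters.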
Abstract:
One of the biggest challenges in speech synthesis is the production of naturally sounding synthetic voices. This means that the resulting voice must not only be of high enough quality but must also capture the natural expressiveness imbued in human speech. This paper focuses on the expressiveness problem, proposing a set of techniques for extrapolating the expressiveness of proven high-quality speaking-style models onto neutral speakers in HMM-based synthesis. As an additional advantage, the proposed techniques are based on adaptation approaches, which means that they can be used with little training data (around 15 minutes per style in this paper). For the final implementation, a set of four speaking styles was considered: news broadcasts, live sports commentary, interviews and parliamentary speech. Finally, the implementations of the five techniques were tested through a perceptual evaluation, which shows that the deviations between neutral and speaking-style average models can be learned and used to imbue expressiveness into target neutral speakers as intended.
Abstract:
In hostile environments at CERN and other similar scientific facilities, a reliable mobile robot system is essential for the successful execution of robotic missions and for avoiding manual recovery of a robot that runs out of energy. Because of environmental constraints, such mobile robots are usually battery-powered, and energy management and optimization are therefore among the key challenges in this field. The ability to know beforehand the energy consumed by the various elements of the robot (such as locomotion, sensors, controllers, computers and communication) allows flexibility in planning and managing the tasks to be performed by the robot.
Abstract:
During the last five years, in the light of current challenges, several voices have been urging architecture to leave behind the modern energy paradigms (efficiency and performance) on which the so-called sustainable practices rely, and to rethink, in the light of scientific and cultural shifts, the thermodynamic and ecological models for architecture. The historical cartography this PhD dissertation presents aligns with this effort, providing the cultural background that this endeavor requires. The drive to ground architecture on a scientific basis needs to be complemented with a cultural discussion of the history of thermodynamic ideas in architecture. This cartography explores the history of thermodynamic ideas in architecture from the turn of the 20th century until the present day, focusing on the energy interactions between architecture and atmosphere. It surveys the evolution of thermodynamic ideas (the passage from equilibrium to far-from-equilibrium thermodynamics) and how these have gradually become empowered within design and building practices. In doing so, it poses a double objective: first, to acquire a critical distance from modern practices which strengthens and recalibrates the intellectual framework and the tools within which contemporary architectural endeavors unfold; and second, to develop a projective approach for the development of a thermodynamic agenda for architecture and atmosphere, with the firm belief that a critical re-imagination of reality is possible.
According to the different systems which exchange energy across a building, the cartography has been structured into three thermodynamic environments, providing a synthetic cross-section of the range of thermodynamic exchanges which take place in architecture:
- Buildings, as spatial and material constructs immersed in the environment, are subject to a continuous bidirectional flow of energy with their context, defining the first thermodynamic environment, called territorial atmospheres.
- Inside buildings, the thermodynamic flow between architecture and its indoor ambient defines a second thermodynamic environment, material atmospheres, which explores the energy interactions between the indoor atmosphere and its material systems.
- The third thermodynamic environment, physiological atmospheres, explores the energy exchanges between the human body and the invisible environment which envelops it, shifting design drivers from the building to the interaction between the atmosphere and the somatic processes and neurobiological perceptions of users.
Through these three thermodynamic environments, this cartography maps those climatic patterns which pertain to architecture, providing three situations of which designers need to take stock. Studying the connections between atmosphere, energy and architecture, this map presents, not a historical paradigm shift from mechanical climate control to bioclimatic passive techniques, but a range of available thermodynamic ideas which need to be assessed, synthesized and recombined in the light of the emerging challenges of our time. The result is a manual which, mediating between architecture and science, and through this particular historical account, bridges the gap between architecture and thermodynamics, paving the way to a renewed approach to atmosphere, energy and architecture.
In this regard this cartography is understood as one of the necessary steps to recuperate architecture’s lost capacity to intervene in the pressing reality of contemporary societies.