881 results for Exascale, Supercomputer, OFET, energy efficiency, data locality, HPC
Abstract:
Freshwater Bay (FWB), Washington did not undergo significant erosion of its shoreline after the construction of the Elwha and Glines Canyon Dams, unlike the shoreline east of Angeles Point (the Elwha River's lobate delta). In this paper I compare the wave energy density at the western and eastern ends of the Strait of Juan de Fuca with the wave energy density at the Elwha River delta. This comparison indicates seasonal high- and low-energy regimes in the energy density data. I group multi-year surveys of four cross-shore transects in FWB along this seasonal divide and search for seasonal trends in foreshore profile. After documenting changes in elevation at specific datums on the foreshore, I compare digital images of one datum to determine the particle sizes that are transported during deposition and scour events on this section of the FWB foreshore. Repeat surveys of the four cross-shore transects over a five-year period indicate a highly mobile slope break between the upper foreshore and the low-tide delta. Post-2011, profiles in eastern FWB record deposition in the landward portion of the low-tide terrace and in the upper intertidal. Western FWB experiences transient deposition on the low-tide terrace and high intra-annual variability in beach profile; profile elevation at the slope break in western FWB can vary 0.5 m in the course of weeks. Changes in surface sediment that range from sand to cobble are coincident with these changes in elevation. High sediment mobility and profile variation are inconsistent with shoreline stability and decreased sediment supply from the presumed source on the Elwha River delta.
Abstract:
Rare-earth co-doping in inorganic materials has a long-held tradition of facilitating highly desirable optoelectronic properties for their application to the laser industry. This study concentrates specifically on rare-earth phosphate glasses, (R2O3)x(R'2O3)y(P2O5)1-(x+y), where (R, R') denotes (Ce, Er) or (La, Nd) co-doping and the total rare-earth composition corresponds to a range between metaphosphate, RP3O9, and ultraphosphate, RP5O14. The effects of rare-earth co-doping on the local structure are assessed at the atomic level using pair-distribution function analysis of high-energy X-ray diffraction data (Qmax = 28 Å⁻¹). Results reveal a stark structural invariance to rare-earth co-doping, which bears testament to the open-framework and rigid nature of these glasses. A range of desirable attributes of these glasses unfolds from this finding; in particular, a structural simplicity that will enable facile molecular engineering of rare-earth phosphate glasses with 'dial-up' lasing properties. When considered together with other factors, this finding also demonstrates additional prospects for these co-doped rare-earth phosphate glasses in nuclear waste storage applications. This study also reveals, for the first time, the ability to distinguish between P–O and P=O bonding in these rare-earth phosphate glasses from X-ray diffraction data in a fully quantitative manner. Complementary analysis of high-energy X-ray diffraction data on single rare-earth phosphate glasses of similar rare-earth composition to the co-doped materials is also presented in this context. In a technical sense, all high-energy X-ray diffraction data on these glasses are compared with analogous low-energy diffraction data; their salient differences reveal distinct advantages of high-energy X-ray diffraction for the study of amorphous materials. © 2013 The Owner Societies.
Abstract:
As massive data sets become increasingly available, people face the problem of how to effectively process and understand them. Traditional sequential computing models are giving way to parallel and distributed computing models such as MapReduce, due both to the large size of the data sets and to their high dimensionality. This dissertation, in the same direction as other MapReduce-based research, develops effective techniques and applications using MapReduce that can help people solve large-scale problems. Three problems are tackled. The first deals with processing terabytes of raster data in a spatial data management system: aerial imagery files are broken into tiles to enable data-parallel computation. The second and third deal with dimension reduction techniques for data sets of high dimensionality. Three variants of the nonnegative matrix factorization technique are scaled up in MapReduce, based on different matrix multiplication implementations, to factorize matrices with dimensions on the order of millions. Two algorithms, which compute CANDECOMP/PARAFAC and Tucker tensor decompositions respectively, are parallelized in MapReduce by carefully partitioning the data and arranging the computation to maximize data locality and parallelism.
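The map/shuffle/reduce pattern this abstract relies on can be sketched in a few lines of plain Python. This is a single-process illustration only; the tile layout, band names, and per-band statistic are hypothetical, not taken from the dissertation:

```python
from collections import defaultdict

# Minimal single-process simulation of the MapReduce model: raster data is
# split into tiles, each tile is mapped independently, intermediate pairs are
# shuffled by key, and a reducer aggregates per key.

def map_tile(tile_id, pixels):
    """Emit (band, value) pairs so each tile can be processed in parallel."""
    for band, value in pixels:
        yield band, value

def shuffle(mapped):
    """Group intermediate pairs by key, as the MapReduce runtime would."""
    groups = defaultdict(list)
    for key, value in mapped:
        groups[key].append(value)
    return groups

def reduce_band(band, values):
    """Aggregate a per-band statistic (here: mean pixel value)."""
    return band, sum(values) / len(values)

tiles = {
    "tile_0": [("red", 10), ("nir", 40)],
    "tile_1": [("red", 30), ("nir", 60)],
}
mapped = [pair for tid, px in tiles.items() for pair in map_tile(tid, px)]
results = dict(reduce_band(b, v) for b, v in shuffle(mapped).items())
# results == {"red": 20.0, "nir": 50.0}
```

Because each `map_tile` call touches only its own tile, the map phase parallelizes trivially; only the shuffle requires data movement.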
Abstract:
The dissertation consists of three chapters related to the low-price guarantee marketing strategy and energy efficiency analysis. The low-price guarantee is a marketing strategy in which firms promise to charge consumers the lowest price among their competitors. Chapter 1 addresses the research question "Does a Low-Price Guarantee Induce Lower Prices?" by looking into the retail gasoline industry in Quebec, where a major branded firm started a low-price guarantee back in 1996. Chapter 2 conducts a consumer welfare analysis of low-price guarantees to derive policy implications and offers a new explanation of firms' incentives to adopt a low-price guarantee. Chapter 3 develops energy performance indicators (EPIs) to measure the energy efficiency of manufacturing plants in the pulp, paper, and paperboard industry.
Chapter 1 revisits the traditional view that a low-price guarantee results in higher prices by facilitating collusion. Using accurate market definitions and station-level data from the retail gasoline industry in Quebec, I conduct a descriptive analysis based on stations and price zones to compare price and sales movements before and after the guarantee was adopted. I find that, contrary to the traditional view, the stores that offered the guarantee significantly decreased their prices and increased their sales. I also build a difference-in-differences model that quantifies the decrease in the posted price of the stores that offered the guarantee at 0.7 cents per liter. While this change is significant, I do not find a significant response in competitors' prices. The sales of the stores that offered the guarantee increased significantly while competitors' sales decreased significantly, although the significance vanishes with station-clustered standard errors. Comparing my observations with the predictions of different theories of low-price guarantees, I conclude that the empirical evidence supports the low-price guarantee as a simple commitment device that induces lower prices.
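The difference-in-differences logic behind the 0.7 cents-per-liter estimate can be made concrete with a toy computation. The prices below are invented for illustration; the chapter's actual estimate comes from a regression on station-level data:

```python
# Illustrative difference-in-differences: the treatment effect is the change
# at treated (guarantee) stores minus the change at control stores, which
# nets out market-wide price movements common to both groups.

def did_estimate(treat_pre, treat_post, control_pre, control_post):
    """DiD effect = (treated change) - (control change)."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical mean posted prices (cents per liter) before/after the guarantee.
effect = did_estimate(treat_pre=110.0, treat_post=109.1,
                      control_pre=110.5, control_post=110.3)
# effect ≈ -0.7: guarantee stores cut prices 0.7 cents/liter more than controls
```

A full regression version would add station fixed effects and clustered standard errors, which is what makes some of the sales results lose significance in the chapter.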
Chapter 2 conducts a consumer welfare analysis of low-price guarantees to address antitrust concerns and potential government regulation, and explains firms' potential incentives to adopt a low-price guarantee. Using station-level data from the retail gasoline industry in Quebec, I estimate consumers' demand for gasoline with a structural model of spatial competition that incorporates the low-price guarantee as a commitment device, allowing firms to pre-commit to charging the lowest price among their competitors. Counterfactual analysis under a Bertrand competition setting shows that the stores that offered the guarantee attracted many more consumers and decreased their posted price by 0.6 cents per liter. Although the matching stores suffered a decrease in profits from gasoline sales, they are incentivized to adopt the low-price guarantee because it attracts more consumers to the store, likely increasing profits at attached convenience stores. Firms have strong incentives to adopt a low-price guarantee on the product their consumers are most price-sensitive about, while earning a profit from products not covered by the guarantee. I estimate that consumers earn about 0.3% more surplus when the low-price guarantee is in place, which suggests that the authorities need not be concerned about or regulate low-price guarantees. In Appendix B, I also propose an empirical model to examine how low-price guarantees would change consumer search behavior and whether consumer search plays an important role in estimating consumer surplus accurately.
Chapter 3, joint with Gale Boyd, describes work with the pulp, paper, and paperboard (PP&PB) industry to provide a plant-level indicator of energy efficiency for facilities that produce various types of paper products in the United States. Organizations that implement strategic energy management programs undertake a set of activities that, if carried out properly, have the potential to deliver sustained energy savings. Energy performance benchmarking is a key activity of strategic energy management and one way to enable companies to set energy efficiency targets for manufacturing facilities. The opportunity to assess plant energy performance through comparison with similar plants in its industry is a highly desirable and strategic method of benchmarking for industrial energy managers. However, access to energy performance data for conducting industry benchmarking is usually unavailable to most industrial energy managers. The U.S. Environmental Protection Agency (EPA), through its ENERGY STAR program, seeks to overcome this barrier through the development of manufacturing sector-based plant energy performance indicators (EPIs) that encourage U.S. industries to use energy more efficiently. In developing the EPI tools, consideration is given to the role that performance-based indicators play in motivating change; to the steps necessary for indicator development, from working with an industry to secure adequate data to the actual application and use of a completed indicator; and to how indicators are employed in EPA's efforts to encourage industries to voluntarily improve their use of energy. The chapter describes the data and statistical methods used to construct the EPI for plants within selected segments of the pulp, paper, and paperboard industry: specifically, pulp mills and integrated paper & paperboard mills.
The individual equations are presented, as are the instructions for using those equations as implemented in an associated Microsoft Excel-based spreadsheet tool.
Abstract:
the work towards increased energy efficiency. In order to plan and perform effective energy renovation of buildings, it is necessary to have adequate information on their current status in terms of architectural features and energy needs. Unfortunately, the official statistics do not include all of the needed information for the whole building stock. This paper aims to fill the gaps in the statistics by gathering data from studies, projects and national energy agencies, and by calibrating TRNSYS models against the existing data to complete, through simulation, the missing energy demand data for countries with similar climates. The survey was limited to residential and office buildings in the EU member states (before July 2013). This work was carried out as part of the EU FP7 project iNSPiRe. The building stock survey revealed that over 70% of the residential and office floor area is concentrated in the six most populated countries. The total energy consumption in the residential sector is 14 times that of the office sector. In the residential sector, single family houses represent 60% of the heated floor area, albeit with different shares across countries, indicating that retrofit solutions cannot focus only on multi-family houses. The simulation results indicate that residential buildings in central and southern European countries are not always heated to 20 °C, but are kept at a lower temperature during at least part of the day. Improving the energy performance of these houses through renovation could allow the occupants to increase the room temperature and improve their thermal comfort, even though the potential for energy savings would then be reduced.
Abstract:
Through Law 12.715/2012, the Brazilian government instituted guidelines for a program named Inovar-Auto. In this context, energy efficiency became a survival requirement for the Brazilian automotive industry from September 2016. Under the law, energy efficiency is not calculated per model only; it is calculated over the whole universe of new vehicles registered. The composition of vehicles sold in the market therefore becomes a key factor in each automaker's profits, and energy efficiency and its consequences should be considered in all their aspects. The following question emerges: what long-term efficiency curve allows an automaker to comply with the rules and balance investment in technologies, increasing energy efficiency without harming the competitiveness of its product lineup? Among the several variables to be considered, the analysis of manufacturing costs, customer value perception and market share stand out, which characterizes this as a multi-criteria decision-making problem. To tackle the energy efficiency problem imposed by the legislation, this paper proposes a multi-criteria decision-making framework that combines a Delphi group and the Analytic Hierarchy Process to identify suitable alternatives for automakers in the main Brazilian vehicle segments. A forecast model based on artificial neural networks is used to estimate vehicle sales demand and validate the expected results. The approach is demonstrated with a real case study using public vehicle sales data of Brazilian automakers and public energy efficiency data.
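At its core, the Analytic Hierarchy Process step mentioned above extracts a priority vector from a pairwise-comparison matrix. A minimal sketch follows; the 3×3 judgment matrix is illustrative (criteria and values are assumptions, not the paper's data):

```python
# Minimal AHP priority computation: approximate the principal eigenvector of
# a pairwise-comparison matrix by power iteration, normalized to sum to 1.

def ahp_priorities(matrix, iters=100):
    """Power iteration on a positive reciprocal matrix; returns criterion weights."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

# Hypothetical judgments on Saaty's 1-9 scale for three criteria, e.g.
# manufacturing cost vs. customer value perception vs. market share.
A = [[1.0,   3.0, 5.0],
     [1/3.0, 1.0, 2.0],
     [1/5.0, 1/2.0, 1.0]]
weights = ahp_priorities(A)
# weights sum to 1 and the first criterion dominates (roughly 0.65)
```

In a full AHP study one would also compute the consistency ratio of the judgments before trusting the weights.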
Abstract:
In energy harvesting communications, users transmit messages using energy harvested from nature. In such systems, the transmission policies of the users need to be carefully designed according to the energy arrival profiles. When the energy management policies are optimized, the resulting performance of the system depends only on the energy arrival profiles. In this dissertation, we introduce and analyze the notion of energy cooperation in energy harvesting communications, where users can share a portion of their harvested energy with other users via wireless energy transfer. This energy cooperation enables us to control and optimize the energy arrivals at the users to the extent possible. In the classical setting of cooperation, users help each other in the transmission of their data by exploiting the broadcast nature of wireless communications and the resulting overheard information. In contrast to this usual notion of cooperation, which operates at the signal level, the energy cooperation we introduce here operates at the battery energy level. In a multi-user setting, energy may be abundant at one user, in which case the loss incurred by transferring it to another user may be less than the gain it yields for that user. It is this cooperation that we explore in this dissertation for several multi-user scenarios, where energy can be transferred from one user to another through a separate wireless energy transfer unit. We first consider the offline optimal energy management problem for several basic multi-user network structures with energy harvesting transmitters and one-way wireless energy transfer. In energy harvesting transmitters, energy arrivals in time impose energy causality constraints on the transmission policies of the users. In the presence of wireless energy transfer, the energy causality constraints take a new form: energy can flow in time from the past to the future for each user, and from one user to the other at each point in time.
This requires a careful joint management of energy flow in two separate dimensions, and different management policies are required depending on how users share the common wireless medium and interact over it. In this context, we analyze several basic multi-user energy harvesting network structures with wireless energy transfer. To capture the main trade-offs and insights that arise due to wireless energy transfer, we focus our attention on simple two- and three-user communication systems, such as the relay channel, multiple access channel and the two-way channel. Next, we focus on the delay minimization problem for networks. We consider a general network topology of energy harvesting and energy cooperating nodes. Each node harvests energy from nature and all nodes may share a portion of their harvested energies with neighboring nodes through energy cooperation. We consider the joint data routing and capacity assignment problem for this setting under fixed data and energy routing topologies. We determine the joint routing of energy and data in a general multi-user scenario with data and energy transfer. Next, we consider the cooperative energy harvesting diamond channel, where the source and two relays harvest energy from nature and the physical layer is modeled as a concatenation of a broadcast and a multiple access channel. Since the broadcast channel is degraded, one of the relays has the message of the other relay. Therefore, the multiple access channel is an extended multiple access channel with common data. We determine the optimum power and rate allocation policies of the users in order to maximize the end-to-end throughput of this system. Finally, we consider the two-user cooperative multiple access channel with energy harvesting users. The users cooperate at the physical layer (data cooperation) by establishing common messages through overheard signals and then cooperatively sending them. 
For this channel model, we investigate the effect of intermittent data arrivals at the users. We find the optimal offline transmit power and rate allocation policy that maximizes the departure region. When the users can further cooperate at the battery level (energy cooperation), we find the jointly optimal offline transmit power and rate allocation policy, together with the energy transfer policy, that maximizes the departure region.
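The energy-causality constraint that runs through these offline problems can be stated directly: a power schedule is feasible only if, in every slot, cumulative consumption stays within cumulative harvested energy. The sketch below uses an assumed slot structure, a generic AWGN rate function, and made-up arrivals; it is not the dissertation's exact formulation:

```python
import math

# Energy causality for an energy harvesting transmitter: energy can flow only
# from the past to the future, so sum(p[0..t]) <= sum(E[0..t]) for every t.

def feasible(powers, arrivals):
    """Check energy causality slot by slot (unit-length slots assumed)."""
    spent = harvested = 0.0
    for p, e in zip(powers, arrivals):
        harvested += e
        spent += p
        if spent > harvested + 1e-12:
            return False
    return True

def throughput(powers):
    """Sum of AWGN rates 0.5*log2(1 + p), one per unit-length slot."""
    return sum(0.5 * math.log2(1 + p) for p in powers)

arrivals = [4.0, 0.0, 2.0]    # energy harvested at the start of each slot
greedy   = [4.0, 0.0, 2.0]    # spend-as-you-harvest schedule
smoothed = [2.0, 2.0, 2.0]    # equalized powers; still causal for these arrivals
assert feasible(greedy, arrivals) and feasible(smoothed, arrivals)
# Concavity of the rate function favors the smoother feasible schedule.
```

This is why optimal offline policies in this literature equalize power across time as much as causality allows (directional water-filling); wireless energy transfer adds a second, user-to-user dimension to the same flow constraints.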
Abstract:
We propose a simple model for the total pp/p̄p cross-section, which generalizes the minijet model by including a window in the pT spectrum associated with saturation physics. Our model implies a natural cutoff for the perturbative calculations that modifies the energy behavior of this component, so that it satisfies the Froissart bound. Including the saturated component, we obtain a satisfactory description of the very-high-energy experimental data.
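For reference, the Froissart (Froissart-Martin) bound invoked above limits how fast the total cross-section may grow with the squared center-of-mass energy s:

```latex
\sigma_{\mathrm{tot}}(s) \;\le\; \frac{\pi}{m_{\pi}^{2}}\,\ln^{2}\!\left(\frac{s}{s_{0}}\right)
\approx 60~\mathrm{mb}\cdot\ln^{2}\!\left(\frac{s}{s_{0}}\right),
```

where m_π is the pion mass and s_0 a reference scale. An uncut minijet component grows faster than ln²(s), which is why the saturation window is needed to tame it.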
Abstract:
This final master's project (TFM) was based on monitoring the construction of the SANA Amoreiras hotel, located at Avenida Duarte Pacheco No. 12. The general contractor was FDO Construções, and a supervision team, LMSA, was hired by the Owner to carry out all quality control. The TFM was based on an internship carried out for the hotel's Owner, where I was accompanied by an engineer, the internship supervisor, who closely followed my work. The role's objectives were hotel project management, liaison between the designers and the general contractor, and proper monitoring of all work to be carried out. I followed the construction of two model rooms, where the Owner's objective was to test various solutions for finishes, sanitary ware, lighting, furniture, decoration and other architectural options to be implemented in the hotel. I also took part in meetings for the compatibility and coordination of all specialty designs with the architectural design, namely HVAC, electrical installations, communications, security, water supply, sewage, gas, fire protection and kitchens, so that contractors could be consulted and all these works awarded. An overview is given of the hotel, its characteristics, the reasons that led to its construction, and the construction works carried out to date, such as demolition, excavation, structure, finishes and specialties.
Abstract:
Nowadays, environmental issues and energy considerations demand constant innovation in the search for solutions with low energy consumption and low environmental impact, particularly in air-conditioning and refrigeration systems. Measures have been created, both internationally and nationally, to reduce harmful emissions to the atmosphere resulting from excessive fossil-fuel consumption, and to increase energy efficiency and the use of renewable energy. The present study of the design of a lithium bromide/water absorption system with solar collectors is based on the compression system of an office building in central Lisbon. A simplified analysis of its energy consumption data was carried out to assess the economic viability of replacing the existing equipment and installing a solar collector system. It was concluded that, in this particular case, replacing the equipment and installing solar collectors is not attractive from an economic point of view; however, it would considerably reduce the environmental impact of the building's energy consumption.
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the Master's degree in Computer Science (Engenharia Informática)
Abstract:
With the emergence of low-power wireless hardware, new ways of communication were needed. To standardize communication between these low-powered devices, the Internet Engineering Task Force (IETF) released the 6LoWPAN standard, which acts as an adaptation layer for making the IPv6 link layer suitable for low-power and lossy networks. In the same way, the IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL) has been proposed by the IETF Routing Over Low power and Lossy networks (ROLL) Working Group as a standard routing protocol for IPv6 routing in low-power wireless sensor networks. The research performed in this thesis uses these technologies to implement a mobility process. Mobility management is a fundamental yet challenging area in low-power wireless networks. Some applications require mobile nodes to exchange data with a fixed infrastructure with quality-of-service guarantees; a prime example is the monitoring of patients in real time. In these scenarios, broadcasting data to all access points (APs) within range may not be a valid option due to the energy consumption, data storage and complexity requirements. An alternative and efficient option is to allow mobile nodes to perform hand-offs. Hand-off mechanisms have been well studied in cellular and ad-hoc networks. However, low-power wireless networks pose a new set of challenges: on one hand, simpler radios and constrained resources call for simpler hand-off schemes; on the other hand, the shorter coverage and higher variability of low-power links require careful tuning of the hand-off parameters. In this work, we tackle the problem of integrating smart-HOP within a standard protocol, specifically RPL. The simulation results in Cooja indicate that the proposed scheme minimizes the hand-off delay and the total network overhead. The standard RPL protocol is simply unable to provide reliable mobility support similar to other COTS technologies; instead, it supports only the joining and leaving of nodes, with very low responsiveness in the presence of physical mobility.
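A hand-off trigger of the kind studied here typically combines a link-quality threshold with a hysteresis margin to avoid ping-ponging between access points. The sketch below is a simplification under assumed RSSI thresholds; the actual smart-HOP scheme uses averaged link-quality windows and its own tuned parameter values:

```python
# Hedged sketch of a threshold-plus-hysteresis hand-off decision. RSSI values
# are in dBm; thresholds are illustrative, not smart-HOP's tuned parameters.

def should_handoff(serving_rssi, best_candidate_rssi,
                   low_threshold=-90.0, hysteresis=4.0):
    """Trigger a hand-off only when the serving link has degraded below a
    threshold AND a candidate AP is better by at least a hysteresis margin,
    which suppresses oscillation between two similar access points."""
    return (serving_rssi < low_threshold and
            best_candidate_rssi >= serving_rssi + hysteresis)

assert not should_handoff(-85.0, -80.0)   # serving link still acceptable
assert not should_handoff(-92.0, -91.0)   # candidate not better enough
assert should_handoff(-92.0, -87.0)       # degraded link, clearly better AP
```

The tension the thesis describes is visible in the two parameters: a lower threshold saves energy but delays hand-off on highly variable low-power links, while a larger hysteresis margin reduces ping-pong at the cost of responsiveness.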
Abstract:
Sparse matrix-vector multiplication (SMVM) is a fundamental operation in many scientific and engineering applications. Sparse matrices often have thousands of rows and columns in which most entries are zero, with the non-zero data spread over the matrix. This sparsity reduces data locality and thus the effectiveness of the data cache in general-purpose processors, greatly reducing their performance efficiency compared with dense matrix multiplication. In this paper, we propose a parallel processing solution for SMVM on a many-core architecture. The architecture is tested with known benchmarks on a ZYNQ-7020 FPGA. It is scalable in the number of core elements and limited only by the available memory bandwidth, achieving performance efficiencies of up to almost 70% and better performance than previous FPGA designs.
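The sequential kernel such architectures accelerate is the CSR (compressed sparse row) matrix-vector product. A minimal pure-Python version, shown only to make the data-locality problem concrete: the indirect access `x[col_idx[k]]` is what defeats the cache on general-purpose processors.

```python
# y = A @ x with A stored in CSR form: values holds the non-zeros row by row,
# col_idx their column positions, and row_ptr the start of each row in values.

def csr_matvec(values, col_idx, row_ptr, x):
    """Only non-zero entries are touched; the gather through col_idx is the
    irregular memory access pattern that hurts cache locality."""
    y = []
    for row in range(len(row_ptr) - 1):
        acc = 0.0
        for k in range(row_ptr[row], row_ptr[row + 1]):
            acc += values[k] * x[col_idx[k]]
        y.append(acc)
    return y

# A = [[4, 0, 1],
#      [0, 0, 2],
#      [0, 3, 0]]
values, col_idx, row_ptr = [4.0, 1.0, 2.0, 3.0], [0, 2, 2, 1], [0, 2, 3, 4]
y = csr_matvec(values, col_idx, row_ptr, [1.0, 1.0, 1.0])
# y == [5.0, 2.0, 3.0]
```

Each row's dot product is independent, which is the parallelism a many-core design exploits; the shared vector x is what makes memory bandwidth the limit.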
Abstract:
With the greater capacity of processing nodes in terms of computing power, more and more data-intensive applications, such as bioinformatics applications, will be executed on non-dedicated clusters. Non-dedicated clusters are characterized by their ability to combine the execution of local users' applications with scientific or commercial applications executed in parallel. Knowing what effect data-intensive applications produce when mixed with other types (batch, interactive, SRT, etc.) in non-dedicated environments enables the development of more efficient scheduling policies. Some I/O-intensive applications are based on the MapReduce paradigm, and the environments that use them, such as Hadoop, handle data locality and load balancing automatically and work with distributed file systems. Hadoop's performance can be improved without increasing hardware costs by tuning several key configuration parameters to the cluster's specifications, the size of the input data and the complexity of the processing. Tuning these parameters can be too complex for the user and/or administrator, but it seeks to guarantee more adequate performance. This work proposes the evaluation of the impact of I/O-intensive applications on job scheduling in non-dedicated clusters under the MPI and MapReduce paradigms.
Abstract:
Background Accurate automatic segmentation of the caudate nucleus in magnetic resonance images (MRI) of the brain is of great interest in the analysis of developmental disorders. Segmentation methods based on a single atlas or on multiple atlases have been shown to suitably localize the caudate structure. However, the atlas prior information may not represent the structure of interest correctly, so it may be useful to introduce a more flexible technique for accurate segmentation. Method We present CaudateCut: a new fully-automatic method for segmenting the caudate nucleus in MRI. CaudateCut combines an atlas-based segmentation strategy with the Graph Cut energy-minimization framework. We adapt the Graph Cut model to make it suitable for segmenting small, low-contrast structures, such as the caudate nucleus, by defining new data and boundary potentials for the energy function. In particular, we exploit intensity and geometry information, and we add supervised energies based on contextual brain structures. Furthermore, we reinforce boundary detection using a new multi-scale edgeness measure. Results We apply the novel CaudateCut method to segment the caudate nucleus in a new set of 39 pediatric attention-deficit/hyperactivity disorder (ADHD) patients and 40 control children, as well as in a public database of 18 subjects. We evaluate the quality of the segmentation using several volumetric and voxel-by-voxel measures. Our results show improved segmentation performance compared to state-of-the-art approaches, obtaining a mean overlap of 80.75%. Moreover, we present a quantitative volumetric analysis of caudate abnormalities in pediatric ADHD, the results of which show strong correlation with expert manual analysis.
Conclusion CaudateCut generates segmentation results that are comparable to gold-standard segmentations and which are reliable in the analysis of differentiating neuroanatomical abnormalities between healthy controls and pediatric ADHD.
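For context, the Graph Cut framework that CaudateCut builds on minimizes an energy of the standard form (generic notation; the paper defines its own data and boundary potentials):

```latex
E(L) \;=\; \sum_{p \in \mathcal{P}} D_{p}(L_{p})
\;+\; \lambda \sum_{(p,q) \in \mathcal{N}} V_{p,q}(L_{p}, L_{q}),
```

where the data term D_p scores assigning label L_p (object or background) to voxel p, the boundary term V_{p,q} penalizes label discontinuities between neighboring voxels in the neighborhood system N, and λ balances the two. Redefining D_p and V_{p,q} for small, low-contrast structures is precisely where the method's intensity, geometry, and edgeness terms enter.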