924 results for Efficiency analysis


Relevance:

30.00%

Publisher:

Abstract:

Hardware vendors invest considerable effort in creating low-power CPUs that keep battery life and durability above acceptable levels. To achieve this goal and provide a good performance-energy trade-off for a wide variety of applications, ARM designed the big.LITTLE architecture. This heterogeneous multi-core architecture features two different types of cores: big cores oriented to performance, and little cores, which are slower and aimed at reducing energy consumption. As all the cores have access to the same memory, multi-threaded applications must resort to some mutual exclusion mechanism to coordinate access to shared data by concurrent threads. Transactional Memory (TM) represents an optimistic approach to shared-memory synchronization. To take full advantage of the features offered by software TM, but also benefit from the characteristics of heterogeneous big.LITTLE architectures, our focus is to propose TM solutions that take into account the power/performance requirements of the application and what is offered by the architecture. To understand the current state of the art and obtain useful information for future power-aware software TM solutions, we have analyzed a popular TM library running on top of an ARM big.LITTLE processor. Experiments show, in general, better scalability on the LITTLE cores for most applications, except for one that requires the computing performance offered by the big cores.
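As a minimal illustration of the optimistic synchronization idea that TM generalizes (not code from the thesis or from any particular TM library), the sketch below contrasts a lock-based update with a version-validated, retry-based update of a shared counter; all class and variable names are hypothetical.

```python
import threading

# Pessimistic synchronization: every update takes the lock.
class LockedCounter:
    def __init__(self):
        self._lock = threading.Lock()
        self.value = 0

    def add(self, delta):
        with self._lock:
            self.value += delta

# Optimistic synchronization (the idea behind TM): read a version,
# compute speculatively, and commit only if no writer intervened.
class OptimisticCounter:
    def __init__(self):
        self._commit_lock = threading.Lock()  # used only to validate and commit
        self.version = 0
        self.value = 0

    def add(self, delta):
        while True:
            snapshot_version = self.version
            new_value = self.value + delta          # speculative computation
            with self._commit_lock:
                if self.version == snapshot_version:  # no conflict: commit
                    self.value = new_value
                    self.version += 1
                    return
            # conflict detected: retry the whole "transaction"

def worker(counter, n):
    for _ in range(n):
        counter.add(1)

if __name__ == "__main__":
    c = OptimisticCounter()
    threads = [threading.Thread(target=worker, args=(c, 10_000)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(c.value)  # expected: 40000
```

A real software TM tracks per-transaction read and write sets rather than a single version counter, and on a big.LITTLE system a power-aware runtime could additionally decide on which core type to run or retry transactions.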

Relevance:

30.00%

Publisher:

Abstract:

The increasing use of fossil fuels, together with the demographic explosion of cities, has a huge environmental impact on society. To mitigate these impacts, regulatory requirements have positively influenced the environmental consciousness of society as well as the strategic behavior of businesses. Along with this environmental awareness, regulatory bodies have formulated new laws to control potentially polluting activities, particularly in the gas station sector. Seeking to increase market competitiveness, this sector needs to respond quickly to internal and external pressures, adapting strategically to the new standards required to obtain the Green Badge. Gas stations have incorporated new strategies to attract and retain customers, who present increasing social demands. In the social dimension, these projects help the local economy by generating jobs and distributing income. The present research aims to align the social, economic and environmental dimensions to establish sustainable performance indicators for the gas station sector in the city of Natal/RN. The Sustainable Balanced Scorecard (SBSC) framework was created with a set of indicators for mapping the production process of gas stations. This mapping aimed at identifying operational inefficiencies through multidimensional indicators. To carry out this research, a system for evaluating sustainability performance was developed, applying Data Envelopment Analysis (DEA) in a quantitative approach to determine each unit's efficiency level. To understand the systemic complexity, sub-organizational processes were analyzed with Network Data Envelopment Analysis (NDEA), mapping their micro-activities to identify and diagnose the real causes of overall inefficiency. The sample comprised 33 gas stations, and the conceptual model included 15 indicators distributed across the three dimensions of sustainability: social, environmental and economic. These three dimensions were measured by means of the classical input-oriented DEA-CCR model. To unify the performance scores of the individual dimensions, a single grouping index was designed based on two means: arithmetic and weighted. Another analysis was then performed to measure the four perspectives of the SBSC (learning and growth, internal processes, customers, and financial), unified by averaging the performance scores. NDEA results showed that no company achieved excellence in sustainability performance. Some gas stations with higher NDEA efficiency proved to be inefficient under certain SBSC perspectives. Subsequently, a comparative analysis of sustainable performance among the gas stations was carried out, enabling entrepreneurs to evaluate their performance against market competitors. Diagnoses were also obtained to support entrepreneurs' decision-making in improving the management of organizational resources and to provide guidelines for regulators. Finally, the average sustainable performance index was 69.42%, reflecting the environmental compliance efforts of the gas stations. These results point to significant awareness in this segment, but further action is still needed to enhance sustainability in the long term.
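For reference, here is a minimal sketch of the input-oriented DEA-CCR envelopment model mentioned above, solved as a linear program with SciPy; the input/output data are illustrative placeholders, not the 33-station sample or the 15 sustainability indicators.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_input_oriented(X, Y, o):
    """Efficiency of DMU `o` under the input-oriented CCR envelopment model.

    X: (m_inputs, n_dmus) input matrix, Y: (s_outputs, n_dmus) output matrix.
    Decision variables: z = [theta, lambda_1, ..., lambda_n].
    """
    m, n = X.shape
    s, _ = Y.shape
    c = np.zeros(n + 1)
    c[0] = 1.0                                    # minimise theta

    # Inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[:, [o]], X])
    b_in = np.zeros(m)
    # Outputs: -sum_j lambda_j * y_rj <= -y_ro  (outputs at least y_ro)
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    b_out = -Y[:, o]

    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (n + 1),
                  method="highs")
    return res.x[0]  # theta = efficiency score in (0, 1]

# Illustrative data only: 2 inputs, 1 output, 4 DMUs.
X = np.array([[4.0, 7.0, 8.0, 4.0],
              [140., 120., 80., 100.]])
Y = np.array([[2.0, 3.0, 1.0, 1.0]])

for o in range(X.shape[1]):
    print(f"DMU {o}: efficiency = {ccr_input_oriented(X, Y, o):.3f}")
```

Running this per sustainability dimension and then averaging the scores (arithmetically or with weights) mirrors the grouping-index idea described in the abstract.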

Relevance:

30.00%

Publisher:

Abstract:

Cotton is the most abundant natural fiber in the world. Many countries are involved in the growing, importation, exportation and production of this commodity. Paper documentation claiming geographic origin is the current method employed at U.S. ports for identifying cotton sources and enforcing tariffs. Because customs documentation can be easily falsified, it is necessary to develop a robust method for authenticating or refuting the source of cotton commodities. This work presents, for the first time, a comprehensive approach to the chemical characterization of unprocessed cotton in order to provide an independent tool to establish geographic origin. Elemental and stable isotope ratio analysis of unprocessed cotton provides a means to increase the ability to distinguish cotton, in addition to any physical and morphological examinations that could be, and currently are, performed. Elemental analysis has been conducted using LA-ICP-MS, LA-ICP-OES and LIBS in order to offer a direct comparison of the analytical performance of each technique and determine the utility of each technique for this purpose. Multivariate predictive modeling approaches are used to determine the potential of elemental and stable isotopic information to aid in the geographic provenancing of unprocessed cotton of both domestic and foreign origin. These approaches assess the stability of the profiles to temporal and spatial variation to determine the feasibility of this application. This dissertation also evaluates plasma conditions and ablation processes so as to improve the quality of analytical measurements made using atomic emission spectroscopy techniques. These interactions, in LIBS particularly, are assessed to determine any potential simplification of the instrumental design and method development phases. This is accomplished through the analysis of several matrices representing different physical substrates, to determine the potential of adopting universal values for some important operating parameters of 532 nm and 1064 nm LIBS. A novel approach to evaluate both ablation processes and plasma conditions using a single measurement was developed and utilized to determine the “useful ablation efficiency” for different materials. The work presented here demonstrates the potential for an a priori prediction of some probable laser parameters important in analytical LIBS measurement.
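As a rough sketch of the multivariate predictive modeling step (the specific classifier and element menu used in the dissertation are not stated here, so both are assumptions), the example below trains a linear discriminant classifier on synthetic elemental profiles and estimates provenance-classification accuracy by cross-validation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder feature matrix: rows are cotton samples, columns are element
# concentrations (e.g. Mg, Al, Mn, Fe, Sr, Ba) - synthetic values only.
n_per_region = 30
regions = ["US", "India", "Brazil"]
region_means = ([5, 3, 2, 8, 1, 4], [6, 4, 3, 7, 2, 3], [4, 5, 1, 9, 1, 5])
X = np.vstack([rng.normal(loc=mu, scale=1.0, size=(n_per_region, 6))
               for mu in region_means])
y = np.repeat(regions, n_per_region)

# Linear discriminant analysis as one simple multivariate classifier;
# cross-validation estimates how well elemental profiles separate
# growing regions in this synthetic setting.
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```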

Relevance:

30.00%

Publisher:

Abstract:

This dissertation introduces a new approach for assessing the effects of pediatric epilepsy on the language connectome. Two novel data-driven network construction approaches are presented. These methods rely on connecting different brain regions using either the extent or the intensity of language-related activations, as identified by independent component analysis of fMRI data. An auditory description decision task (ADDT) paradigm was used to activate the language network for 29 patients and 30 controls recruited from three major pediatric hospitals. Empirical evaluations illustrated that pediatric epilepsy can cause, or is associated with, a reduction in network efficiency. Patients showed a propensity to inefficiently employ the whole brain network to perform the ADDT language task; in contrast, controls seemed to efficiently use smaller segregated network components to achieve the same task. To explain the causes of the decreased efficiency, graph theoretical analysis was carried out. The analysis revealed no substantial global network feature differences between the patient and control groups. It also showed that for both subject groups the language network exhibited small-world characteristics; however, the patients' extent-of-activation network showed a tendency towards more random networks. It was also shown that the intensity-of-activation network displayed ipsilateral hub reorganization at the local level. The left hemispheric hubs displayed greater centrality values for patients, whereas the right hemispheric hubs displayed greater centrality values for controls. This hub hemispheric disparity was not correlated with the right atypical language laterality found in six patients. Finally, it was shown that a multi-level unsupervised clustering scheme based on self-organizing maps (a type of artificial neural network) and k-means was able to fairly and blindly separate the subjects into their respective patient or control groups. The clustering was initiated using only the local nodal centrality measurements. Compared to the extent-of-activation network, clustering on the intensity-of-activation network demonstrated better precision. This outcome supports the assertion that the local centrality differences presented by the intensity-of-activation network can be associated with focal epilepsy.
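A simplified sketch of the graph-theoretical measurements described above: global efficiency and nodal betweenness centrality are computed with networkx on toy random graphs standing in for activation-based language networks, and subjects are then blindly grouped with k-means from the centrality profiles alone (the self-organizing-map stage is omitted; all sizes and parameters are illustrative).

```python
import numpy as np
import networkx as nx
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

def language_network(n_regions=20, density=0.2):
    """Toy stand-in for an activation-based connectivity graph."""
    return nx.gnp_random_graph(n_regions, density, seed=int(rng.integers(1_000_000)))

def centrality_features(G):
    """One feature vector per subject: nodal betweenness centrality,
    a simplified analogue of the 'local nodal centrality' features."""
    bc = nx.betweenness_centrality(G)
    return np.array([bc[n] for n in sorted(G.nodes)])

subjects = [language_network() for _ in range(20)]
features = np.vstack([centrality_features(G) for G in subjects])

# Network efficiency per subject (global efficiency from graph theory).
efficiencies = [nx.global_efficiency(G) for G in subjects]
print(f"Mean global efficiency: {np.mean(efficiencies):.3f}")

# Unsupervised grouping of subjects from the centrality profiles alone.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print("Cluster assignments:", labels)
```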

Relevance:

30.00%

Publisher:

Abstract:

Bladder cancer is among the most common cancers in the UK, and conventional detection techniques suffer from low sensitivity, low specificity, or both. Recent attempts to address these shortcomings have led to progress in the field of autofluorescence as a means of diagnosing the disease with high efficiency; however, much remains unknown about autofluorescence profiles in the disease. The multi-functional diagnostic system "LAKK-M" was used to assess autofluorescence profiles of healthy and cancerous bladder tissue to identify novel biomarkers of the disease. Statistically significant differences were observed between tissue types in the optical redox ratio (a measure of tissue metabolic activity), the amplitude of endogenous porphyrins, and the NADH/porphyrin ratio. These findings could advance understanding of bladder cancer and aid in the development of new techniques for detection and surveillance.
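A purely illustrative calculation of the ratios mentioned above; the amplitudes are made-up numbers, and the redox ratio is computed with one common convention, FAD/(FAD+NADH), which may differ from the definition used with the LAKK-M system.

```python
# Illustrative fluorescence amplitudes (arbitrary units), not study data.
nadh_amplitude = 12.5       # NADH channel
fad_amplitude = 8.0         # FAD channel
porphyrin_amplitude = 3.2   # endogenous porphyrins

# One common convention for the optical redox ratio; the literature also
# uses NADH/FAD or FAD/NADH, so the definition should be checked per study.
redox_ratio = fad_amplitude / (fad_amplitude + nadh_amplitude)
nadh_porphyrin_ratio = nadh_amplitude / porphyrin_amplitude

print(f"Optical redox ratio:  {redox_ratio:.2f}")
print(f"NADH/porphyrin ratio: {nadh_porphyrin_ratio:.2f}")
```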

Relevance:

30.00%

Publisher:

Abstract:

The objective of this thesis is to explore new and improved methods for greater sample introduction efficiency and enhanced analytical performance with inductively coupled plasma optical emission spectrometry (ICP-OES). Three projects are discussed in which the capabilities and applications of ICP-OES are expanded: 1. In the first project, a conventional ultrasonic nebuliser was modified to replace the heater/condenser with an infrared heated pre-evaporation tube. Continuing previous work on pre-evaporation, the current work investigated the effects of heating with infrared block and rope heaters on two different ICP-OES instruments. Comparisons were made between several methods and setups in which temperatures were varied. By monitoring changes to sensitivity, detection limit, precision, and robustness, and analyzing two certified reference materials, a method with improved sample introduction efficiency and analytical performance comparable to a previous method was established. 2. The second project involved improvements to a previous work in which a multimode sample introduction system (MSIS) was modified by inserting a pre-evaporation tube between the MSIS and the torch. The new work focused on applying an infrared heated ceramic rope for pre-evaporation. This research was conducted in all three MSIS modes (nebulisation mode, hydride generation mode, and dual mode) and on two different ICP-OES instruments, and comparisons were made with conventional setups in terms of sensitivity, detection limit, precision, and robustness. By tracking both hydride-forming and non-hydride-forming elements, the effects of heating in combination with hydride generation were probed. Finally, optimal methods were validated by analysis of two certified reference materials. 3. A final project was completed in collaboration with ZincNyx Energy Solutions. This project sought to develop a method for the overall analysis of a 12 M KOH zincate fuel, which is used in green energy backup systems. By employing various techniques including flow injection analysis and standard additions, a final procedure was formulated for the verification of K concentration, as well as the measurement of additives (Al, Fe, Mg, In, Si), corrosion products (such as C from CO₃²⁻), and Zn particles both in and filtered from solution. Furthermore, the effects of exposing the potassium zincate electrolyte fuel to air were assessed.
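As a small sketch of the standard-additions evaluation mentioned in the third project (the concentrations, signals and dilution factor below are illustrative placeholders, not the thesis data), the analyte concentration is obtained by extrapolating the spiked-calibration line to its x-intercept:

```python
import numpy as np

# Standard additions: spike known analyte amounts into aliquots of the sample
# and extrapolate the signal-vs-added-concentration line back to the x-axis;
# the magnitude of the x-intercept is the concentration in the measured solution.
added_conc = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # mg/L added (illustrative)
signal = np.array([0.52, 0.80, 1.07, 1.35, 1.62])  # emission intensity (illustrative)

slope, intercept = np.polyfit(added_conc, signal, 1)
sample_conc = intercept / slope        # concentration in the measured solution
dilution_factor = 1000                 # illustrative, e.g. if the fuel was diluted 1000x
print(f"Concentration in measured solution: {sample_conc:.3f} mg/L")
print(f"Concentration in original sample:   {sample_conc * dilution_factor:.1f} mg/L")
```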

Relevance:

30.00%

Publisher:

Abstract:

Power efficiency is one of the most important constraints in the design of embedded systems, since such systems are generally driven by batteries with a limited energy budget or a restricted power supply. In every embedded system, there are one or more processor cores to run the software and interact with the other hardware components of the system. The power consumption of the processor core(s) has an important impact on the total power dissipated in the system. Hence, processor power optimization is crucial for satisfying the power consumption constraints and developing low-power embedded systems. A key aspect of research in processor power optimization and management is “power estimation”. Having a fast and accurate method for processor power estimation at design time helps the designer to explore a large space of design possibilities and make the optimal choices for developing a power-efficient processor. Likewise, understanding the processor power dissipation behaviour of a specific software application is key to choosing appropriate algorithms in order to write power-efficient software. Simulation-based methods for measuring processor power achieve very high accuracy, but are available only late in the design process and are often quite slow. Therefore, the need has arisen for faster, higher-level power prediction methods that allow the system designer to explore many alternatives for developing power-efficient hardware and software. The aim of this thesis is to present fast, high-level power models for the prediction of processor power consumption. Power predictability in this work is achieved in two ways: first, using a design method to develop power-predictable circuits; second, analysing the power of the functions in the code that repeat during execution, and then building the power model based on the average number of repetitions. In the first case, a design method called Asynchronous Charge Sharing Logic (ACSL) is used to implement the Arithmetic Logic Unit (ALU) of the 8051 microcontroller. The ACSL circuits are power predictable because their power consumption is independent of the input data. Based on this property, a fast prediction method is presented to estimate the power of the ALU by analysing the software program and extracting the number of ALU-related instructions. This method achieves less than 1% error in power estimation and more than 100 times speedup compared to conventional simulation-based methods. In the second case, an average-case processor energy model is developed for the insertion sort algorithm based on the number of comparisons that take place during execution of the algorithm. The average number of comparisons is calculated using a high-level methodology called MOdular Quantitative Analysis (MOQA). The parameters of the energy model are measured for the LEON3 processor core, but the model is general and can be used for any processor. The model has been validated through power measurement experiments, and offers high accuracy and orders of magnitude speedup over the simulation-based method.
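A hedged sketch of what a comparison-count-based average-case energy model can look like: the comparison counting below is plain insertion sort, the per-comparison energy and overhead constants are placeholders rather than the LEON3 measurements, and the average count is estimated empirically instead of via MOQA.

```python
import random

def insertion_sort_comparisons(arr):
    """Count key comparisons performed by a plain insertion sort."""
    a = list(arr)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1
            if a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            else:
                break
        a[j + 1] = key
    return comparisons

def average_comparisons(n, trials=2000):
    """Empirical estimate of the average-case comparison count."""
    total = 0
    for _ in range(trials):
        data = random.sample(range(10 * n), n)
        total += insertion_sort_comparisons(data)
    return total / trials

# Energy model of the form E(n) = E_cmp * C_avg(n) + E_overhead, where the
# per-comparison energy and the fixed overhead would come from processor
# measurements (placeholder numbers here, not the thesis's LEON3 values).
E_CMP_NJ = 2.0        # nanojoules per comparison (illustrative)
E_OVERHEAD_NJ = 500.0 # fixed overhead per call (illustrative)

for n in (16, 32, 64):
    c_avg = average_comparisons(n)
    print(f"n={n:3d}  avg comparisons ~ {c_avg:8.1f}  "
          f"predicted energy ~ {E_CMP_NJ * c_avg + E_OVERHEAD_NJ:9.1f} nJ")
```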

Relevance:

30.00%

Publisher:

Abstract:

Accounting for around 40% of total final energy consumption, the building stock is an important area of focus on the way to reaching the energy goals set for the European Union. The relatively small share of new buildings makes renovation of existing buildings possibly the most feasible way of improving the overall energy performance of the building stock. This of course involves improvements to the climate shell, for example by adding insulation or changing window glazing, but also the installation of new heating systems, to increase energy efficiency and to fit the new heat load after renovation. In the choice of systems for heating, ventilation and air conditioning (HVAC), it is important to consider their performance for space heating as well as for domestic hot water (DHW), especially for a renovated house where the DHW share of the total heating consumption is larger. The present study treats the retrofitting of a generic single-family house, which was defined as a reference building in a European energy renovation project. Three HVAC retrofitting options were compared from a techno-economic point of view: A) air-to-water heat pump (AWHP) and mechanical ventilation with heat recovery (MVHR); B) exhaust air heat pump (EAHP) with low-temperature ventilation radiators; and C) gas boiler and ventilation with MVHR. The systems were simulated for houses with two levels of heating demand and four different locations: Stockholm, Gdansk, Stuttgart and London. They were then evaluated by means of life cycle cost (LCC) and primary energy consumption. Dynamic simulations were done in TRNSYS 17. In most cases, system C with the gas boiler and MVHR was found to be the cheapest retrofitting option from a life cycle perspective. The advantage over the heat pump systems was particularly clear for a house in Germany, due to the large discrepancy between national prices of natural gas and electricity. In Sweden, where the price difference is much smaller, the heat pump systems had life cycle costs almost as low as, or even lower than, those of the gas boiler system. Considering the limited availability of natural gas in Sweden, systems A and B would be the better options there. From a primary energy point of view, system A was the best option throughout, while system B often had the highest primary energy consumption. The limited capacity of the EAHP forced it to use more auxiliary heating than the other systems did, which lowered its COP. The AWHP managed the DHW load better due to its higher capacity, but had a lower COP than the EAHP in space heating mode. Systems A and C were notably favoured by the air heat recovery, which significantly reduced the heating demand. It was also seen that the DHW share of the total heating consumption was, as expected, larger for the house with the lower space heating demand. This confirms the supposition that it is important to include DHW in the study of HVAC systems for retrofitting.
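For orientation, here is a minimal life cycle cost comparison in the spirit of the study: investment plus yearly running costs discounted to present value. The cost figures, horizon and discount rate are placeholders, not the TRNSYS-based results.

```python
def life_cycle_cost(investment, annual_energy_cost, annual_maintenance,
                    years=30, discount_rate=0.03):
    """Present value of the investment plus discounted yearly running costs."""
    npv_running = sum(
        (annual_energy_cost + annual_maintenance) / (1 + discount_rate) ** t
        for t in range(1, years + 1)
    )
    return investment + npv_running

# Placeholder figures (EUR), not the study's simulated costs.
systems = {
    "A: AWHP + MVHR":       life_cycle_cost(18000, 900, 150),
    "B: EAHP + vent. rad.": life_cycle_cost(15000, 1100, 150),
    "C: gas boiler + MVHR": life_cycle_cost(12000, 1200, 200),
}

for name, lcc in sorted(systems.items(), key=lambda kv: kv[1]):
    print(f"{name:25s} LCC ~ {lcc:,.0f} EUR")
```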

Relevance:

30.00%

Publisher:

Abstract:

Maintenance of transport infrastructure assets is widely advocated as key to minimizing current and future costs of the transportation network. While effective maintenance decisions are often a result of engineering skills and practical knowledge, efficient decisions must also account for the net result over an asset's life cycle. One essential aspect of the long-term perspective on transport infrastructure maintenance is to proactively estimate maintenance needs. In dealing with immediate maintenance actions, support tools that can prioritize potential maintenance candidates are important for obtaining an efficient maintenance strategy. This dissertation consists of five individual research papers presenting a microdata analysis approach to transport infrastructure maintenance. Microdata analysis is a multidisciplinary field in which large quantities of data are collected, analyzed, and interpreted to improve decision-making. Increased access to transport infrastructure data enables a deeper understanding of causal effects and a possibility to make predictions of future outcomes. The microdata analysis approach covers the complete process from data collection to actual decisions and is therefore well suited to the task of improving efficiency in transport infrastructure maintenance. Statistical modeling was the selected analysis method in this dissertation and provided solutions to the different problems presented in each of the five papers. In Paper I, a time-to-event model was used to estimate remaining road pavement lifetimes in Sweden. In Paper II, an extension of the model in Paper I assessed the impact of latent variables on road lifetimes, identifying the sections in a road network that are weaker due to, e.g., subsoil conditions or undetected heavy traffic. The study in Paper III incorporated a probabilistic parametric distribution as a representation of road lifetimes into an equation for the marginal cost of road wear. Differentiated road wear marginal costs for heavy and light vehicles are an important information basis for decisions regarding vehicle miles traveled (VMT) taxation policies. In Paper IV, a distribution-based clustering method was used to distinguish between road segments that are deteriorating and road segments that have a stationary road condition. Within railway networks, temporary speed restrictions are often imposed because of maintenance and must be addressed in order to maintain punctuality. The study in Paper V evaluated the empirical effect of speed restrictions on running time on a Norwegian railway line using a generalized linear mixed model.
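A simplified sketch of a parametric time-to-event analysis of pavement lifetimes, fitting a two-parameter Weibull distribution to synthetic lifetimes; the papers' actual models also handle censoring and covariates, which this sketch ignores.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(42)

# Synthetic "observed pavement lifetimes" in years (no censoring handled here;
# the time-to-event models in the dissertation are more elaborate).
lifetimes = weibull_min.rvs(c=2.5, scale=17.0, size=200, random_state=rng)

# Fit a two-parameter Weibull (location fixed at 0) and report quantities
# useful for proactive maintenance planning.
shape, loc, scale = weibull_min.fit(lifetimes, floc=0)
median_life = weibull_min.median(shape, loc=loc, scale=scale)
p10_life = weibull_min.ppf(0.10, shape, loc=loc, scale=scale)

print(f"Fitted shape={shape:.2f}, scale={scale:.1f} years")
print(f"Median lifetime ~ {median_life:.1f} years")
print(f"10% of sections expected to need maintenance by ~ {p10_life:.1f} years")
```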