954 results for CBIR, descriptors, performance, indicator
Abstract:
Evaluating the reliability of the manufacturing process is a crucial task in the product development process. Process reliability is a measure of the production ability of a reconfigurable manufacturing system (RMS), serving as an integrated performance indicator of the production process under specified technical constraints, including time, cost and quality. An integrated framework for evaluating manufacturing process reliability within the product development process is presented. A mathematical model and algorithm based on the universal generating function (UGF) are developed for calculating the reliability of the manufacturing process with respect to task intensity and process capacity, both treated as independent random variables. The rework strategies of the RMS under different task intensities are analyzed on the basis of process reliability, and the optimization of rework strategies based on process reliability is discussed afterwards.
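The UGF-style reliability calculation described above can be sketched as follows. The distributions below are illustrative assumptions (the abstract gives no numbers), and the reliability measure shown is simply P(process capacity ≥ task intensity) for two independent discrete random variables:

```python
from itertools import product

def ugf_reliability(capacity, intensity):
    """capacity, intensity: dicts mapping a discrete value -> its probability.
    Returns P(capacity >= intensity), a simple process-reliability measure
    obtained by composing the two generating functions term by term."""
    return sum(pc * pi
               for (c, pc), (d, pi) in product(capacity.items(), intensity.items())
               if c >= d)

# Hypothetical example: capacity states of an RMS configuration vs. task demand.
capacity  = {80: 0.2, 100: 0.5, 120: 0.3}   # units per shift -> probability
intensity = {90: 0.6, 110: 0.4}             # demanded units per shift -> probability

print(ugf_reliability(capacity, intensity))
```

The double sum over value-probability pairs is exactly the product of the two u-functions with the `>=` operator as the composition rule, which is the core mechanic of the UGF approach.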
Abstract:
This paper presents the validation of the Performance Indicator System for Projects under Construction (SIDECC). The goal was to develop a system of performance indicators from a macroergonomic approach, considering criteria of usefulness, practicality and applicability and the concept of continuous improvement in the construction industry. The SIDECC validation process consisted of three distinct modeling stages. Modeling I corresponded to the theoretical development and validation of a system of indicators. Modeling II concerned the development and validation of the multi-indicator system; for this modeling, the Matrix of Use and Importance and multivariate analysis were used. Modeling III corresponded to situated validation, consisting of a case study of a building construction project in which the results of Modeling II were applied and analyzed. This work resulted in a methodology, applied and tested in construction, for developing an integrated system of performance indicators covering aspects of production, quality, environment, and health and safety. It is inferred that SIDECC can be applied, in full or in part, to construction companies as a whole, as well as in other economic sectors.
Abstract:
“Availability” is the terminology used in asset-intensive industries such as petrochemical and hydrocarbon processing to describe the readiness of equipment, systems or plants to perform their designed functions. It is a measure of a facility’s capability to meet targeted production in a safe working environment. Availability is also vital because it encompasses reliability and maintainability, allowing engineers to manage and operate facilities by focusing on one performance indicator. These benefits make availability a demanding and highly desired area of interest and research for both industry and academia. In this dissertation, new models, approaches and algorithms have been explored to estimate and manage the availability of complex hydrocarbon processing systems. The risk of equipment failure and its effect on availability is vital in the hydrocarbon industry, and is also explored in this research. The importance of availability has encouraged companies to invest effort and resources in developing novel techniques for system availability enhancement. Most of the work in this area has focused on individual equipment rather than facility- or system-level availability assessment and management. This research focuses on developing new systematic methods to estimate system availability. The main focus areas are availability estimation and management through physical asset management, risk-based availability estimation strategies, availability and safety using a failure assessment framework, and availability enhancement using early equipment fault detection and maintenance scheduling optimization.
Abstract:
The dissertation consists of three chapters related to the low-price guarantee marketing strategy and energy efficiency analysis. The low-price guarantee is a marketing strategy in which firms promise to charge consumers the lowest price among their competitors. Chapter 1 addresses the research question "Does a Low-Price Guarantee Induce Lower Prices?" by looking into the retail gasoline industry in Quebec, where a major branded firm started a low-price guarantee back in 1996. Chapter 2 conducts a consumer welfare analysis of low-price guarantees to derive policy implications and offers a new explanation of firms' incentives to adopt a low-price guarantee. Chapter 3 develops energy performance indicators (EPIs) to measure the energy efficiency of manufacturing plants in the pulp, paper and paperboard industry.
Chapter 1 revisits the traditional view that a low-price guarantee results in higher prices by facilitating collusion. Using accurate market definitions and station-level data from the retail gasoline industry in Quebec, I conducted a descriptive analysis based on stations and price zones to compare price and sales movements before and after the guarantee was adopted. I find that, contrary to the traditional view, the stores that offered the guarantee significantly decreased their prices and increased their sales. I also build a difference-in-differences model, which quantifies the decrease in the posted price of the stores that offered the guarantee at 0.7 cents per liter. While this change is significant, I do not find a significant response in competitors' prices. The sales of the stores that offered the guarantee increased significantly while the competitors' sales decreased significantly; however, the significance vanishes if I use station-clustered standard errors. Comparing my observations with the predictions of different theories of low-price guarantees, I conclude that the empirical evidence supports the view that the low-price guarantee is a simple commitment device and induces lower prices.
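The difference-in-differences logic described above can be sketched with made-up posted prices (cents per liter); the real analysis uses station-level panel data and clustered standard errors, which this toy comparison of group means omits:

```python
def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DiD estimate: price change in the treated (guarantee) group,
    net of the trend observed in the control (competitor) group."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# Hypothetical posted prices before/after the guarantee is adopted.
guarantee_pre   = [104.1, 103.9, 104.0]
guarantee_post  = [103.2, 103.0, 103.1]
competitor_pre  = [104.0, 104.2]
competitor_post = [103.8, 104.0]

print(diff_in_diff(guarantee_pre, guarantee_post, competitor_pre, competitor_post))
```

With these fabricated numbers the estimate comes out to -0.7 cents per liter, mirroring the magnitude reported in the chapter; the numbers themselves are chosen for illustration only.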
Chapter 2 conducts a consumer welfare analysis of low-price guarantees to address antitrust concerns and potential government regulation, and explains firms' potential incentives to adopt a low-price guarantee. Using station-level data from the retail gasoline industry in Quebec, I estimated consumers' demand for gasoline with a structural model of spatial competition that incorporates the low-price guarantee as a commitment device, allowing firms to pre-commit to charging the lowest price among their competitors. Counterfactual analysis under the Bertrand competition setting shows that the stores that offered the guarantee attracted substantially more consumers and decreased their posted price by 0.6 cents per liter. Although the matching stores suffered a decrease in profits from gasoline sales, they are incentivized to adopt the low-price guarantee to attract more consumers to the store, likely increasing profits at attached convenience stores. Firms have strong incentives to adopt a low-price guarantee on the product about which their consumers are most price-sensitive, while earning a profit from products not covered by the guarantee. I estimate that consumers earn about 0.3% more surplus when the low-price guarantee is in place, which suggests that the authorities need not be concerned about, or regulate, low-price guarantees. In Appendix B, I also propose an empirical model to examine how low-price guarantees would change consumer search behavior and whether consumer search plays an important role in estimating consumer surplus accurately.
Chapter 3, joint with Gale Boyd, describes work with the pulp, paper, and paperboard (PP&PB) industry to provide a plant-level indicator of energy efficiency for facilities that produce various types of paper products in the United States. Organizations that implement strategic energy management programs undertake a set of activities that, if carried out properly, have the potential to deliver sustained energy savings. Energy performance benchmarking is a key activity of strategic energy management and one way to enable companies to set energy efficiency targets for manufacturing facilities. The opportunity to assess plant energy performance through comparison with similar plants in its industry is a highly desirable and strategic method of benchmarking for industrial energy managers. However, access to energy performance data for conducting industry benchmarking is usually unavailable to most industrial energy managers. The U.S. Environmental Protection Agency (EPA), through its ENERGY STAR program, seeks to overcome this barrier through the development of manufacturing sector-based plant energy performance indicators (EPIs) that encourage U.S. industries to use energy more efficiently. In the development of the EPI tools, consideration is given to the role that performance-based indicators play in motivating change; to the steps necessary for indicator development, from interacting with an industry to securing adequate data; and to the actual application and use of an indicator once complete. How indicators are employed in EPA’s efforts to encourage industries to voluntarily improve their use of energy is discussed as well. The chapter describes the data and statistical methods used to construct the EPI for plants within selected segments of the pulp, paper, and paperboard industry: specifically, pulp mills and integrated paper & paperboard mills.
The individual equations are presented, as are the instructions for using those equations as implemented in an associated Microsoft Excel-based spreadsheet tool.
Abstract:
The purpose of this study is to explore the link between decentralization and the impact of natural disasters through empirical analysis. It addresses the importance of the role of local government in disaster response through different means of decentralization. By studying data available for 50 countries, it develops knowledge of the role of national government in setting policy that allows flexibility and decision making at the local level, and of how this devolution of power influences the outcome of disasters. The study uses Aaron Schneider’s definition and rankings of decentralization, the EM-DAT database to identify the number of people affected by disasters on average per year, as well as World Bank indicators and the Human Development Index (HDI), to model the role of local decentralization in mitigating disasters. Using a multivariate regression, it examines the number of affected people as explained by fiscal, administrative and political decentralization, government expenses, percentage of urbanization, total population, population density, the HDI and the overall Logistics Performance Indicator (LPI). The main results are that total population, the overall LPI and fiscal decentralization are all significant in relation to the number of people affected by disasters for the countries and period studied. These findings have implications for government policy: fiscal decentralization, by allowing local governments to control a larger proportion of the country’s revenues and expenditures, plays a role in reducing the number of people affected by disasters. This can be explained by the fact that local governments understand their own needs better in both disaster prevention and response, which helps in taking the proper decisions to mitigate the number of people affected in a disaster. Reduced involvement of national government might also shorten reaction times when facing a disaster.
The main conclusion of this study is that fiscal control by local governments can help reduce the number of people affected by disasters.
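The multivariate regression described above can be sketched as an ordinary least squares fit. The data below are fabricated and the variable names illustrative (they are not the study's dataset); the sketch keeps only two of the study's regressors for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                                 # 50 countries, as in the study

# Fabricated regressors and outcome: people affected (in logs) falls with
# fiscal decentralization and rises with population, by construction.
fiscal_decent = rng.uniform(0, 1, n)   # fiscal decentralization score
log_pop = rng.normal(16, 2, n)         # log of total population
affected = 5.0 - 2.0 * fiscal_decent + 0.8 * log_pop + rng.normal(0, 0.5, n)

# OLS via least squares: columns are intercept, fiscal decent., log population.
X = np.column_stack([np.ones(n), fiscal_decent, log_pop])
beta, *_ = np.linalg.lstsq(X, affected, rcond=None)
print(beta)
```

With this construction the fitted coefficient on fiscal decentralization is negative, mirroring the direction of the study's finding; a real replication would add the remaining controls (LPI, HDI, urbanization, density, government expenses) and report standard errors.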
Abstract:
The aim of this study was to analyze differences in the learning achieved by students taught under two different teaching-learning methodologies. The sample consisted of 47 schoolchildren, 38.3% boys and 61.7% girls, distributed into two fifth-grade primary school groups, one receiving an intervention program under a comprehensive methodology (n=24) and the other under a traditional or technical methodology (n=27). The intervention programs were validated by a panel of experts. Variable coding and the calculation of the learning indicators were carried out with the instrument for measuring learning and performance in basketball. Inter-observer reliability, assessed with the multirater κ_free, was optimal (κ ≥ .80). A descriptive analysis of the learning indicators was performed, an ANOVA was used to identify differences between programs, and the real effects of the scores were explored through magnitude-based inferences. The results show improvement among the students who received the program under the alternative methodology, with greater learning in the decision-making performance indicators (p ≤ .01), the efficacy indicators (p ≤ .05), and the total performance indicator (p ≤ .05).
Abstract:
IS/IT investments are seen as having an enormous potential impact on the competitive position of the firm and on its performance, and they demand the active and motivated participation of several stakeholder groups. The shortfall of evidence concerning the productivity of IT became known as the ‘productivity paradox’. As Robert Solow, the Nobel laureate economist, stated: “we see computers everywhere except in the productivity statistics”. An important stream of research conducted all over the world, known in the literature as the «IS business value» field, has tried to understand this phenomenon. However, there is a gap in the literature addressing the Portuguese situation: no empirical work has been done to date to understand the impact of information technology adoption on the productivity of Portuguese firms. Using data from two surveys conducted by the Portuguese National Institute of Statistics (INE), the Inquiry into the use of IT by Portuguese companies (IUTIC) and the Harmonized Inquiry to (Portuguese) companies (accounting data), this study relates (using regression analysis) the amounts spent on IT to the financial performance indicator Return on Equity, as a proxy for firm productivity, for Portuguese companies with more than 250 employees. The aim of this paper is to shed light on the Portuguese situation concerning the impact of IS/IT on the productivity of Portuguese top companies. Empirically, we test the impact of IT expenditure on the firm productivity of a sample of large Portuguese companies. Our results, based on firm-level data on information technology expenditure and firm productivity as measured by return on equity (1186 observations) for the years 2003 and 2004, exhibit a negative impact of IT expenditure on firm productivity, in line with the claims of “productivity paradox” proponents.
Abstract:
Frequency-domain scheduling and rate adaptation have helped next generation orthogonal frequency division multiple access (OFDMA) based wireless cellular systems such as Long Term Evolution (LTE) achieve significantly higher spectral efficiencies. To overcome the severe uplink feedback bandwidth constraints, LTE uses several techniques to reduce the feedback required by a frequency-domain scheduler about the channel state information of all subcarriers of all users. In this paper, we analyze the throughput achieved by the User Selected Subband feedback scheme of LTE. In it, a user feeds back only the indices of the best M subbands and a single 4-bit estimate of the average rate achievable over all selected M subbands. In addition, we compare the performance with the subband-level feedback scheme of LTE, and highlight the role of the scheduler by comparing the performances of the unfair greedy scheduler and the proportional fair (PF) scheduler. Our analysis sheds several insights into the working of the feedback reduction techniques used in LTE.
Abstract:
Frequency-domain scheduling and rate adaptation enable next-generation orthogonal frequency-division multiple access (OFDMA) cellular systems such as Long-Term Evolution (LTE) to achieve significantly higher spectral efficiencies. LTE uses a pragmatic combination of several techniques to reduce the channel-state feedback that is required by a frequency-domain scheduler. In the subband-level feedback and user-selected subband feedback schemes specified in LTE, the user reduces feedback by reporting only the channel quality that is averaged over groups of resource blocks called subbands. This approach leads to an occasional incorrect determination of rate by the scheduler for some resource blocks. In this paper, we develop closed-form expressions for the throughput achieved by the feedback schemes of LTE. The analysis quantifies the joint effects of three critical components (the scheduler, the multiple-antenna mode, and the feedback scheme) on the overall system throughput, and brings out its dependence on system parameters such as the number of resource blocks per subband and the rate adaptation thresholds. The effect of the coarse subband-level frequency granularity of feedback is captured. The analysis provides an independent theoretical reference and a quick system parameter optimization tool to an LTE system designer, and theoretically helps in understanding the behavior of OFDMA feedback reduction techniques when operated under practical system constraints.
Abstract:
Frequency-domain scheduling and rate adaptation enable next generation wireless cellular systems such as Long Term Evolution (LTE) to achieve significantly higher downlink throughput. LTE assigns subcarriers in chunks, called physical resource blocks (PRBs), to users to reduce control signaling overhead. To reduce the enormous feedback overhead, the channel quality indicator (CQI) report that is used to feed back channel state information is averaged over a subband, which, in turn, is a group of multiple PRBs. In this paper, we develop closed-form expressions for the throughput achieved by the subband-level CQI feedback mechanism of LTE. We show that the coarse frequency resolution of the CQI incurs a significant loss in throughput and limits the multi-user gains achievable by the system. We then show that the performance can be improved by means of an offset mechanism that effectively makes the users more conservative in reporting their CQI.
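The throughput loss from subband-averaged CQI described above can be illustrated with a toy simulation (this is not the paper's closed-form analysis). The assumption in the sketch is a simplified outage model: the scheduler picks a rate from the subband-average SNR, so a resource block (RB) whose actual SNR falls below that average cannot support the assigned rate. The SNR distribution and subband size are also illustrative assumptions:

```python
import math
import random

random.seed(1)

def shannon_rate(snr):
    """Idealized achievable rate (bits/s/Hz) at a given linear SNR."""
    return math.log2(1 + snr)

def subband_throughput(snrs_per_rb):
    """Throughput when one averaged CQI is fed back for the whole subband."""
    avg_snr = sum(snrs_per_rb) / len(snrs_per_rb)
    rate = shannon_rate(avg_snr)
    # An RB delivers the assigned rate only if its own SNR supports it.
    return sum(rate if snr >= avg_snr else 0.0 for snr in snrs_per_rb)

def full_csi_throughput(snrs_per_rb):
    """Ideal benchmark: per-RB CQI, with the rate matched to each RB."""
    return sum(shannon_rate(s) for s in snrs_per_rb)

snrs = [random.expovariate(1 / 4.0) for _ in range(8)]  # 8 RBs, mean SNR of 4
print(subband_throughput(snrs), full_csi_throughput(snrs))
```

With equal per-RB SNRs the two coincide; under frequency-selective fading the averaged feedback never does better, which is the kind of loss that the conservative CQI offset discussed in the abstract is meant to mitigate.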
Abstract:
This paper deals with the energy consumption and performance evaluation of air supply systems for a ventilated room involving high- and low-level supplies. The energy performance assessment is based on the airflow rate, which is related to the fan power consumption, while achieving the same environmental quality performance in each case. Four different ventilation systems are considered: wall displacement ventilation, confluent jets ventilation, impinging jet ventilation and a high-level mixing ventilation system. The ventilation performance of these systems is examined by requiring the same Air Distribution Index (ADI), which is used to evaluate the indoor environment produced in the room by the ventilation strategy, for the different cases. The widely used high-level supplies require much more fan power than low-level supplies to achieve the same value of ADI. In addition, the supply velocity, and hence the supply dynamic pressure, for a high-level supply is much larger than for low-level supplies, which further increases the power consumption of high-level supply systems. The paper considers these factors and attempts to provide guidelines on the difference in energy consumption between high- and low-level air supply systems. This will be useful information for designers; to the authors' knowledge, little information is available in the literature on this area of room air distribution. The results reveal that mixing ventilation requires the highest fan power and confluent jets ventilation the lowest in order to achieve nearly the same value of ADI.
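The fan-power comparison underlying the abstract can be sketched with the standard relation that fan power scales as airflow times pressure rise divided by fan efficiency. The flow rates, pressures and efficiency below are illustrative assumptions, not the paper's measured values:

```python
def fan_power(airflow_m3s, pressure_pa, efficiency=0.6):
    """Electrical fan power in watts: P = Q * dp / eta,
    with Q in m^3/s, dp in Pa, and eta the overall fan efficiency."""
    return airflow_m3s * pressure_pa / efficiency

# High-level mixing supply: assumed to need more airflow and a higher
# supply (dynamic) pressure to reach a given ADI.
p_mixing = fan_power(airflow_m3s=0.20, pressure_pa=120)

# Low-level displacement supply: assumed to reach the same ADI with less of both.
p_displacement = fan_power(airflow_m3s=0.12, pressure_pa=60)

print(p_mixing, p_displacement)
```

Because both the airflow and the supply dynamic pressure enter the product, a high-level system is penalized twice relative to a low-level one for equal ADI, which is the qualitative comparison the paper quantifies.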
Abstract:
There is increasing recognition that agricultural landscapes meet multiple societal needs and demands beyond the provision of economic and environmental goods and services. Accordingly, there have been significant calls for the inclusion of societal, amenity and cultural values in agri-environmental landscape indicators to assist policy makers in monitoring the wider impacts of land-based policies. However, capturing the amenity and cultural values that rural agrarian areas provide through such indicators presents significant challenges. The EU social awareness of landscape indicator represents a new class of generalized social indicator using a top-down methodology to capture the social dimensions of landscape without reference to the specific structural and cultural characteristics of individual landscapes. This paper reviews this indicator in the context of existing agri-environmental indicators and their differing design concepts. Using a stakeholder consultation approach in five case study regions, the potential and limitations of the indicator are evaluated, with a particular focus on its perceived meaning, utility and performance in the context of different user groups and at different geographical scales. This analysis supplements previous EU-wide assessments through regional-scale assessment of the limitations and potentialities of the indicator and of the need for further data collection. The evaluation finds that the perceived meaning of the indicator does not vary with scale but, in common with all mapped indicators, its usefulness to different user groups does change with the scale of presentation. The indicator is viewed as most useful when presented at the scale of governance at which end users operate. The relevance of the different sub-components of the indicator is also found to vary across regions.
Abstract:
Long-haul drivers work irregular schedules due to load delivery demands. In general, driving and sleeping occur at irregular times; consequently, partial sleep deprivation and/or circadian misalignment may emerge and result in sleepiness at the wheel. The aim of this study was therefore to verify changes in the postural control parameters of professional drivers after one night of work. Eight male truck drivers working at night, the night drivers (ND), and nine day drivers (DD) volunteered to participate in this study. The night drivers' postural stability was assessed immediately before and after a journey of approximately 430 km, using two identical force platforms at the departure and arrival sites. The DD group was measured before and after a day's work. An interaction effect of time of day and type of shift on the amplitude of mediolateral movements was observed in both conditions: eyes open (p < 0.01) and eyes closed (p < 0.001). Postural stability, measured by force platform, is affected by a night of work, suggesting an effect of circadian and homeostatic influences on postural control.