903 results for operational modal analysis
Abstract:
The reliability of printed circuit board assemblies under dynamic environments, such as those found onboard airplanes, ships and land vehicles, is receiving more attention. This research analyses the dynamic characteristics of a printed circuit board (PCB) supported by edge retainers and plug-in connectors. By modelling the wedge retainer and connector as simply supported boundary conditions with appropriate rotational spring stiffnesses along their respective edges, with the aid of finite element codes, natural frequencies are obtained for the board that agree closely with the experimental natural frequencies. For a PCB supported by two opposite wedge retainers and a plug-in connector, with its remaining edge free of any restraint, it is found that these real supports behave somewhere between the simply supported and clamped boundary conditions, providing a percentage fixity 39.5% above the classical simply supported case. By using an eigensensitivity method, the rotational stiffnesses representing the boundary supports of the PCB can be updated effectively, and the updated model represents the dynamics of the PCB accurately. The result shows that the percentage error in the fundamental frequency of the PCB finite element model is substantially reduced from 22.3% to 1.3%. The procedure demonstrates the effectiveness of using only the vibration test frequencies as reference data when the mode shapes of the original untuned model are almost identical to the referenced modes/experimental data. When only modal frequencies are used in model improvement, the analysis is greatly simplified. Furthermore, the time taken to obtain the experimental data is substantially reduced, as the experimental mode shapes are not required.

In addition, this thesis advocates a relatively simple method for determining the support locations that maximise the fundamental frequency of vibrating structures. The technique is simple and does not require any optimisation or sequential search algorithm. The key to the procedure is to place the necessary supports so as to eliminate the lower modes of the original configuration. This is accomplished by introducing point supports along the nodal lines of the highest possible mode of the original configuration, so that all the other lower modes are eliminated by the new or extra supports. The thesis also proposes inspecting the average driving point residues along the nodal lines of vibrating plates to find the optimal locations of the supports. Numerical examples are provided to demonstrate the validity of the approach. When applied to the PCB supported on three sides by two wedge retainers and a connector, it is found that the single point constraint yielding the maximum fundamental frequency is located at the mid-point of the nodal line, namely node 39. This point support has the effect of increasing the structure's fundamental frequency from 68.4 Hz to 146.9 Hz, roughly 115% higher.
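The frequency-only updating step described above can be illustrated with a minimal sketch: a two-degree-of-freedom spring-mass model (not the thesis's PCB finite element model) in which an uncertain boundary stiffness k_b is tuned with the classical eigenvalue-sensitivity relation dλ/dk_b = φᵀ(∂K/∂k_b)φ for mass-normalised modes, so that the first natural frequency matches a "measured" value. All numbers are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the thesis's PCB model): a two-DOF spring-mass system
# whose boundary stiffness k_b is updated so that the first natural frequency
# matches a "measured" value, using the eigenvalue sensitivity
# d(lambda)/d(k_b) = phi^T (dK/dk_b) phi for mass-normalised modes.

M = np.eye(2)                                  # unit masses, so K alone is eigen-analysed
k1 = 500.0                                     # fixed internal stiffness (assumed)
dK_dkb = np.array([[1.0, 0.0],
                   [0.0, 0.0]])                # K depends linearly on k_b at DOF 1

def stiffness(k_b):
    return np.array([[k1 + k_b, -k1],
                     [-k1,       k1]])

def first_mode(k_b):
    lam, vec = np.linalg.eigh(stiffness(k_b))  # valid because M is the identity
    phi = vec[:, 0]
    phi /= np.sqrt(phi @ M @ phi)              # mass-normalise the first mode
    return lam[0], phi

lam_target = (2 * np.pi * 3.0) ** 2            # "measured" first frequency of 3 Hz (assumed)
k_b = 50.0                                     # initial (untuned) boundary stiffness
for _ in range(20):
    lam, phi = first_mode(k_b)
    sens = phi @ dK_dkb @ phi                  # eigenvalue sensitivity w.r.t. k_b
    k_b += (lam_target - lam) / sens           # frequency-only Newton update

f1 = np.sqrt(first_mode(k_b)[0]) / (2 * np.pi)
print(f"updated k_b = {k_b:.1f}, first natural frequency = {f1:.2f} Hz")
```

Because only the measured frequency enters the update, no experimental mode shape is needed, which is the simplification the thesis exploits.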
Abstract:
The trend in modal extraction algorithms is to use all the available frequency response function data to obtain a global estimate of the natural frequencies, damping ratios and mode shapes. Improvements in transducer and signal-processing technology allow the simultaneous measurement of many hundreds of channels of response data. The quantity of data available and the complexity of the extraction algorithms place considerable demands on the available computing power and require a powerful computer or dedicated workstation to perform satisfactorily. An alternative to waiting for faster sequential processors is to implement the algorithm in parallel, for example on a network of Transputers. Parallel architectures are a cost-effective means of increasing computational power, and a larger number of response channels would simply require more processors. This thesis considers how two typical modal extraction algorithms, the Rational Fraction Polynomial method and the Ibrahim Time Domain method, may be implemented on a network of Transputers. The Rational Fraction Polynomial method is a well-known and robust frequency-domain 'curve fitting' algorithm. The Ibrahim Time Domain method is an efficient algorithm that 'curve fits' in the time domain. This thesis reviews the algorithms, considers the problems involved in a parallel implementation, and shows how they were implemented on a real Transputer network.
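The time-domain "curve fit" behind the Ibrahim Time Domain method can be sketched in a few lines for a single noise-free channel (a simplified illustration, not the thesis's parallel Transputer implementation): free-decay samples are arranged into time-shifted matrices, a system matrix is fitted by least squares, and its eigenvalues give the poles, hence frequency and damping.

```python
import numpy as np

# Single-channel, noise-free illustration of the time-domain fitting idea:
# build shifted response matrices, fit the one-step transition matrix by
# least squares, and read frequency and damping from its eigenvalues.

dt = 0.002                                     # sample interval (assumed)
t = np.arange(0.0, 2.0, dt)
fn, zeta = 12.0, 0.02                          # "true" modal parameters (assumed)
wn = 2.0 * np.pi * fn
wd = wn * np.sqrt(1.0 - zeta**2)
y = np.exp(-zeta * wn * t) * np.cos(wd * t)    # simulated free-decay response

order = 2                                      # two states per mode, one mode here
rows = np.array([y[i:len(y) - order + i] for i in range(order)])
X0 = rows[:, :-1]                              # responses at time step k
X1 = rows[:, 1:]                               # the same block shifted by one step
A = X1 @ np.linalg.pinv(X0)                    # least-squares system matrix
z = np.linalg.eigvals(A)                       # discrete-time poles
s = np.log(z.astype(complex)) / dt             # continuous-time poles
freq = np.abs(s) / (2.0 * np.pi)               # identified natural frequencies (Hz)
damp = -s.real / np.abs(s)                     # identified damping ratios
print(np.round(freq, 2), np.round(damp, 3))    # expect ~[12, 12] Hz and ~[0.02, 0.02]
```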
Abstract:
This study presents quantitative evidence from a number of simulation experiments on the accuracy of the productivity growth estimates derived from growth accounting (GA) and frontier-based methods (namely data envelopment analysis-, corrected ordinary least squares-, and stochastic frontier analysis-based Malmquist indices) under various conditions. These include the presence of technical inefficiency, measurement error, misspecification of the production function (for the GA and parametric approaches) and increased input and price volatility from one period to the next. The study finds that the frontier-based methods usually outperform GA, but the overall performance varies by experiment. Parametric approaches generally perform best when there is no functional form misspecification, but their accuracy diminishes greatly otherwise. The results also show that the deterministic approaches perform adequately even under conditions of (modest) measurement error; when measurement error becomes larger, the accuracy of all approaches (including the stochastic ones) deteriorates rapidly, to the point that their estimates could be considered unreliable for policy purposes.
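For reference, the frontier-based estimates compared above are variants of the output-oriented Malmquist productivity index, which in the usual distance-function notation D^t (a textbook form, not a formula quoted from this study) is

```latex
M_o\!\left(x^{t},y^{t},x^{t+1},y^{t+1}\right)=
\left[
\frac{D^{t}\!\left(x^{t+1},y^{t+1}\right)}{D^{t}\!\left(x^{t},y^{t}\right)}
\cdot
\frac{D^{t+1}\!\left(x^{t+1},y^{t+1}\right)}{D^{t+1}\!\left(x^{t},y^{t}\right)}
\right]^{1/2},
```

with values above one indicating productivity growth between periods t and t+1; the methods differ in how the distance functions are estimated (DEA, COLS or SFA frontiers).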
Abstract:
In a data envelopment analysis (DEA) model, some of the weights used to compute the efficiency of a unit can have zero or negligible value despite the importance of the corresponding input or output. This paper offers an approach to preventing inputs and outputs from being ignored in the DEA assessment in a multiple-input, multiple-output VRS environment, building on an approach introduced in Allen and Thanassoulis (2004) for the single-input, multiple-output CRS case. The proposed method is based on the idea of introducing unobserved DMUs, created by adjusting the input and output levels of certain observed relatively efficient DMUs in a manner which reflects a combination of technical information and the decision maker's value judgements. In contrast to many alternative techniques used to constrain weights and/or improve envelopment in DEA, this approach allows one to impose local information on production trade-offs which are in line with the general VRS technology. The suggested procedure is illustrated using real data. © 2011 Elsevier B.V. All rights reserved.
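As background to the VRS setting discussed above, the following is a minimal sketch of the standard input-oriented VRS (BCC) envelopment model using scipy's linear programming routine and made-up data; it illustrates plain VRS DEA only, not the paper's unobserved-DMU procedure.

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented VRS (BCC) envelopment model:
#   min theta  s.t.  X lam <= theta x0,  Y lam >= y0,  sum(lam) = 1,  lam >= 0.
# X holds inputs (rows) by DMUs (columns), Y holds outputs; data are illustrative.

X = np.array([[2.0, 3.0, 6.0, 9.0],            # input 1 for four DMUs
              [3.0, 2.0, 7.0, 8.0]])           # input 2
Y = np.array([[1.0, 1.0, 1.5, 2.5]])           # single output

def vrs_efficiency(j0):
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                         # minimise theta
    A_ub = np.vstack([np.c_[-X[:, [j0]], X],            # sum_j lam_j x_ij <= theta x_i0
                      np.c_[np.zeros((s, 1)), -Y]])     # sum_j lam_j y_rj >= y_r0
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)        # VRS convexity: sum_j lam_j = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0.0, None)] * n)
    return res.fun

for j in range(X.shape[1]):
    print(f"DMU {j}: theta* = {vrs_efficiency(j):.3f}")
```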
Abstract:
As student numbers in higher education in the UK have expanded during recent years, it has become increasingly important to understand the sector's cost structure. This study applies Data Envelopment Analysis (DEA) to higher education institutions in England to assess their cost structure, efficiency and productivity. The paper complements an earlier study that used parametric methods to analyse the same panel data. Interestingly, DEA provides estimates of subject-specific unit costs that are in the same ballpark as those provided by the parametric methods. The paper then extends the previous analysis and finds that further increases in student numbers of the order of 20–27% are feasible by exploiting operating and scale efficiency gains and by adjusting the student mix. Finally, the paper uses a Malmquist index approach to assess productivity change in UK higher education. The results reveal that, for a majority of institutions, productivity has actually decreased during the study period.
Abstract:
The main purpose of this research is to develop and deploy an analytical framework for measuring the environmental performance of manufacturing supply chains. The work's theoretical basis combines and reconciles three major areas: supply chain management, environmental management and performance measurement. Researchers have suggested many empirical criteria for green supply chain (GSC) performance measurement and proposed both qualitative and quantitative frameworks. However, these are mainly operational in nature and specific to the focal company. This research develops an innovative GSC performance measurement framework by integrating supply chain processes (supplier relationship management, internal supply chain management and customer relationship management) with organisational decision levels (both strategic and operational). Environmental planning, environmental auditing, management commitment, environmental performance, economic performance and operational performance are the key constructs. The proposed framework is then applied to three selected manufacturing organisations in the UK. Their GSC performance is measured and benchmarked using the analytic hierarchy process (AHP), a multiple-attribute decision-making technique. The AHP-based framework offers an effective way to measure and benchmark organisations' GSC performance. This study has both theoretical and practical implications: theoretically it contributes holistic constructs for designing a GSC and managing it for sustainability; practically it helps industry practitioners to measure and improve the environmental performance of their supply chains. © 2013 Copyright Taylor and Francis Group, LLC. Corrigendum (DOI 10.1080/09537287.2012.751186): in the article 'Green supply chain performance measurement using the analytic hierarchy process: a comparative analysis of manufacturing organisations' by Prasanta Kumar Dey and Walid Cheffi (Production Planning & Control, DOI 10.1080/09537287.2012.666859), a third author, Breno Nunes, has been added who was not included in the paper as it originally appeared.
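The AHP weighting step used in such benchmarking can be sketched as follows: priority weights are taken from the principal eigenvector of a pairwise comparison matrix and checked for consistency. The 3x3 matrix below is illustrative only, not data from the three case organisations.

```python
import numpy as np

# AHP priority derivation: principal eigenvector of a reciprocal pairwise
# comparison matrix gives the criteria weights; the consistency ratio (CR)
# should normally be below 0.1 for the judgements to be acceptable.

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])                # reciprocal pairwise comparisons (illustrative)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                    # index of the principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                                # priority weights summing to one

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)           # consistency index
ri = 0.58                                      # Saaty's random index for n = 3
print("weights:", np.round(w, 3), " CR =", round(ci / ri, 3))
```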
Abstract:
The aim of this paper is to illustrate the measurement of productive efficiency using the Nerlovian indicator and a metafrontier with data envelopment analysis techniques. Further, we illustrate how the profit efficiency of firms operating in different regions can be aggregated into one overarching frontier. Sugarcane production in three regions of Kenya is used to illustrate these concepts. The results show that the sources of inefficiency in all regions are both technical and allocative, but allocative efficiency contributes more to the overall Nerlovian (in)efficiency indicator. © 2011 Springer-Verlag.
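For reference, the Nerlovian profit inefficiency indicator in the directional distance function framework of Chambers, Chung and Färe takes the standard form below (notation assumed here, not quoted from the paper): the normalised gap between maximal and observed profit splits into a technical part (the directional distance function) and an allocative part, which is the decomposition reported above.

```latex
\frac{\Pi(p,w) - \left(p^{\top}y - w^{\top}x\right)}{p^{\top}g_{y} + w^{\top}g_{x}}
\;=\;
\vec{D}_{T}\!\left(x,y;\,g_{x},g_{y}\right) \;+\; AI,
```

where Π(p, w) is maximal profit at output prices p and input prices w, (g_x, g_y) is the directional vector, and AI ≥ 0 is allocative inefficiency.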
Abstract:
Data envelopment analysis (DEA) is a methodology for measuring the relative efficiencies of a set of decision making units (DMUs) that use multiple inputs to produce multiple outputs. Crisp input and output data are fundamentally indispensable in conventional DEA. However, the observed values of the input and output data in real-world problems are sometimes imprecise or vague. Many researchers have proposed various fuzzy methods for dealing with the imprecise and ambiguous data in DEA. In this study, we provide a taxonomy and review of the fuzzy DEA methods. We present a classification scheme with four primary categories, namely, the tolerance approach, the α-level based approach, the fuzzy ranking approach and the possibility approach. We discuss each classification scheme and group the fuzzy DEA papers published in the literature over the past 20 years. To the best of our knowledge, this paper appears to be the only review and complete source of references on fuzzy DEA. © 2011 Elsevier B.V. All rights reserved.
Abstract:
Integer-valued data envelopment analysis (DEA) with alternative returns-to-scale technologies has recently been introduced and developed by Kuosmanen and Kazemi Matin. The proportionality assumption underlying their "natural augmentability" axiom in the constant and non-decreasing returns-to-scale technologies makes it possible to obtain feasible decision-making units (DMUs) of arbitrarily large size. In many real-world applications such production plans cannot be achieved, since some of the input and output variables are bounded above. In this paper, we extend the axiomatic foundation of integer-valued DEA models to include bounded output variables. Several model variants are obtained by introducing a new axiom of "boundedness" over the selected output variables. A mixed integer linear programming (MILP) formulation is also introduced for computing efficiency scores in the associated production set. © 2011 The Authors. International Transactions in Operational Research © 2011 International Federation of Operational Research Societies.
Abstract:
Maize is the main staple food for most Kenyan households, and it is grown by smallholder as well as large-scale farmers. In the sugarcane-growing areas of Western Kenya, farmers face pressure over whether to grow food crops or sugarcane, the main cash crop. Further, with small and diminishing land sizes, the question of productivity and efficiency, both for cash and food crops, is of great importance. This paper therefore uses a two-step estimation technique (a DEA meta-frontier and Tobit regression) to highlight the inefficiencies in maize cultivation in Western Kenya and their causes.
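In such a meta-frontier setting, the first step typically compares group-specific efficiency with efficiency measured against the pooled meta-frontier through the metatechnology ratio, and the second step regresses the resulting scores on explanatory variables with a Tobit model. A standard definition of the ratio (notation assumed, not quoted from the paper) is

```latex
MTR^{k}(x,y) \;=\; \frac{TE^{\text{meta}}(x,y)}{TE^{k}(x,y)}, \qquad 0 < MTR^{k}(x,y) \le 1,
```

where TE^meta and TE^k are technical efficiency scores against the meta-frontier and the group-k frontier respectively; values close to one indicate that the group frontier lies close to the meta-frontier.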
Abstract:
Although considerable effort has been invested in the measurement of banking efficiency using Data Envelopment Analysis, hardly any empirical research has focused on the comparison of banks in the Gulf States. This paper employs data on the Gulf States' banking sector for the period 2000-2002 to develop efficiency scores and rankings for both Islamic and conventional banks. We then investigate productivity change using the Malmquist index and decompose it into technical change and efficiency change. Further, hypothesis testing and measures of statistical precision are applied in the context of nonparametric efficiency and productivity measurement. Specifically, cross-country analyses of efficiency and comparisons of efficiency between Islamic and conventional banks are carried out using the Mann-Whitney test.
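For reference, the decomposition referred to above is usually written as the product of an efficiency-change term and a technical-change term (a standard form, not a formula quoted from this paper), using the same distance-function notation as the Malmquist index given earlier in this listing:

```latex
M_o \;=\;
\underbrace{\frac{D^{t+1}\!\left(x^{t+1},y^{t+1}\right)}{D^{t}\!\left(x^{t},y^{t}\right)}}_{\text{efficiency change}}
\;\times\;
\underbrace{\left[
\frac{D^{t}\!\left(x^{t+1},y^{t+1}\right)}{D^{t+1}\!\left(x^{t+1},y^{t+1}\right)}
\cdot
\frac{D^{t}\!\left(x^{t},y^{t}\right)}{D^{t+1}\!\left(x^{t},y^{t}\right)}
\right]^{1/2}}_{\text{technical change}}.
```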
Abstract:
In multicriteria decision problems, many values must be assigned, such as the importance of the different criteria and the values of the alternatives with respect to subjective criteria. Since these assignments are approximate, it is very important to analyze the sensitivity of the results when small modifications of the assignments are made. When solving a multicriteria decision problem, it is desirable to choose a decision function that leads to a solution that is as stable as possible. We propose here a method based on genetic programming that produces better decision functions than the commonly used ones. The theoretical expectations are validated by case studies. © 2003 Elsevier B.V. All rights reserved.
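The stability question raised above can be made concrete with a small sketch (illustrative only, using a plain weighted-sum decision function rather than the genetically programmed functions studied in the paper): the criteria weights are perturbed slightly many times, and the share of perturbations that change the top-ranked alternative indicates how unstable the decision is.

```python
import numpy as np

# Sensitivity check for a weighted-sum decision function: perturb the
# criteria weights and count how often the top-ranked alternative changes.
# All data below are made up for illustration.

rng = np.random.default_rng(0)
scores = np.array([[0.8, 0.6, 0.7],            # alternative A on three criteria
                   [0.7, 0.8, 0.6],            # alternative B
                   [0.6, 0.7, 0.9]])           # alternative C
w = np.array([0.5, 0.3, 0.2])                  # nominal criteria weights

base_winner = np.argmax(scores @ w)
changes = 0
for _ in range(1000):
    w_pert = np.clip(w + rng.normal(0.0, 0.05, size=3), 0.0, None)
    w_pert /= w_pert.sum()                     # keep the weights normalised
    changes += int(np.argmax(scores @ w_pert) != base_winner)
print(f"top alternative changed in {changes / 10:.1f}% of perturbations")
```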
Abstract:
Radio Frequency Identification (RFID) has been identified as a crucial technology for the modern 21st-century knowledge-based economy. Some businesses have realised the benefits of RFID adoption through improvements in operational efficiency, additional cost savings, and opportunities for higher revenues. RFID research in warehousing operations has been less prominent than in other application domains. To investigate the impact RFID technology has had in warehousing, a comprehensive analysis has been conducted of research findings from articles available through leading scientific article databases. Articles from 1995 to 2010 have been reviewed and analysed with respect to warehouse operations, RFID application domains, benefits achieved and obstacles encountered. Four discussion topics are presented covering RFID in warehousing, focusing on its applications, perceived benefits, obstacles to its adoption and future trends. The aim is to elucidate the current state of RFID in the warehouse and to provide insights for researchers to establish new research agendas and for practitioners to consider and assess the adoption of RFID in warehousing functions. © 2013 Elsevier B.V.
Abstract:
Data envelopment analysis (DEA) has gained a wide range of applications in measuring the comparative efficiency of decision making units (DMUs) with multiple incommensurate inputs and outputs. The standard DEA method requires that the status of all input and output variables be known exactly. However, in many real applications, the status of some measures is not clearly known as inputs or outputs. These measures are referred to as flexible measures. This paper proposes a flexible slacks-based measure (FSBM) of efficiency in which each flexible measure can play the role of an input for some DMUs and of an output for others, so as to maximise the relative efficiency of the DMU under evaluation. Further, we show that when an operational unit is efficient in a specific flexible measure, this measure can play both input and output roles for this unit. In this case, the optimal input/output designation for the flexible measure is the one that optimises the efficiency of the artificial average unit. An application assessing UK higher education institutions is used to show the applicability of the proposed approach. © 2013 Elsevier Ltd. All rights reserved.
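For reference, the (non-flexible) slacks-based measure of efficiency that the FSBM builds on is defined by the following fractional program in standard notation (assumed here, not quoted from the paper), with X and Y the input and output matrices and (x_0, y_0) the DMU under evaluation:

```latex
\rho^{*} \;=\; \min_{\lambda,\, s^{-},\, s^{+}}\;
\frac{1 - \tfrac{1}{m}\sum_{i=1}^{m} s_{i}^{-}/x_{i0}}
     {1 + \tfrac{1}{s}\sum_{r=1}^{s} s_{r}^{+}/y_{r0}}
\quad \text{s.t.} \quad
x_{0} = X\lambda + s^{-}, \;\;
y_{0} = Y\lambda - s^{+}, \;\;
\lambda \ge 0,\; s^{-} \ge 0,\; s^{+} \ge 0,
```

where s^- and s^+ are the input and output slacks; ρ* = 1 exactly when the DMU is efficient with all slacks zero. The flexible variant additionally chooses, for each flexible measure, whether it enters the numerator (as an input) or the denominator (as an output).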