961 results for Scheduled Commercial Banking system
Abstract:
To investigate the technical feasibility of a novel cooling system for commercial greenhouses, knowledge of the state of the art in greenhouse cooling is required. An extensive literature review was carried out that highlighted the physical processes of greenhouse cooling and showed the limitations of the conventional technology. The proposed cooling system utilises liquid desiccant technology; hence knowledge of liquid desiccant cooling is also a prerequisite before designing such a system. Extensive literature reviews on solar liquid desiccant regenerators and desiccators, which are essential parts of liquid desiccant cooling systems, were carried out to identify their advantages and disadvantages. In response to the findings, a regenerator and a desiccator were designed and constructed in the laboratory. An important factor in liquid desiccant cooling is the choice of the liquid desiccant itself. The hygroscopicity of the liquid desiccant affects the performance of the system. Bitterns, which are magnesium-rich brines derived from seawater, are proposed as an alternative liquid desiccant for cooling greenhouses. A thorough experimental and theoretical study was carried out in order to determine the properties of concentrated bitterns. It was concluded that their properties resemble those of pure magnesium chloride solutions. Therefore, magnesium chloride solution was used in laboratory experiments to assess the performance of the regenerator and the desiccator. To predict the whole-system performance, the physical processes of heat and mass transfer were modelled using gPROMS® advanced process modelling software. The model was validated against the experimental results. Consequently, it was used to model a commercial-scale greenhouse in several hot coastal areas in the tropics and sub-tropics.
These case studies show that the system, when compared to evaporative cooling, achieves a 3 °C to 5.6 °C temperature drop inside the greenhouse in hot and humid places (RH>70%) and a 2 °C to 4 °C temperature drop in hot and dry places (50%
Abstract:
Issues of wear and tribology are increasingly important in computer hard drives as slider flying heights become lower and disk protective coatings thinner to minimise spacing loss and allow higher areal density. Friction, stiction and wear between the slider and disk in a hard drive were studied using Accelerated Friction Test (AFT) apparatus. Contact Start Stop (CSS) and constant speed drag tests were performed using commercial rigid disks and two different air bearing slider types. Friction and stiction were captured during testing by a set of strain gauges. System parameters were varied to investigate their effect on tribology at the head/disk interface. The chosen parameters were disk spinning velocity, slider fly height, temperature, humidity and intercycle pause. The effect of different disk texturing methods was also studied. Models were proposed to explain the influence of these parameters on tribology. Atomic Force Microscopy (AFM) and Scanning Electron Microscopy (SEM) were used to study head and disk topography at various test stages and to provide physical parameters to verify the models. X-ray Photoelectron Spectroscopy (XPS) was employed to identify surface composition and determine whether any chemical changes had occurred as a result of testing. The parameters most likely to influence the interface were identified for both CSS and drag testing. Neural Network modelling was used to substantiate the results. Topographical AFM scans of disk and slider were exported numerically to file and explored extensively. Techniques were developed which improved line and area analysis. A method for detecting surface contacts was also deduced; its results supported and explained the observed AFT behaviour. Finally, surfaces were computer generated to simulate real disk scans, allowing contact analysis of many types of surface to be performed. Conclusions were drawn about which disk characteristics most affected contacts and hence friction, stiction and wear.
Abstract:
We describe the results of in-vivo trials of a portable fiber Bragg grating-based temperature profile monitoring system. The probe incorporates five Bragg gratings along a single fiber and prevents the gratings from being strained. Illumination is provided by a superluminescent diode, and a miniature CCD-based spectrometer is used for demultiplexing. The CCD signal is read into a portable computer through a small A/D interface; the computer then calculates the positions of the center wavelengths of the Bragg gratings, providing a resolution of 0.2 °C. Tests were carried out on rabbits undergoing hyperthermia treatment of the kidney and liver via inductive heating of metallic implants, and comparison was made with a commercial Fluoroptic thermometry system.
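The conversion from Bragg center wavelength to temperature can be sketched as follows. This is a minimal illustration, not the trial's calibration: the ~10 pm/°C sensitivity is a typical value for silica fiber gratings near 1550 nm, and the function name is ours.

```python
# Hypothetical illustration: converting a measured Bragg wavelength shift
# to a temperature change. The 10 pm/°C sensitivity is an assumed typical
# value for silica fibre near 1550 nm, not a figure from the trial.

K_T_PM_PER_C = 10.0  # assumed temperature sensitivity in pm/°C

def shift_to_temperature(delta_lambda_pm: float) -> float:
    """Temperature change (°C) implied by a Bragg wavelength shift (pm)."""
    return delta_lambda_pm / K_T_PM_PER_C

# Under this assumption, the reported 0.2 °C resolution corresponds to
# resolving a wavelength shift of about 2 pm:
print(shift_to_temperature(2.0))  # → 0.2
```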
Abstract:
Faced with a future of rising energy costs, industry needs to manage energy more carefully in order to meet its economic objectives. A problem besetting the growth of energy conservation in the UK is that a large proportion of energy consumption is used in a low-intensity manner in organisations where responsibility for energy efficiency is spread over a large number of personnel who each see only small energy costs. In response to this problem in the non-energy-intensive industrial sector, an application of an energy management technique known as monitoring and targeting (M & T) has been installed at the Whetstone site of the General Electric Company Limited in an attempt to prove it as a means of motivating line management and personnel to save energy. The energy saving objective for which the M & T was devised is very specific. During early energy conservation work at the site there had been a change from continuous to intermittent heating, but the maintenance of this strategy was receiving a poor level of commitment from line management, and performance was some 5%-10% less than expected. The M & T is therefore concerned with heat for space heating, for which a heat metering system was required. Metering of the site's high-pressure hot water system posed technical difficulties, and expenditure was also limited. This led to an in-house design being installed for a price lower than the commercial equivalent. The work to achieve an operational heat metering system spanned three years, which meant that energy saving results from the scheme were not observed during the study. If successful, the replication potential lies in the larger non-energy-intensive sites, from which some 30 PT savings could be expected in the UK.
Abstract:
Modern injection-moulding machinery, which produces several pairs of plastic footwear at a time, brought increased production planning problems to a factory. The demand for its footwear is seasonal, but the company's manning policy keeps a fairly constant production level, thus determining the aggregate stock. Production planning must therefore be done within the limitations of a specified total stock. The thesis proposes a new production planning system with four sub-systems. These are sales forecasting, resource planning, and two levels of production scheduling: (a) aggregate decisions concerning the 'manufacturing group' (group of products) to be produced in each machine each week, and (b) detailed decisions concerning the products within a manufacturing group to be scheduled into each mould-place. The detailed scheduling is least dependent on improvements elsewhere, so the sub-systems were tackled in reverse order. The thesis concentrates on the production scheduling sub-systems, which will provide most of the benefits. The aggregate scheduling solution depends principally on the aggregate stocks of each manufacturing group and their division into 'safety stocks' (to prevent shortages) and 'free stocks' (to permit batch production). The problem is too complex for exact solution, but a good heuristic solution, which has yet to be implemented, is provided by graphically minimising immediate plus expected future costs. The detailed problem splits into determining the optimal safety stocks and batch quantities, given the appropriate aggregate stocks. It is found that the optimal safety stocks are proportional to the demand. The ideal batch quantities are based on a modified formula for the Economic Batch Quantity, and the product schedule is created week by week using a priority system which schedules to minimise expected future costs. This algorithm performs almost optimally.
The detailed scheduling solution was implemented and achieved the target savings for the whole project in favourable circumstances. Future plans include full implementation.
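The quantities the detailed scheduling works with can be sketched as follows. The thesis uses a *modified* EBQ formula whose details are not reproduced in the abstract; this shows only the classical baseline form, plus the abstract's finding that optimal safety stocks are proportional to demand. All parameter values and function names are illustrative assumptions.

```python
import math

# Sketch of the baseline quantities behind the detailed scheduling
# sub-system. The thesis modifies the EBQ formula in ways not stated in
# the abstract; only the classical form is shown here, with made-up
# parameter values.

def economic_batch_quantity(demand_rate: float, setup_cost: float,
                            holding_cost: float) -> float:
    """Classical EBQ: sqrt(2 * D * S / H)."""
    return math.sqrt(2 * demand_rate * setup_cost / holding_cost)

def safety_stock(demand_rate: float, k: float) -> float:
    """The abstract reports optimal safety stocks proportional to demand;
    k is a hypothetical proportionality constant."""
    return k * demand_rate

print(economic_batch_quantity(demand_rate=500, setup_cost=40, holding_cost=2))
```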
Abstract:
Off-highway motive plant equipment is costly in capital outlay and maintenance. To reduce these overheads and increase site safety and work rate, a technique for assessing and limiting the velocity of such equipment is required. Owing to the extreme environmental conditions met on such sites, conventional velocity measurement techniques are inappropriate. Ogden Electronics Limited were formed specifically to manufacture a motive plant safety system incorporating a speed sensor and sanction unit; to date, the only such commercial unit available. However, problems plague the reliability, accuracy and mass production of this unit. This project assesses the company's existing product and, in conjunction with an appreciation of the company's history and structure, concludes that this unit is unsuited to its intended application. Means of improving the measurement accuracy and longevity of this unit, commensurate with the company's limited resources and experience, are proposed, both for immediate retrofit and for longer-term use. This information is presented in the form of a number of internal reports for the company. The off-highway environment is examined and, in conjunction with an evaluation of means of obtaining a returned signal, comparisons of processing techniques, and on-site gathering of previously unavailable data, preliminary designs for an alternative product are drafted. Theoretical aspects are covered by a literature review of ground-pointing radar, vehicular radar, and velocity measuring systems. This review establishes and collates the body of knowledge in areas previously considered unrelated. Based upon this work, a new design is proposed which is suitable for incorporation into the existing company product range. Following production engineering of the design, five units were constructed, tested and evaluated on-site.
After extended field trials, this design has shown itself to possess greater accuracy, reliability and versatility than the existing sensor, at a lower unit cost.
Abstract:
In recent years, freshwater fish farmers have come under increasing pressure from the Water Authorities to control the quality of their farm effluents. This project aimed to investigate methods of treating aquacultural effluent in an efficient and cost-effective manner, and to incorporate the knowledge gained into an Expert System which could then be used in an advice service to farmers. From the results of this research it was established that sedimentation and the use of low pollution diets are the only cost-effective methods of controlling the quality of fish farm effluents. Settlement has been extensively investigated and it was found that the removal of suspended solids in a settlement pond is only likely to be effective if the inlet solids concentration is in excess of 8 mg/litre. The probability of good settlement can be enhanced by keeping the ratio of length/retention time (a form of mean fluid velocity) below 4.0 metres/minute. The removal of BOD requires inlet solids concentrations in excess of 20 mg/litre to be effective, and this is seldom attained on commercial fish farms. Settlement, generally, does not remove appreciable quantities of ammonia from effluents, but algae can absorb ammonia by nutrient uptake under certain conditions. The use of low pollution, high performance diets gives pollutant yields which are low when compared with published figures obtained by many previous workers. Two Expert Systems were constructed, both of which diagnose possible causes of poor effluent quality on fish farms and suggest solutions. The first system uses knowledge gained from a literature review and the second employs the knowledge obtained from this project's experimental work. Consent details for over 100 fish farms were obtained from the public registers kept by the Water Authorities. Large variations in policy from one Authority to the next were found. These data have been compiled in a computer file for ease of comparison.
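The settlement thresholds above lend themselves to a simple rule check of the kind an advisory expert system might encode. In this sketch the function names and structure are ours; only the numeric limits (8 mg/l and 20 mg/l inlet solids, 4.0 m/min length/retention-time ratio) come from the abstract.

```python
# Rule-of-thumb checks encoding the settlement thresholds reported in the
# study. Function names are illustrative; the limits are the abstract's.

def settlement_likely_effective(inlet_solids_mg_l: float,
                                pond_length_m: float,
                                retention_time_min: float) -> bool:
    """Suspended-solids removal needs >8 mg/l inlet solids and a mean
    fluid velocity (length / retention time) below 4.0 m/min."""
    velocity_m_per_min = pond_length_m / retention_time_min
    return inlet_solids_mg_l > 8.0 and velocity_m_per_min < 4.0

def bod_removal_effective(inlet_solids_mg_l: float) -> bool:
    """BOD removal requires inlet solids above 20 mg/l."""
    return inlet_solids_mg_l > 20.0

# 10 mg/l inlet solids in a 30 m pond with 10 min retention (3 m/min):
print(settlement_likely_effective(10.0, 30.0, 10.0))  # → True
print(bod_removal_effective(10.0))                    # → False
```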
Abstract:
Initially this thesis examines the various mechanisms by which technology is acquired within anodizing plants. In so doing, the history of the evolution of anodizing technology is recorded, with particular reference to the growth of major markets and to the contribution of the marketing efforts of the aluminium industry. The business economics of various types of anodizing plants are analyzed. Consideration is also given to the impact of developments in anodizing technology on production economics and market growth. The economic costs associated with work rejected for process defects are considered. Recent changes in the industry have created conditions whereby information technology has a potentially important role to play in retaining existing knowledge. One such contribution is exemplified by the expert system which has been developed for the identification of anodizing process defects. Instead of using a 'rule-based' expert system, a commercial neural network program has been adapted for the task. The advantage of neural networks over 'rule-based' systems is that they are better suited to production problems, since the actual conditions prevailing when the defect was produced are often not known with certainty. In using the expert system, the user first identifies the process stage at which the defect probably occurred and is then directed to a file enabling the actual defects to be identified. After making this identification, the user can consult a database which gives a more detailed description of the defect, advises on remedial action and provides a bibliography of papers relating to the defect. The database uses a proprietary hypertext program, which also provides rapid cross-referencing to similar types of defect. Additionally, a graphics file can be accessed which (where appropriate) will display a graphic of the defect on screen.
A total of 117 defects are included, together with 221 literature references, supplemented by 48 cross-reference hyperlinks. The main text of the thesis contains 179 literature references. (DX186565)
Abstract:
Financial institutions are an integral part of any modern economy. In the 1970s and 1980s, Gulf Cooperation Council (GCC) countries made significant progress in financial deepening and in building a modern financial infrastructure. This study aims to evaluate the performance (efficiency) of financial institutions (the banking sector) in GCC countries. Since the selected variables include negative data for some banks and positive data for others, and the available evaluation methods cannot handle this case, we developed a Semi-Oriented Radial Model (SORM) to perform this evaluation. Furthermore, since the SORM evaluation result provides only limited information for any decision maker (bankers, investors, etc.), we propose a second-stage analysis using the classification and regression (C&R) method to obtain further results, combining the SORM results with other environmental data (financial, economic and political) to set rules for the efficient banks; the results will thus be useful to bankers seeking to improve their banks' performance and to investors seeking to maximise their returns. There are two main approaches to evaluating the performance of Decision Making Units (DMUs), and under each there are different methods with different assumptions. The parametric approach is based on econometric regression theory, while the nonparametric approach is based on mathematical linear programming theory. Under the nonparametric approach there are two methods: Data Envelopment Analysis (DEA) and Free Disposal Hull (FDH). Under the parametric approach there are three methods: Stochastic Frontier Analysis (SFA), Thick Frontier Analysis (TFA) and Distribution-Free Analysis (DFA). The result shows that DEA and SFA are the most applicable methods in the banking sector, but DEA seems to be the more popular among researchers.
However, DEA, like SFA, still faces many challenges. One of these is how to deal with negative data, since DEA requires that all input and output values be non-negative, while in many applications negative outputs can appear, e.g. losses in contrast with profits. Although a few models have been developed under DEA to deal with negative data, we believe that each of them has its own limitations; we therefore developed the Semi-Oriented Radial Model (SORM) to handle the negativity issue in DEA. The application results using SORM show that the overall performance of GCC banking is relatively high (85.6%). Although the efficiency score fluctuated over the study period (1998-2007) owing to the second Gulf War and the international financial crisis, it remained higher than the efficiency scores of counterpart banks in other countries. Banks operating in Saudi Arabia appear to be the most efficient, followed by UAE, Omani and Bahraini banks, while banks operating in Qatar and Kuwait appear to be the least efficient; these two countries were the most affected by the second Gulf War. The results also show no statistical relationship between operating style (Islamic or Conventional) and bank efficiency. Even though there are no statistical differences due to operating style, Islamic banks appear to be more efficient than Conventional banks, with an average efficiency score of 86.33% compared with 85.38% for Conventional banks. Furthermore, Islamic banks appear to have been more affected by the political crisis (the second Gulf War), whereas Conventional banks appear to have been more affected by the financial crisis.
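The efficiency-score idea behind DEA can be illustrated in its simplest form. This is a sketch only: the thesis's SORM and full DEA models are linear programmes handling multiple inputs and outputs and negative data, whereas the code below covers just the single-input, single-output radial case with invented data, not GCC bank figures.

```python
# Illustrative only: radial efficiency (output/input ratio relative to the
# best performer) for the single-input, single-output case. The actual
# SORM/DEA models in the study are linear programmes; data here are
# made up.

def ratio_efficiencies(units: dict[str, tuple[float, float]]) -> dict[str, float]:
    """Map each unit's (input, output) pair to an efficiency score in (0, 1],
    relative to the unit with the best output/input ratio."""
    ratios = {name: out / inp for name, (inp, out) in units.items()}
    best = max(ratios.values())
    return {name: r / best for name, r in ratios.items()}

banks = {"A": (100.0, 80.0), "B": (120.0, 120.0), "C": (90.0, 45.0)}
print(ratio_efficiencies(banks))  # bank B, with the best ratio, scores 1.0
```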
Abstract:
This article examines cost economies, productivity growth and cost efficiency of the Chinese banks using a unique panel dataset that identifies banks' four outputs and four input prices over the period of 1995-2001. By assessing the appropriateness of model specification, and making use of alternative methodologies in evaluating the performance of banks, we find that the joint-stock commercial banks outperform state-owned commercial banks in productivity growth and cost efficiency. Under the variable cost assumption, Chinese banks display economies of scale, with state-owned commercial banks enjoying cost advantages over the joint-stock commercial banks. Consequently, our results highlight the ownership advantage of these two types of banks and generally support the ongoing banking reform and transformation that is currently taking place in China.
Abstract:
The role of interest and agency in the creation and transformation of institutions, in particular the “paradox of embedded agency” (Seo & Creed, 2002), have long puzzled institutional scholars. Most recently, Lawrence and Suddaby (2006) coined the term “institutional work” to describe various strategies for creating, maintaining and disrupting institutions. This label, while useful to integrate existing research, highlights institutionalists’ lack of attention to work as actors’ everyday occupational tasks and activities. Thus, the objective of this study is to take institutional work literally and ask: How does practical work come to constitute institutional work? Drawing on concepts of “situated change” (Orlikowski, 1996), I supplement existing macro-level perspectives of change with a microscopic, practice-based alternative. I examine the everyday work of English and German banking lawyers in a global law firm. Located at the intersection of local laws, international financial markets, commercial logics and professional norms, banking lawyers’ work regularly bridges different normative settings. Hence, they must constructively negotiate contradictory meanings, practices and logics to develop shared routines that resonate with different normative frameworks and facilitate task accomplishment. Based on observation and interview data, the paper distils a process model of banking transactions that highlights the critical interfaces forcing English and German banking lawyers into cross-border sensemaking. It distinguishes two accounts of cross-border sensemaking: the “old story” in which contradictory practices and norms collide and the “new story” of a synthetic set of practices for collaboratively “editing” (Sahlin-Andersson, 1996) legal documentation. Data show how new practices gain shape and legitimacy over a series of dialectic contests unfolding at work and how, in turn, these contests shift institutional logics as lawyers ‘get the deal done’.
These micro-mechanisms suggest that as practical and institutional work blend, everyday working practices come to constitute a form of institutional agency that is situated, emergent, dialectic and, therefore, embedded.
Abstract:
This paper analyses the efficiency of Malaysian commercial banks between 1996 and 2002 and finds that while the East Asian financial crisis caused a short-term increase in efficiency in 1998, primarily due to cost-cutting, increases in non-performing loans after the crisis caused a more sustained decline in bank efficiency. It is also found that mergers, fully Islamic banks, and conventional banks operating Islamic banking windows are all associated with lower efficiency. The paper's estimates suggest mild decreasing returns to scale, and an average productivity change of 2.37% that is primarily attributable to technical change, which has nonetheless declined over time. Finally, while Islamic banks have been moderately successful in developing new products and technologies, the results suggest that the potential for Islamic banks to overcome their relative inefficiency is limited.
Abstract:
Research Question/Issue: In this paper, we empirically investigate whether US listed commercial banks with effective corporate governance structures engage in higher levels of conservative financial accounting and reporting. Research Findings/Insights: Using both market- and accrual-based measures of conservatism and both composite and disaggregated governance indices, we document convincing evidence that well-governed banks engage in significantly higher levels of conditional conservatism in their financial reporting practices. For example, we find that banks with effective governance structures, particularly those with effective board and audit governance structures, recognize loan loss provisions that are larger relative to changes in nonperforming loans compared to their counterparts with ineffective governance structures. Theoretical/Academic Implications: We contribute to the extant literature on the relationship between corporate governance and quality of accounting information by providing evidence that banks with effective governance structures practice higher levels of accounting conservatism. Practitioner/Policy Implications: The findings of this study would be useful to US bank regulators/supervisors in improving the existing regulatory framework by focusing on accounting conservatism as a complement to corporate governance in mitigating the opaqueness and intense information asymmetry that plague banks.
Abstract:
Background/aim: The technique of photoretinoscopy is unique in being able to measure the dynamics of the oculomotor system (ocular accommodation, vergence, and pupil size) remotely (working distance typically 1 metre) and objectively in both eyes simultaneously. The aim of this study was to evaluate clinically the measurement of refractive error by a recent commercial photoretinoscopic device, the PowerRefractor (PlusOptiX, Germany). Method: The validity and repeatability of the PowerRefractor were compared to subjective (non-cycloplegic) refraction in 100 adult subjects (mean age 23.8 (SD 5.7) years) and objective autorefraction (Shin-Nippon SRW-5000, Japan) in 150 subjects (20.1 (4.2) years). Repeatability was assessed by examining the differences between autorefractor readings taken from each eye and by re-measuring the objective prescription of 100 eyes at a subsequent session. Results: On average the PowerRefractor prescription was not significantly different from the subjective refraction, although quite variable (difference -0.05 (0.63) D, p = 0.41), and was more negative than the SRW-5000 prescription (by -0.20 (0.72) D, p<0.001). There was no significant bias in the accuracy of the instrument with regard to the type or magnitude of refractive error. The PowerRefractor was found to be repeatable over the prescription range of -8.75 D to +4.00 D (mean spherical equivalent) examined. Conclusion: The PowerRefractor is a useful objective screening instrument and, because of its remote and rapid measurement of both eyes simultaneously, is able to assess the oculomotor response in a variety of unrestricted viewing conditions and patient types.
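Agreement figures reported as "mean difference (SD)", such as -0.05 (0.63) D above, come from a mean-difference (Bland-Altman style) comparison of paired readings. The sketch below shows that calculation on invented refraction data, not the study's measurements; the function name is ours.

```python
from statistics import mean, stdev

# Sketch of the mean-difference comparison behind figures such as
# "-0.05 (0.63) D". The paired refraction data below are invented
# for illustration.

def agreement(device_a: list[float], device_b: list[float]):
    """Mean difference, SD of differences, and 95% limits of agreement."""
    diffs = [a - b for a, b in zip(device_a, device_b)]
    m, s = mean(diffs), stdev(diffs)
    return m, s, (m - 1.96 * s, m + 1.96 * s)

power_refractor = [-2.00, 0.25, -4.50, 1.00, -0.75]  # hypothetical, in D
subjective      = [-1.75, 0.50, -4.25, 0.75, -1.00]  # hypothetical, in D

m, s, loa = agreement(power_refractor, subjective)
print(round(m, 2), round(s, 2))  # → -0.05 0.27
```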
Abstract:
Power system simulation software is a useful tool for teaching the fundamentals of power system design and operation. However, existing commercial packages are not ideal for teaching work-based students because of their high cost, the complexity of the software and licensing restrictions. This paper describes a set of power systems libraries that have been developed for use with the free student edition of the Micro-Cap SPICE simulator, overcoming these problems. In addition, these libraries are easily adapted to include power-electronic-converter-based components in the simulation, such as HVDC, FACTS and smart-grid devices, as well as advanced system control functions. These types of technology are set to become more widespread throughout existing power networks, and their inclusion in a power engineering degree course is therefore becoming increasingly important.