Abstract:
Cane railway systems provide empty bins for harvesters to fill and full bins of cane for the factory to process. These operations need to be conducted in a timely fashion to minimise delays to harvesters and the factory and to minimise the cut-to-crush delay, while also minimising the cost of providing this service. A range of tools has been provided over the years to assist in this process. This paper reviews the objectives of the cane transport system and the tools available to achieve those objectives. The facilities within these tools to assist in the control of costs are highlighted.
Abstract:
The study seeks to find out whether the real burden of personal taxation has increased or decreased. In order to determine this, we investigate how the same real income has been taxed in different years. Whenever the taxes on the same real income are higher in a given year than in the base year, the real tax burden has increased; if they are lower, it has decreased. The study thus seeks to estimate how changes in the tax regulations affect the real tax burden. It should be kept in mind that the progression in the central government income tax schedule ensures that a real change in income will bring about a change in the tax ratio. Inflation, too, will increase the real tax burden when the tax schedules are kept nominally unchanged. In the calculations of the study the real income is assumed to remain constant, so that we obtain an unbiased measure of the effects of governmental actions in real terms.

The main factors influencing the amount of income taxes an individual must pay are as follows:
- gross income (income subject to central and local government taxes);
- deductions from gross income and taxes calculated according to tax schedules;
- the central government income tax schedule (progressive income taxation);
- the rates for the local taxes and for social security payments (proportional taxation).

In the study we investigate how much a certain group of taxpayers would have paid in taxes according to the actual tax regulations prevailing in different years if their income were kept constant in real terms. Other factors affecting tax liability are kept strictly unchanged (as constants). The resulting taxes, expressed in fixed prices, are then compared to the taxes levied in the base year (hypothetical taxation). The question we are addressing is thus how much taxes a certain group of taxpayers with the same socioeconomic characteristics would have paid on the same real income according to the actual tax regulations prevailing in different years. This has been suggested as the main way to measure real changes in taxation, although there are several alternative measures with essentially the same aim.

Next, an aggregate indicator of changes in income tax rates is constructed. It is designed to show how much the taxation of income has increased or decreased from one year to the next on average. The main question remains: how should aggregation over all income levels be performed? In order to determine the average real changes in the tax scales, the difference functions (differences between the actual and hypothetical taxation functions) were aggregated using taxable income as weights. Besides the difference functions, the relative changes in real taxes can be used as indicators of change. In this case the ratio between the taxes computed according to the new and the old situation indicates whether taxation has become heavier or lighter. The relative changes in tax scales can be described in a way similar to that used in describing the cost of living, or by means of price indices. For example, we can use Laspeyres' price index formula to compute the ratio between the taxes determined by the new tax scales and the old tax scales. The formula answers the question: how much more or less will be paid in taxes according to the new tax scales than according to the old ones, when the real income situation corresponds to the old one? In real terms, the central government tax burden declined steadily from its high post-war level until the mid-1950s.
The real tax burden then drifted upwards until the mid-1970s; the real level of taxation in 1975 was twice that of 1961. The 1980s were a stable phase, owing to the inflation corrections of the tax schedules. In 1989 the tax schedule was cut drastically, and since the mid-1990s the tax schedules have reduced the real tax burden significantly. Local tax rates have risen continuously, from 10 percent in 1948 to nearly 19 percent in 2008. Deductions have lowered the real tax burden, especially in recent years. The aggregate figures indicate how the tax ratio for the same real income has changed over the years under the prevailing tax regulations. We call the tax ratio calculated in this manner the real income tax ratio. A change in the real income tax ratio depicts an increase or decrease in the real tax burden. The real income tax ratio declined for some years after the war, and from the beginning of the 1960s to the mid-1970s it nearly doubled. Since the mid-1990s the real income tax ratio has fallen by about 35%.
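A minimal formalization of the Laspeyres-type comparison described above, with notation assumed for illustration (the abstract does not spell out the formula): let T_old and T_new be the taxation functions under the old and new tax scales, y_i the base-year real taxable incomes, and w_i the taxable-income weights used in the aggregation,

\[
I_L \;=\; \frac{\sum_i w_i \, T_{\mathrm{new}}(y_i)}{\sum_i w_i \, T_{\mathrm{old}}(y_i)} .
\]

A value of I_L above one means the new scales tax the same real income more heavily; a value below one means more lightly.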
Abstract:
Inter-enterprise collaboration has become essential for the success of enterprises. As competition increasingly takes place between supply chains and networks of enterprises, there is a strategic business need to participate in multiple collaborations simultaneously. Collaborations based on an open market of autonomous actors set special requirements for the computing facilities that support the setup and management of these business networks of enterprises. Currently, the safeguards against privacy threats in collaborations that cross organizational borders are both insufficient and incompatible with the open market. A broader understanding is needed of the architecture of defense structures, and privacy threats must be detected not only at the level of a private person or enterprise, but at the community and ecosystem levels as well. Control measures must be automated wherever possible in order to keep the cost and effort of collaboration management reasonable. This article contributes to the understanding of the modern inter-enterprise collaboration environment and the privacy threats in it, and presents the automated control measures required to ensure that actors in inter-enterprise collaborations behave correctly to preserve privacy.
Abstract:
Conformance testing focuses on checking whether an implementation under test (IUT) behaves according to its specification. Typically, testers are interested in performing targeted tests that exercise certain features of the IUT. This intention is formalized as a test purpose. The tester needs a "strategy" to reach the goal specified by the test purpose. Also, for a particular test case, the strategy should tell the tester whether the IUT has passed, failed, or deviated from the test purpose. In [8], Jeron and Morel show how to compute, for a given finite state machine specification and a test purpose automaton, a complete test graph (CTG) which represents all test strategies. In this paper, we consider the case when the specification is a hierarchical state machine and show how to compute a hierarchical CTG which preserves the hierarchical structure of the specification. We also propose an algorithm for an online test oracle which avoids the space overhead associated with the CTG.
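As a rough illustration of the kind of object a test-graph construction works with (a sketch only, not Jeron and Morel's algorithm and not the hierarchical variant of this paper), the following computes the reachable synchronous product of a flat specification FSM and a test purpose automaton and marks the product states in which the purpose is satisfied. The transition-map encoding and the "stay put when the purpose does not constrain an action" convention are assumptions made for the example.

```python
from collections import deque

def product_graph(spec, purpose, spec_init, purpose_init, accept):
    """Breadth-first synchronous product of a specification FSM and a
    test purpose automaton.  `spec` and `purpose` map (state, action)
    to a next state; `accept` is the set of accepting purpose states.
    Returns the reachable product states, the product edges, and the
    product states where the test purpose is satisfied."""
    start = (spec_init, purpose_init)
    states, edges, goals = {start}, [], set()
    queue = deque([start])
    while queue:
        s, p = queue.popleft()
        for (st, act), nxt in spec.items():
            if st != s:
                continue
            # Assumed convention: if the purpose does not constrain
            # this action, it stays in its current state.
            np = purpose.get((p, act), p)
            succ = (nxt, np)
            edges.append(((s, p), act, succ))
            if np in accept:
                goals.add(succ)
            if succ not in states:
                states.add(succ)
                queue.append(succ)
    return states, edges, goals
```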
Abstract:
The physical design of a VLSI circuit involves circuit partitioning as a subtask. Typically, it is necessary to partition a large electrical circuit into several smaller circuits such that the total cross-wiring is minimized. This problem is a variant of the more general graph partitioning problem, for which no polynomial-time algorithm that obtains an optimal partition is known. The heuristic procedure proposed by Kernighan and Lin [1,2] requires O(n² log₂ n) time to obtain a near-optimal two-way partition of a circuit with n modules. In the VLSI context, due to the large problem sizes involved, this computational requirement is unacceptably high. This paper is concerned with the hardware acceleration of the Kernighan-Lin procedure on an SIMD architecture. The proposed parallel partitioning algorithm requires O(n) processors and has a time complexity of O(n log₂ n). In the proposed scheme, the reduced array architecture is employed with due consideration towards the cost effectiveness and VLSI realizability of the architecture. The authors are not aware of any earlier attempts to parallelize a circuit partitioning algorithm in general or the Kernighan-Lin algorithm in particular. The use of the reduced array architecture is novel and opens up the possibility of using this computing structure for several other applications in electronic design automation.
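For reference, a sequential sketch of one Kernighan-Lin improvement pass, the gain-update loop that dominates the O(n² log₂ n) cost and that the paper maps onto the SIMD array (this is the textbook serial form, not the parallel algorithm; the dict-of-dicts wiring-cost encoding is an assumption):

```python
import itertools

def kl_pass(adj, part_a, part_b):
    """One Kernighan-Lin improvement pass over a two-way partition.
    `adj[u][v]` is the cross-wiring cost between modules u and v."""
    a, b = set(part_a), set(part_b)
    # D-value of v: external minus internal connection cost.
    d = {v: sum(adj[v].get(u, 0) for u in (b if v in a else a)) -
            sum(adj[v].get(u, 0) for u in (a if v in a else b))
         for v in a | b}
    gains, swaps, locked = [], [], set()
    for _ in range(min(len(a), len(b))):
        # Best unlocked pair by gain g = D[x] + D[y] - 2*c(x, y).
        x, y, g = max(((x, y, d[x] + d[y] - 2 * adj[x].get(y, 0))
                       for x in a - locked for y in b - locked),
                      key=lambda t: t[2])
        gains.append(g); swaps.append((x, y)); locked |= {x, y}
        for v in (a | b) - locked:   # update D-values as if x, y swapped
            cx, cy = adj[v].get(x, 0), adj[v].get(y, 0)
            d[v] += 2 * (cx - cy) if v in a else 2 * (cy - cx)
    # Apply the prefix of swaps with the best positive cumulative gain.
    best_k, best = max(enumerate(itertools.accumulate(gains), 1),
                       key=lambda t: t[1])
    if best > 0:
        for x, y in swaps[:best_k]:
            a.remove(x); b.remove(y); a.add(y); b.add(x)
    return a, b, max(best, 0)
```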
Abstract:
An important issue in the design of a distributed computing system (DCS) is the development of a suitable protocol. This paper presents an effort to systematize the protocol design procedure for a DCS. Protocol design and development can be divided into six phases: specification of the DCS, specification of protocol requirements, protocol design, specification and validation of the designed protocol, performance evaluation, and hardware/software implementation. This paper describes techniques for the second and third phases, while the first phase has been considered by the authors in their earlier work. Matrix-based and set-theoretic approaches are used for the specification of a DCS and for the specification of the protocol requirements. These two formal specification techniques form the basis of a simple and straightforward procedure for the design of the protocol. The applicability of this design procedure is illustrated with the example of a computing system encountered on board a spacecraft. A Petri-net-based approach has been adopted to model the protocol. The methodology developed in this paper can be used in other DCS applications.
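A minimal sketch of the Petri-net token game underlying such protocol models (generic firing semantics only; the two-transition request/acknowledge fragment is hypothetical, not the spacecraft example of the paper):

```python
def enabled(marking, pre):
    """Transitions whose input places all carry enough tokens."""
    return [t for t, needs in pre.items()
            if all(marking.get(p, 0) >= n for p, n in needs.items())]

def fire(marking, pre, post, t):
    """Fire transition t: consume pre-set tokens, produce post-set tokens."""
    m = dict(marking)
    for p, n in pre[t].items():
        m[p] -= n
    for p, n in post[t].items():
        m[p] = m.get(p, 0) + n
    return m

# Hypothetical request/acknowledge protocol fragment:
pre  = {"send_req": {"idle": 1}, "send_ack": {"req_pending": 1}}
post = {"send_req": {"req_pending": 1}, "send_ack": {"idle": 1}}
m0 = {"idle": 1}
m1 = fire(m0, pre, post, enabled(m0, pre)[0])  # {'idle': 0, 'req_pending': 1}
```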
Abstract:
The phenomena of nonlinear I-V behavior and electrical switching find extensive applications in power control, information storage, oscillators, etc. The study of I-V characteristics and switching parameters is necessary for the proper application of switching materials and devices. In the present work, a simple low-cost electrical switching analyzer has been developed for measuring the electrical characteristics of switching materials and devices. The system consists of a microcontroller-based excitation source and a high-speed data acquisition system. The design details of the excitation source, its interface with the high-speed data acquisition system and a personal computer, and the application software developed for automated measurements are described. Typical I-V characteristics and switching curves obtained with the system are also presented to illustrate the capability of the instrument.
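As a small illustration of the kind of post-processing such an analyzer's application software might perform (an assumption; the abstract does not describe the software internals), the following locates a switching threshold in an acquired I-V sweep:

```python
def switching_point(voltages, currents, jump_factor=5.0):
    """Locate the threshold of a switching event in a swept I-V trace:
    the first sample where the current jumps by more than `jump_factor`
    relative to the previous sample.  The factor is an illustrative
    choice, not a value from the paper."""
    for i in range(1, len(currents)):
        prev = currents[i - 1]
        if prev > 0 and currents[i] / prev > jump_factor:
            return voltages[i], currents[i]
    return None  # no switching event detected in this sweep
```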
Abstract:
A fuzzy system is developed using a linearized performance model of the gas turbine engine for performing gas turbine fault isolation from noisy measurements. By using a priori information about measurement uncertainties and through design variable linking, the design of the fuzzy system is posed as an optimization problem with a small number of design variables, which can be solved using the genetic algorithm in a considerably short amount of computer time. The faults modeled are module faults in five modules: fan, low pressure compressor, high pressure compressor, high pressure turbine and low pressure turbine. The measurements used are deviations in exhaust gas temperature, low rotor speed, high rotor speed and fuel flow from a baseline 'good engine'. The genetic fuzzy system (GFS) allows rapid development of the rule base if the fault signatures and measurement uncertainties change, as happens for different engines and airlines. In addition, the genetic fuzzy system reduces the human effort needed in the trial-and-error process used to design the fuzzy system and makes the development of such a system easier and faster. A radial basis function neural network (RBFNN) is also used to preprocess the measurements before fault isolation. The RBFNN shows significant noise reduction and, when combined with the GFS, leads to a diagnostic system that is highly robust to the presence of noise in the data, showing the advantage of using a soft computing approach for gas turbine diagnostics.
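A toy sketch of the fuzzy fault-isolation step (the fault signatures, membership shapes and spreads below are illustrative placeholders; in the paper these parameters are what the genetic algorithm tunes):

```python
import math

# Hypothetical fault signatures: expected measurement deviations
# (EGT, low rotor speed, high rotor speed, fuel flow) per module fault.
SIGNATURES = {
    "fan": (0.6, -1.0, 0.1, 0.4),
    "LPC": (0.5, -0.3, -0.6, 0.5),
    "HPC": (0.9, 0.2, -1.1, 0.8),
    "HPT": (1.2, 0.1, 0.7, 1.0),
    "LPT": (0.8, -0.9, 0.3, 0.6),
}

def gaussian_mu(x, center, sigma):
    """Degree of membership of deviation x in a fault's fuzzy set."""
    return math.exp(-0.5 * ((x - center) / sigma) ** 2)

def isolate(deltas, sigmas=(0.3, 0.3, 0.3, 0.3)):
    """Fire one rule per module fault: rule strength is the product
    t-norm of the per-measurement memberships.  Returns the strongest
    fault hypothesis and all rule strengths."""
    strength = {
        fault: math.prod(gaussian_mu(x, c, s)
                         for x, c, s in zip(deltas, sig, sigmas))
        for fault, sig in SIGNATURES.items()
    }
    return max(strength, key=strength.get), strength
```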
Abstract:
Lightweight grids for lead-acid batteries have been prepared from acrylonitrile-butadiene-styrene (ABS) copolymer, followed by coating with lead. Subsequently, the grids have been electrochemically coated with a conductive and corrosion-resistant layer of polyaniline. These grids are about 75% lighter than those employed in conventional lead-acid batteries. Commercial-grade 6 V/3.5 Ah (C-20-rate) lead-acid batteries have been assembled and characterized employing positive and negative plates built on these grids. The specific energy of such a lead-acid battery is about 50 Wh/kg. The batteries can withstand fast charge-discharge duty cycles.
Abstract:
People in many countries are affected by fluorosis owing to high levels of fluoride in drinking water. An inexpensive method for estimating the concentration of the fluoride ion in drinking water would be helpful in identifying safe sources of water and also in monitoring the performance of defluoridation techniques. For this purpose, a simple, inexpensive, and portable colorimeter has been developed in the present work. It is used in conjunction with the SPADNS method, which shows a color change in the visible region on addition of water containing fluoride to a reagent solution. Groundwater samples were collected from different parts of the state of Karnataka, India and analysed for fluoride. The results obtained using the colorimeter and a double-beam spectrophotometer agreed fairly well. The costs of the colorimeter and of the chemicals required per test were about Rs. 250 (US$ 5) and Rs. 2.5 (US$ 0.05), respectively. In addition, the cost of the chemicals required for constructing the calibration curve was about Rs. 15 (US$ 0.3).
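A minimal sketch of the calibration-curve step mentioned above (the standards and absorbance readings are hypothetical; SPADNS absorbance decreases as fluoride bleaches the reagent, hence the negative slope):

```python
import numpy as np

def fit_calibration(conc_mg_l, absorbance):
    """Least-squares straight line A = m*c + b through the standards."""
    m, b = np.polyfit(conc_mg_l, absorbance, 1)
    return m, b

def fluoride_from_absorbance(a_sample, m, b):
    """Invert the calibration line to read off the concentration."""
    return (a_sample - b) / m

# Illustrative standards (values hypothetical, not from the paper):
standards = [0.0, 0.5, 1.0, 1.5, 2.0]       # mg/L fluoride
readings  = [0.60, 0.52, 0.45, 0.37, 0.30]  # measured absorbance
m, b = fit_calibration(standards, readings)
print(round(fluoride_from_absorbance(0.41, m, b), 2))  # ~1.25 mg/L
```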
Abstract:
A symmetric solution X satisfying the matrix equation XA = AᵀX is called a symmetrizer of the matrix A. A general algorithm to compute a matrix symmetrizer is obtained. A new multiple-modulus residue arithmetic called floating-point modular arithmetic is described and implemented on the algorithm to compute an error-free matrix symmetrizer.
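For concreteness, a dense floating-point sketch of the symmetrizer problem itself (ordinary numpy linear algebra, not the paper's error-free residue-arithmetic algorithm): the constraint XA = AᵀX is linear in the n(n+1)/2 independent entries of a symmetric X, so a symmetrizer can be read off a null-space vector of the resulting system.

```python
import numpy as np

def symmetrizer(A):
    """Compute a symmetric X with X A = A^T X by stacking the n*n
    scalar constraints into a linear system over the upper-triangular
    entries of X and extracting a null-space vector via SVD."""
    n = A.shape[0]
    pairs = [(i, j) for i in range(n) for j in range(i, n)]
    M = np.zeros((n * n, len(pairs)))
    for r in range(n):           # constraint (X A - A^T X)[r, c] = 0
        for c in range(n):
            for k, (i, j) in enumerate(pairs):
                coef = 0.0       # coefficient of X[i, j] (= X[j, i])
                if r == i:            coef += A[j, c]
                if r == j and i != j: coef += A[i, c]
                if c == j:            coef -= A[i, r]
                if c == i and i != j: coef -= A[j, r]
                M[r * n + c, k] = coef
    _, _, vt = np.linalg.svd(M)  # last row of Vt spans the null space
    X = np.zeros((n, n))
    for k, (i, j) in enumerate(pairs):
        X[i, j] = X[j, i] = vt[-1][k]
    return X

# A = np.array([[0., 1.], [2., 3.]]); X = symmetrizer(A)
# np.allclose(X @ A, A.T @ X)  -> True (up to numerical tolerance)
```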
Abstract:
A real or a complex symmetric matrix is defined here as an equivalent symmetric matrix for a real nonsymmetric matrix if both have the same eigenvalues. An equivalent symmetric matrix is useful in computing the eigenvalues of a real nonsymmetric matrix. A procedure to compute equivalent symmetric matrices and its mathematical foundation are presented.
Abstract:
In this paper, we present a generic method/model for the multi-objective design optimization of laminated composite components, based on the Vector Evaluated Artificial Bee Colony (VEABC) algorithm. VEABC is a parallel, vector-evaluated, swarm-intelligence multi-objective variant of the Artificial Bee Colony (ABC) algorithm. In the current work a modified version of the VEABC algorithm for discrete variables has been developed and implemented successfully for the multi-objective design optimization of composites. The problem is formulated with the multiple objectives of minimizing the weight and the total cost of the composite component to achieve a specified strength. The primary optimization variables are the number of layers, the stacking sequence (the orientation of the layers) and the thickness of each layer. Classical lamination theory is utilized to determine the stresses in the component, and the design is evaluated based on three failure criteria: the failure-mechanism-based criterion, the maximum-stress criterion and the Tsai-Wu criterion. The optimization method is validated for a number of different loading configurations: uniaxial, biaxial and bending loads. The design optimization has been carried out both for variable stacking sequences and for fixed standard stacking schemes, and a comparative study of the different design configurations evolved is presented. Finally, the performance is evaluated in comparison with other nature-inspired techniques, which include Particle Swarm Optimization (PSO), Artificial Immune System (AIS) and the Genetic Algorithm (GA). The performance of ABC is on par with that of PSO, AIS and GA for all the loading configurations.
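For orientation, a minimal single-objective, continuous-variable ABC sketch showing the employed/onlooker/scout mechanics that VEABC builds on (the paper's algorithm is a discrete, vector-evaluated multi-objective variant; all parameters below are illustrative):

```python
import random

def abc_minimize(f, bounds, n_food=20, limit=10, iters=200):
    """Minimal Artificial Bee Colony for minimizing f over box bounds.
    Assumes f >= 0, since the onlooker weights 1/(1 + cost) presuppose
    a nonnegative objective."""
    dim = len(bounds)
    rand = lambda: [random.uniform(lo, hi) for lo, hi in bounds]
    food = [rand() for _ in range(n_food)]   # food sources (candidates)
    cost = [f(x) for x in food]
    trials = [0] * n_food                    # stagnation counters

    def try_improve(i):
        # Perturb one coordinate of source i toward a random partner k.
        k = random.choice([j for j in range(n_food) if j != i])
        d = random.randrange(dim)
        x = list(food[i])
        x[d] += random.uniform(-1, 1) * (food[i][d] - food[k][d])
        lo, hi = bounds[d]
        x[d] = min(max(x[d], lo), hi)
        c = f(x)
        if c < cost[i]:                      # greedy selection
            food[i], cost[i], trials[i] = x, c, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):              # employed bee phase
            try_improve(i)
        weights = [1.0 / (1.0 + c) for c in cost]
        for _ in range(n_food):              # onlookers pick by fitness
            try_improve(random.choices(range(n_food), weights=weights)[0])
        for i in range(n_food):              # scouts replace stale sources
            if trials[i] > limit:
                x = rand()
                food[i], cost[i], trials[i] = x, f(x), 0

    best = min(range(n_food), key=cost.__getitem__)
    return food[best], cost[best]

# e.g. abc_minimize(lambda x: sum(v * v for v in x), [(-5, 5)] * 3)
```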
Abstract:
The use of appropriate finite elements in different regions of a stressed solid can be expected to be economical in computing its stress response. This concept is exploited here in studying stresses near free edges in laminated coupons. The well-known free edge problem of a [0/90]s symmetric laminate is considered to illustrate the application of the concept. The laminate is modelled as a combination of three distinct regions. Quasi-three-dimensional eight-noded quadrilateral isoparametric elements (Q3D8) are used at and near the free edge of the laminate, and two-noded line elements (Q3D2) are used in the region away from the free edge. A transition element (Q3DT) provides a smooth interphase zone between the two regions. A significant reduction in the problem size, and hence in the computational time and cost, has been achieved at almost no loss of accuracy.
Abstract:
Synthesis of cost-optimal shell-and-tube heat exchangers is a difficult task since it involves a large number of parameters. An attempt is made in this article to simplify the process of choosing the parameter values that will minimize the cost of any heat exchanger satisfying a given heat duty and a particular set of constraints. The simplification is based on decoupling of the geometric and the thermal aspects of the problem. The concept of curves for cost-optimal design is introduced and is shown to simplify the synthesis process for shell-and-tube heat exchangers.
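A tiny sketch of the decoupling idea described above: the thermal side fixes the required heat-transfer area from the heat duty, and the geometric/economic side prices that area. The cost-correlation coefficients are purely illustrative placeholders, not values from the article.

```python
import math

def lmtd(t_hot_in, t_hot_out, t_cold_in, t_cold_out):
    """Log-mean temperature difference for counter-current flow."""
    d1 = t_hot_in - t_cold_out
    d2 = t_hot_out - t_cold_in
    return (d1 - d2) / math.log(d1 / d2) if d1 != d2 else d1

def required_area(q_watts, u, dtlm):
    """Thermal side of the design: area from Q = U * A * dT_lm."""
    return q_watts / (u * dtlm)

def capital_cost(area_m2, a=8000.0, b=260.0, c=0.9):
    """Geometric/economic side: a power-law cost correlation with
    illustrative placeholder coefficients."""
    return a + b * area_m2 ** c

dtlm = lmtd(150.0, 90.0, 30.0, 70.0)
area = required_area(q_watts=500e3, u=850.0, dtlm=dtlm)
print(round(area, 1), round(capital_cost(area)))  # area in m2, cost in $
```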