891 results for Problem of evil
Abstract:
The G20 Finance Ministers have the opportunity this weekend to endorse the initial recommendations of the OECD on how to address the global problem of multinational tax avoidance. The work of the OECD on the issue to date is substantial. Most notable is the adoption by many nations, including Australia, of the Common Reporting Standard for the automatic exchange of tax information. This standard will allow significant inroads to be made into tax avoidance, particularly by individuals sheltering money offshore. This is the first step in an ambitious tax reform program. There is a long way to go if we are to end the issue now known as Base Erosion and Profit Shifting (BEPS). This week’s release of the first of the OECD recommendations contains some positive signs that further advances will be made. It also recognises some hard truths.
Abstract:
A mixed boundary value problem associated with the diffusion equation, arising from the physical problem of cooling of an infinite parallel-sided composite slab, is solved completely by using the Wiener-Hopf technique. An analytical expression is derived for the sputtering temperature at the quench front created by a cold fluid moving on the upper surface of the slab at a constant speed v. The dependence of the sputtering temperature on the various configurational parameters of the problem is rather complicated, and representative tables of numerical values of this important physical quantity are prepared for certain typical values of these parameters. Asymptotic results in their most simplified forms are also obtained when (i) the ratio of the thicknesses of the two materials comprising the slab is much smaller than unity, and (ii) the quench-front speed v is very large, with the other parameters kept fixed in both cases.
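The abstract does not fix any notation, so as a hedged sketch only, a mixed boundary value problem of the kind described (diffusion in a two-layer slab with a quench front moving at speed $v$; the symbols $T_i$, $\kappa_i$, $y$, and $T_q$ are assumptions) has the general shape:

```latex
\begin{equation}
  \frac{\partial T_i}{\partial t}
    = \kappa_i\,\frac{\partial^2 T_i}{\partial y^2},
  \qquad i = 1, 2,
\end{equation}
with continuity of temperature and heat flux at the interface between the two
materials, and a mixed condition on the upper surface split at the moving
quench front $x = vt$:
\begin{equation}
  T_1 = T_q \quad (x < vt),
  \qquad
  \frac{\partial T_1}{\partial y} = 0 \quad (x > vt).
\end{equation}
```

It is this change of boundary-condition type across the quench front that makes the problem "mixed" and motivates the Wiener-Hopf treatment.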
Abstract:
The problem of excitation of a multilayered-graded-dielectric-coated conductor by a magnetic ring source is formulated in the form of a contour integral, which is solved by using the method of steepest descent. Numerical evaluation of the launching efficiency shows that a high value of about 90 percent can be attained by choosing proper dimensions of the launcher with respect to the dimension of the surface wave line.
Abstract:
This article addresses the problem of how to select the optimal combination of sensors and how to determine their optimal placement in a surveillance region in order to meet the given performance requirements at minimal cost for a multimedia surveillance system. We propose to solve this problem by obtaining a performance vector, with its elements representing the performances of subtasks, for a given input combination of sensors and their placement. We then show that the optimal sensor selection problem can be converted into an Integer Linear Programming (ILP) problem by using a linear model for computing the optimal performance vector corresponding to a sensor combination, that is, the performance vector corresponding to the optimal placement of that combination. To demonstrate the utility of our technique, we design and build a surveillance system consisting of PTZ (Pan-Tilt-Zoom) cameras and active motion sensors for capturing faces. Finally, we show experimentally that optimal placement of sensors based on the design maximizes the system performance.
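The core idea — pick a 0/1 selection of sensors that meets per-subtask performance requirements at minimal cost — can be illustrated on a toy instance. This is a hedged sketch, not the paper's model: the costs, performance vectors, and requirements below are hypothetical, and the ILP is solved by brute-force enumeration purely for illustration.

```python
from itertools import product

# Hypothetical tiny instance: 4 candidate sensor placements.
# perfs[i] = performance contribution per subtask, costs[i] = deployment cost.
costs = [5, 3, 4, 2]
perfs = [(0.6, 0.2), (0.3, 0.5), (0.4, 0.4), (0.2, 0.3)]
required = (0.8, 0.6)  # minimum aggregate performance per subtask

def solve_ilp_by_enumeration(costs, perfs, required):
    """Enumerate 0/1 selection vectors: the brute-force analogue of the ILP
    'minimise total cost s.t. summed performance meets each requirement'."""
    best = None
    for x in product((0, 1), repeat=len(costs)):
        perf = [sum(x[i] * perfs[i][k] for i in range(len(costs)))
                for k in range(len(required))]
        if all(p >= r for p, r in zip(perf, required)):
            cost = sum(x[i] * costs[i] for i in range(len(costs)))
            if best is None or cost < best[0]:
                best = (cost, x)
    return best

print(solve_ilp_by_enumeration(costs, perfs, required))  # → (8, (1, 1, 0, 0))
```

A real instance would hand the same 0/1 formulation to an ILP solver; only the toy scale permits enumeration here.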
Abstract:
A direct transform technique is found to be most suitable for attacking two-dimensional diffraction problems. As a first example of the application of the technique, the well-known Sommerfeld problem is reconsidered, and the solution of the problem of diffraction of a cylindrical pulse by a half-plane is used to deduce the solution of the problem of diffraction of a plane wave by a soft half-plane. Journal of Mathematical Physics is copyrighted by The American Institute of Physics.
Abstract:
In this paper, non-linear programming techniques are applied to the problem of controlling the vibration pattern of a stretched string. First, the problem of finding the magnitudes of two control forces applied at two points l1 and l2 on the string to reduce the energy of vibration over the interval (l1, l2) relative to the energy outside the interval is considered. For this problem the relative merits of various methods of non-linear programming are compared. The more complicated problem of finding the positions and magnitudes of two control forces to obtain the desired energy pattern is then solved by using the sequential unconstrained minimization technique with the Fletcher-Powell search. In the discussion of the results it is shown that the position of the control force is very important in controlling the energy pattern of the string.
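The basic pattern behind all the compared methods is iterative descent on a smooth objective in the control variables. As a hedged sketch only — the quadratic surrogate below is hypothetical, not the paper's string-energy functional — steepest descent with a numerical gradient looks like this:

```python
# Toy "energy ratio" objective over two control-force magnitudes (f1, f2).
# Hypothetical surrogate with its minimum placed at (2.0, -1.0).
def energy_ratio(f1, f2):
    return (f1 - 2.0) ** 2 + 2.0 * (f2 + 1.0) ** 2 + 1.0

def num_grad(f, x, h=1e-6):
    """Central-difference gradient of f at point x (a list of floats)."""
    return [(f(*(x[:i] + [x[i] + h] + x[i + 1:])) -
             f(*(x[:i] + [x[i] - h] + x[i + 1:]))) / (2 * h)
            for i in range(len(x))]

def steepest_descent(f, x0, lr=0.1, iters=500):
    x = list(x0)
    for _ in range(iters):
        g = num_grad(f, x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

f1, f2 = steepest_descent(energy_ratio, [0.0, 0.0])
print(round(f1, 3), round(f2, 3))  # converges near (2.0, -1.0)
```

Methods such as Fletcher-Powell replace the raw gradient step with a quasi-Newton direction, which converges far faster on ill-conditioned objectives.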
Abstract:
We address the novel problem of jointly evaluating multiple speech patterns for automatic speech recognition and training. We propose solutions based on both the non-parametric dynamic time warping (DTW) algorithm and the parametric hidden Markov model (HMM). We show that a hybrid approach is quite effective for the application of noisy speech recognition. We extend the concept to HMM training wherein some patterns may be noisy or distorted. Utilizing the concept of ``virtual pattern'' developed for joint evaluation, we propose selective iterative training of HMMs. When these algorithms are evaluated for burst/transient noisy speech and isolated word recognition, significant improvement in recognition accuracy is obtained over algorithms which do not utilize the joint evaluation strategy.
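The non-parametric building block named above, classic single-pair DTW, can be stated compactly. This is the standard textbook algorithm, not the paper's multi-pattern extension:

```python
# Minimal dynamic time warping between two 1-D feature sequences, using the
# symmetric step pattern and absolute-difference local cost.
def dtw_distance(a, b):
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

print(dtw_distance([1, 2, 3, 4], [1, 2, 2, 3, 4]))  # → 0.0 (pure time warp)
```

A zero distance for the pair above shows the point of DTW: the second sequence is just a time-stretched copy of the first, so alignment absorbs the difference entirely.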
Abstract:
There has been a recent spate of high profile infrastructure cost overruns in Australia and internationally. This is just the tip of a longer-term and more deeply-seated problem with initial budget estimating practice, well recognised in both academic research and industry reviews: the problem of uncertainty. A case study of the Sydney Opera House is used to identify and illustrate the key causal factors and system dynamics of cost overruns. It is conventionally the role of risk management to deal with such uncertainty, but the type and extent of the uncertainty involved in complex projects is shown to render established risk management techniques ineffective. This paper considers a radical advance on current budget estimating practice which involves a particular approach to statistical modelling complemented by explicit training in estimating practice. The statistical modelling approach combines the probability management techniques of Savage, which operate on actual distributions of values rather than flawed representations of distributions, and the data pooling technique of Skitmore, where the size of the reference set is optimised. Estimating training employs particular calibration development methods pioneered by Hubbard, which reduce the bias of experts caused by over-confidence and improve the consistency of subjective decision-making. A new framework for initial budget estimating practice is developed based on the combined statistical and training methods, with each technique being explained and discussed.
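The probability-management idea referenced above — operate on actual distributions of values rather than summary representations — amounts to carrying cost uncertainty as coherent sample vectors and combining them element-wise. The sketch below is a hedged illustration under assumed lognormal cost components; the distributions and names are hypothetical, not from the paper:

```python
import random

random.seed(7)

# Keep each cost component as an actual vector of Monte Carlo trials
# (hypothetical lognormal cost models for two work packages).
trials = 10_000
site_works = [random.lognormvariate(2.0, 0.4) for _ in range(trials)]
structure  = [random.lognormvariate(3.0, 0.3) for _ in range(trials)]

# Trial-coherent element-wise addition: the total's distribution emerges
# directly, with no parametric assumption about its shape.
total = sorted(a + b for a, b in zip(site_works, structure))

p50, p90 = total[trials // 2], total[int(trials * 0.9)]
print(f"P50 = {p50:.1f}, P90 = {p90:.1f}")  # budget at a chosen confidence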
Abstract:
We consider the problem of transmission of correlated discrete alphabet sources over a Gaussian Multiple Access Channel (GMAC). A distributed bit-to-Gaussian mapping is proposed which yields jointly Gaussian codewords. This can guarantee lossless transmission or lossy transmission with given distortions, if possible. The technique can be extended to the system with side information at the encoders and decoder.
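One simple way to realise a bit-to-Gaussian mapping — offered here as a hedged sketch of the idea, not the paper's distributed construction — is to send each k-bit block to the standard-normal quantile of its centred index, so the codeword letters are approximately Gaussian-distributed:

```python
from statistics import NormalDist

def bits_to_gaussian(bits, k=2):
    """Map each k-bit block to a standard-normal quantile (illustrative)."""
    nd = NormalDist()  # standard normal, mean 0, sigma 1
    out = []
    for i in range(0, len(bits), k):
        idx = int("".join(map(str, bits[i:i + k])), 2)
        out.append(nd.inv_cdf((idx + 0.5) / 2 ** k))  # centred quantile
    return out

codeword = bits_to_gaussian([0, 1, 1, 0, 1, 1, 0, 0])
print([round(x, 3) for x in codeword])
```

The mapping is symmetric about zero and order-preserving in the block index, which is why the four 2-bit blocks land on two symmetric pairs of amplitudes.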
Abstract:
We study the problem of the coalescence of twisted flux tubes by assuming that the azimuthal field lines reconnect at a current sheet during the coalescence process and everywhere else the magnetic field is frozen in the fluid. We derive relations connecting the topology of the coalesced flux tube with the topologies of the initial flux tubes, and then obtain a structure equation for calculating the field configuration of the coalesced flux tube from the given topology. Some solutions for the two extreme cases of low-β plasma and high-β plasma are discussed. The coalesced flux tube has less twist than the initial flux tube. Magnetic helicity is found to be exactly conserved during the coalescence, but the assumptions in the model put a constraint on the energy dissipation so that we do not get a relaxation to the minimum-energy Taylor state in the low-β case. It is pointed out that the structure equation connecting the topology and the equilibrium configuration is quite general and can be of use in many two-dimensional flux tube problems.
Abstract:
Choudhuri and Gilman (1987) considered certain implications of the hypothesis that the magnetic flux within the Sun is generated at the bottom of the convection zone and then rises through it. Taking flux rings symmetric around the rotation axis and using reasonable values of different parameters, they found that the Coriolis force deflects these flux rings into trajectories parallel to the rotation axis so that they emerge at rather high latitudes. This paper looks into the question of whether the action of the Coriolis force is subdued when the initial configuration of the flux ring has non-axisymmetries in the form of loop structures. The results depend dramatically on whether the flux ring with the loops lies completely within the convection zone or whether the lower parts of it are embedded in the stable layers underneath the convection zone. In the first case, the Coriolis force suppresses the non-axisymmetric perturbations so that the flux ring tends to remain symmetric and the trajectories are very similar to those of Choudhuri and Gilman (1987). In the second case, however, the lower parts of the flux ring may remain anchored underneath the bottom of the convection zone, but the upper parts of the loops still tend to move parallel to the rotation axis and emerge at high latitudes. Thus the problem of the magnetic flux not being able to come out at the sunspot latitudes still persists after the non-axisymmetries in the flux rings are taken into account.
Abstract:
The integral diaphragm pressure transducer consists of a diaphragm machined from precipitation-hardened martensitic (APX4) steel. Its performance depends upon various factors such as mechanical properties, including induced residual stress levels, and metallurgical and physical parameters arising from the different stages of processing involved. Hence, the measurement and analysis of residual stress become very important for the in-service assessment of a component. In the present work, the stress measurements have been done using the X-ray diffraction (XRD) technique, a non-destructive test (NDT) that is more reliable and more widely used than other NDT techniques. The metallurgical aspects have been studied by adopting conventional metallographic practices, including examination of the microstructure using a light microscope. The dimensional measurements have been carried out using a dimensional gauge. The results of the present investigation reveal that the diaphragm material, after undergoing a series of realization processes, contains a significant amount of retained austenite. Also, the higher compressive stresses induced in the transducer result in non-linearity, zero shift and dimensional instability. The problems of higher retained austenite content and higher compressive stress have been overcome by adopting a new realization process involving machining and cold and hot stabilization soaks, which has brought down the retained austenite content to about 5–6% and the compressive stress to an acceptable level in the range −100 to −150 MPa, with a fine tempered martensitic phase structure and good dimensional stability. The new realization process seems to be quite effective in terms of controlling retained austenite content, residual stress, metallurgical phase and dimensional stability, and this may result in minimum zero shift of the diaphragm system.
Abstract:
In this paper we address the problem of forming procurement networks for items with value adding stages that are linearly arranged. Formation of such procurement networks involves a bottom-up assembly of complex production, assembly, and exchange relationships through supplier selection and contracting decisions. Recent research in supply chain management has emphasized that such decisions need to take into account the fact that suppliers and buyers are intelligent and rational agents who act strategically. In this paper, we view the problem of Procurement Network Formation (PNF) for multiple units of a single item as a cooperative game where agents cooperate to form a surplus maximizing procurement network and then share the surplus in a fair manner. We study the implications of using the Shapley value as a solution concept for forming such procurement networks. We also present a protocol, based on the extensive form game realization of the Shapley value, for forming these networks.
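The Shapley value used as the surplus-sharing rule can be computed, for small games, by averaging each agent's marginal contribution over all join orders. The toy characteristic function below (three supply-chain agents and their coalition surpluses) is hypothetical, not taken from the paper:

```python
from itertools import permutations
from math import factorial

def shapley(players, v):
    """Shapley value by enumerating all n! join orders (fine for small n)."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)  # marginal contribution
            coalition = coalition | {p}
    n_fact = factorial(len(players))
    return {p: total / n_fact for p, total in phi.items()}

# Hypothetical surplus of each coalition of three agents A, B, C.
SURPLUS = {frozenset(): 0, frozenset("A"): 0, frozenset("B"): 0,
           frozenset("C"): 0, frozenset("AB"): 60, frozenset("AC"): 60,
           frozenset("BC"): 0, frozenset("ABC"): 90}

shares = shapley(["A", "B", "C"], SURPLUS.get)
print(shares)  # → {'A': 50.0, 'B': 20.0, 'C': 20.0}
```

A is pivotal in more coalitions, so A receives the larger share; the three shares sum to the grand-coalition surplus of 90, which is the efficiency property the fairness argument relies on.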
Abstract:
In this paper, we exploit the idea of decomposition to match buyers and sellers in an electronic exchange for trading large volumes of homogeneous goods, where the buyers and sellers specify marginal-decreasing piecewise constant price curves to capture volume discounts. Such exchanges are relevant for automated trading in many e-business applications. The problem of determining winners and Vickrey prices in such exchanges is known to have a worst-case complexity equal to that of as many as (1 + m + n) NP-hard problems, where m is the number of buyers and n is the number of sellers. Our method proposes that the overall exchange problem be solved as two separate and simpler problems: 1) a forward auction and 2) a reverse auction, which turn out to be generalized knapsack problems. In the proposed approach, we first determine the quantity of units to be traded between the sellers and the buyers using fast heuristics developed by us. Next, we solve a forward auction and a reverse auction using fully polynomial time approximation schemes available in the literature. The proposed approach has worst-case polynomial time complexity, and our experimentation shows that the approach produces good quality solutions to the problem. Note to Practitioners: In recent times, electronic marketplaces have provided an efficient way for businesses and consumers to trade goods and services. The use of innovative mechanisms and algorithms has made it possible to improve the efficiency of electronic marketplaces by enabling optimization of revenues for the marketplace and of utilities for the buyers and sellers. In this paper, we look at single-item, multiunit electronic exchanges. These are electronic marketplaces where buyers submit bids and sellers ask for multiple units of a single item. We allow buyers and sellers to specify volume discounts using suitable functions.
Such exchanges are relevant for high-volume business-to-business trading of standard products, such as silicon wafers, very large-scale integrated chips, desktops, telecommunications equipment, commoditized goods, etc. The problem of determining winners and prices in such exchanges is known to involve solving many NP-hard problems. Our paper exploits the familiar idea of decomposition, uses certain algorithms from the literature, and develops two fast heuristics to solve the problem in a near optimal way in worst-case polynomial time.
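Once the quantity Q to trade is fixed, the forward-auction side reduces to a knapsack: choose buyer bids (quantity, payment) maximising payment without exceeding Q. The exact 0/1 dynamic program below is a hedged illustration of that reduction with hypothetical bids; the paper itself uses approximation schemes for the piecewise-constant-curve version:

```python
def forward_auction_knapsack(bids, Q):
    """bids: list of (quantity, payment) pairs; Q: units available.
    Returns the maximum total payment using the 0/1 knapsack recurrence."""
    best = [0] * (Q + 1)
    for qty, pay in bids:
        for cap in range(Q, qty - 1, -1):  # iterate capacity downward
            best[cap] = max(best[cap], best[cap - qty] + pay)
    return best[Q]

bids = [(3, 40), (4, 50), (2, 25), (5, 70)]
print(forward_auction_knapsack(bids, 7))  # → 95, from bids (5, 70) + (2, 25)
```

The reverse (procurement) side is the mirror image: minimise sellers' asks subject to supplying at least Q units.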
Abstract:
We address a new problem of improving automatic speech recognition performance, given multiple utterances of patterns from the same class. We have formulated the problem of jointly decoding K multiple patterns given a single Hidden Markov Model. It is shown that such a solution is possible by aligning the K patterns using the proposed Multi Pattern Dynamic Time Warping algorithm, followed by the Constrained Multi Pattern Viterbi Algorithm. The new formulation is tested in the context of speaker independent isolated word recognition for both clean and noisy patterns. When 10 percent of the speech is affected by burst noise at -5 dB Signal to Noise Ratio (local), it is shown that joint decoding using only two noisy patterns reduces the noisy speech recognition error rate by about 51 percent compared to single pattern decoding using the Viterbi Algorithm. In contrast, a simple maximization of individual pattern likelihoods provides only about a 7 percent reduction in error rate.