229 results for "time dependant cost function"


Relevance:

30.00%

Publisher:

Abstract:

The standard approach to tax compliance applies the economics-of-crime methodology pioneered by Becker (1968): in its first application, due to Allingham and Sandmo (1972), it models the behaviour of agents as a decision involving a choice of the extent of their income to report to the tax authorities, given a certain institutional environment represented by parameters such as the probability of detection and the penalties applied in the event the agent is caught. While this basic framework yields important insights on tax compliance behaviour, it has some critical limitations. Specifically, it predicts a level of compliance that is significantly below what is observed in the data. This thesis revisits the original framework with a view to addressing this issue, and to examining the political economy implications of tax evasion for progressivity in the tax structure. The approach followed involves building a macroeconomic, dynamic equilibrium model to examine these issues, using a step-wise model building procedure that starts with some very simple variations of the basic Allingham and Sandmo construct, which are eventually integrated into a dynamic general equilibrium overlapping generations framework with heterogeneous agents. One of the variations involves incorporating the Allingham and Sandmo construct into a two-period model of a small open economy of the type originally attributed to Fisher (1930). A further variation of this simple construct involves allowing agents to first decide whether to evade taxes or not. In the event they decide to evade, the agents then have to decide the extent of income or wealth they wish to under-report. We find that the ‘evade or not’ assumption has strikingly different and more realistic implications for the extent of evasion, and demonstrate that it is a more appropriate modelling strategy in the context of macroeconomic models, which are essentially dynamic in nature and involve consumption smoothing across time and across various states of nature. Specifically, since deciding to undertake tax evasion impairs the consumption-smoothing ability of the agent by creating two states of nature, in which the agent is either ‘caught’ or ‘not caught’, it is possible that the agent's utility under certainty, when choosing not to evade, exceeds the expected utility obtained from evading. Furthermore, the simple two-period model incorporating an ‘evade or not’ choice can be used to demonstrate some strikingly different political economy implications relative to its Allingham and Sandmo counterpart. In variations of the two models that allow for voting on the tax parameter, we find that agents typically vote for a high degree of progressivity by choosing the highest available tax rate from the menu of choices available to them. There is, however, a small range of inequality levels for which agents in the ‘evade or not’ model vote for a relatively low value of the tax rate. The final steps in the model building procedure involve grafting the two-period models with a political economy choice onto a dynamic overlapping generations setting with more general, non-linear tax schedules and a ‘cost-of-evasion’ function that is increasing in the extent of evasion. Results based on numerical simulations of these models show further improvement in the models' ability to match empirically plausible levels of tax evasion.
In addition, the differences between the political economy implications of the ‘evade or not’ version of the model and its Allingham and Sandmo counterpart are now very striking; there is now a large range of values of the inequality parameter for which agents in the ‘evade or not’ model vote for a low degree of progressivity. This is because, in the ‘evade or not’ version of the model, low values of the tax rate encourage a large number of agents to choose the ‘not-evade’ option, so that the redistributive mechanism is more ‘efficient’ relative to situations in which tax rates are high. Some further implications of the models of this thesis relate to whether variations in the level of inequality, and in parameters such as the probability of detection and the penalties for tax evasion, matter for the political economy results. We find that (i) the political economy outcomes for the tax rate are quite insensitive to changes in inequality, and (ii) the voting outcomes change in non-monotonic ways in response to changes in the probability of detection and penalty rates. Specifically, the model suggests that changes in inequality should not matter, although the political outcome for the tax rate at a given level of inequality is conditional on whether there is a large or small extent of evasion in the economy. We conclude that further theoretical research into macroeconomic models of tax evasion is required to identify the structural relationships underpinning the link between inequality and redistribution in the presence of tax evasion. The models of this thesis provide a necessary first step in that direction.
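For readers unfamiliar with the baseline model, the Allingham and Sandmo (1972) reporting problem and the ‘evade or not’ extension described above can be sketched as follows. This is one common parameterisation; the notation is ours, not the thesis's.

```latex
% Agent with true income W chooses declared income X, facing tax rate
% \theta, detection probability p and penalty rate \pi on undeclared income:
\max_{X \in [0,W]} \; \mathbb{E}\big[U(X)\big] =
    (1-p)\, U\!\big(W - \theta X\big)
    \;+\; p\, U\!\big(W - \theta X - \pi (W - X)\big).

% The 'evade or not' variant adds a prior discrete choice: evade only if the
% best interior report beats the certain payoff from full compliance,
\max_{X < W} \mathbb{E}\big[U(X)\big] \;>\; U\big((1-\theta)W\big),
% which is exactly the certainty-versus-expected-utility comparison that
% allows full compliance to dominate for a risk-averse agent.
```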

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a maintenance optimisation method for a multi-state series-parallel system that considers economic dependence and state-dependent inspection intervals. The objective function is the average revenue per unit time, calculated using semi-regenerative theory and the universal generating function (UGF). A new algorithm based on stochastic ordering is also developed to reduce the search space of maintenance strategies and to enhance the efficiency of the optimisation algorithms. A numerical simulation is presented to evaluate the efficiency of the proposed maintenance strategy and optimisation algorithms. The results reveal that maintenance strategies with opportunistic maintenance and state-dependent inspection intervals are more cost-effective when the influence of economic dependence and inspection cost is significant. The study further demonstrates that the proposed optimisation algorithm has higher computational efficiency than commonly employed heuristic algorithms.
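As a concrete illustration of the UGF machinery the abstract relies on, the sketch below composes the discrete state distributions of multi-state elements under the flow-transmission convention (series limits capacity, so takes the minimum; parallel adds capacities). This is a minimal sketch with assumed numbers; the paper's semi-regenerative revenue objective and inspection policy are not reproduced.

```python
from collections import defaultdict
from itertools import product

def compose(u1, u2, op):
    """Compose two UGFs, each a {performance: probability} dict,
    with a structure operator `op`."""
    out = defaultdict(float)
    for (g1, p1), (g2, p2) in product(u1.items(), u2.items()):
        out[op(g1, g2)] += p1 * p2
    return dict(out)

series = lambda a, b: min(a, b)     # series element limits capacity
parallel = lambda a, b: a + b       # parallel elements add capacity

# Two identical two-state pumps (capacity 0 failed, 10 working) in
# parallel, feeding a single valve in series (illustrative numbers).
pump = {0: 0.1, 10: 0.9}
valve = {0: 0.05, 15: 0.95}
pumps = compose(pump, pump, parallel)
system = compose(pumps, valve, series)

expected_capacity = sum(g * p for g, p in system.items())
print(system, expected_capacity)
```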

Relevance:

30.00%

Publisher:

Abstract:

Cloud computing allows vast computational resources to be leveraged quickly and easily in bursts, as and when required. Here we describe a technique that allows Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate, as a proof of principle, the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation without the need for dedicated local computer hardware. Funding source: Cancer Australia (Department of Health and Ageing) Research Grant 614217.
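The cost result follows from per-instance hourly billing: n machines each run for ceil(T/n) hours and every started hour is billed in full, so total spend is minimised (and equal to the single-machine cost) exactly when n divides T. A small sketch with illustrative numbers, not the paper's rates:

```python
import math

def simulation_cost(total_cpu_hours, n_machines, rate_per_hour=1.0):
    """Cost under per-instance hourly billing: each of n machines runs
    ceil(T/n) wall-clock hours and each started hour is billed in full."""
    wall_hours = math.ceil(total_cpu_hours / n_machines)
    return n_machines * wall_hours * rate_per_hour

T = 24  # total single-machine simulation time in hours (illustrative)
for n in range(1, 13):
    cost = simulation_cost(T, n)
    note = "optimal" if T % n == 0 else ""
    print(f"n={n:2d}  wall={math.ceil(T / n):2d} h  cost={cost:5.1f}  {note}")
```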

Relevance:

30.00%

Publisher:

Abstract:

We conducted an in-situ X-ray micro-computed tomography heating experiment at the Advanced Photon Source (USA) to dehydrate an unconfined 2.3 mm diameter cylinder of Volterra gypsum. We used a purpose-built X-ray transparent furnace to heat the sample to 388 K for a total of 310 min, acquiring a three-dimensional time-series tomography dataset comprising nine time steps. The voxel size of (2.2 μm)³ proved sufficient to pinpoint reaction initiation and the organisation of the drainage architecture in space and time. We observed that dehydration commences across a narrow front, which propagates from the margins to the centre of the sample over more than four hours. The advance of this front can be fitted with a square-root function, implying that the initiation of the reaction in the sample can be described as a diffusion process. Novel parallelised computer codes allow the geometry of the porosity and the drainage architecture to be quantified from the very large tomographic datasets (2048³ voxels) in unprecedented detail. We determined the position, volume, shape and orientation of each resolvable pore and tracked these properties over the duration of the experiment. We found that the pore-size distribution follows a power law. Pores tend to be anisotropic but rarely crack-shaped, and have a preferred orientation, likely controlled by a pre-existing fabric in the sample. With ongoing dehydration, pores coalesce into a single interconnected pore cluster that is connected to the surface of the sample cylinder and provides an effective drainage pathway. Our observations can be summarised in a model in which gypsum is stabilised by thermal expansion stresses and locally increased pore fluid pressures until the dehydration front approaches to within about 100 μm. The internal stresses are then released and dehydration proceeds efficiently, creating new pore space. Pressure release, the production of pores and the advance of the front are coupled in a feedback loop.
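The square-root fit mentioned above amounts to regressing front position on √t; under a diffusion interpretation the fitted prefactor scales with the square root of an effective diffusivity. A sketch with made-up front positions (not the paper's data):

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative front positions (µm from sample margin) versus time (min).
t = np.array([10, 40, 90, 130, 180, 230, 270, 310], dtype=float)
x = np.array([120, 240, 370, 440, 520, 590, 640, 690], dtype=float)

sqrt_front = lambda t, a: a * np.sqrt(t)      # x(t) = a * sqrt(t)
(a_fit,), _ = curve_fit(sqrt_front, t, x)

# For a diffusion-like process x ~ sqrt(D_eff * t), so a² gives an
# order-of-magnitude effective diffusivity in µm²/min.
print(f"a = {a_fit:.1f} µm/min^0.5, D_eff ≈ {a_fit**2:.0f} µm²/min")
```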

Relevance:

30.00%

Publisher:

Abstract:

This study examined the effects of pre-cooling duration on performance and neuromuscular function during self-paced intermittent-sprint shuttle running in the heat. Eight male team-sport athletes completed two 35-min bouts of intermittent-sprint shuttle running, separated by a 15-min recovery, on three separate occasions (33°C, 34% relative humidity). Mixed-method pre-cooling was applied for 20 min (COOL20) or 10 min (COOL10), or omitted (CONT), and reapplied for 5 min mid-exercise. Performance was assessed via sprint times, percentage decline and shuttle-running distance covered. Maximal voluntary contractions (MVC), voluntary activation (VA) and evoked twitch properties were recorded pre- and post-intervention and mid- and post-exercise. Core temperature (Tc), skin temperature, heart rate, capillary blood metabolites, sweat losses, perceived exertion and thermal stress were monitored throughout. Venous blood drawn pre- and post-exercise was analysed for markers of muscle damage and inflammation. Shuttle-running distance covered increased 5.2 ± 3.3% following COOL20 (P < 0.05), with no differences observed between COOL10 and CONT (P > 0.05). COOL20 aided the maintenance of mid- and post-exercise MVC (P < 0.05; d > 0.80), despite no between-condition differences in VA (P > 0.05). Pre-exercise Tc was reduced by 0.15 ± 0.13°C with COOL20 (P < 0.05; d > 1.10), and remained lower throughout both COOL20 and COOL10 compared with CONT (P < 0.05; d > 0.80). Pre-cooling reduced sweat losses by 0.4 ± 0.3 kg (P < 0.02; d > 1.15), with COOL20 losses 0.2 ± 0.4 kg less than COOL10 (P = 0.19; d = 1.01). Increased pre-cooling duration lowered physiological demands during exercise heat stress and facilitated the maintenance of self-paced intermittent-sprint performance in the heat. Importantly, the dose-response interaction between pre-cooling and sustained neuromuscular responses may explain the improved exercise performance in hot conditions.

Relevance:

30.00%

Publisher:

Abstract:

Objectives: The current study investigated the change in neuromuscular contractile properties following competitive rugby league matches and the relationship with physical match demands.

Design: Eleven trained, male rugby league players participated in 2–3 amateur, competitive matches (n = 30).

Methods: Prior to, immediately (within 15 min) and 2 h post-match, players performed repeated counter-movement jumps (CMJ) followed by isometric tests on the right knee extensors for maximal voluntary contraction (MVC), voluntary activation (VA) and evoked twitch contractile properties: peak twitch force (Pt), rate of torque development (RTD), contraction duration (CD) and relaxation rate (RR). During each match, players wore 1 Hz Global Positioning System (GPS) devices to record match distances and speeds. Further, matches were filmed and underwent notational analysis for the number of total-body collisions.

Results: Total, high-intensity and very-high-intensity distances covered and mean speed were 5585 ± 1078 m, 661 ± 265 m, 216 ± 121 m and 75 ± 14 m min−1, respectively. MVC was significantly reduced immediately and 2 h post-match, by 8 ± 11 and 12 ± 13% from pre-match (p < 0.05). Moreover, twitch contractile properties indicated a suppression of Pt, RTD and RR immediately post-match (p < 0.05). However, VA was not significantly altered from pre-match (90 ± 9%) to immediately post (89 ± 9%) or 2 h post (89 ± 8%) (p > 0.05). Correlation analyses indicated that total playing time (r = −0.50) and mean speed (r = −0.40) were moderately associated with the change in post-match MVC, while mean speed (r = 0.35) was moderately associated with VA.

Conclusions: The present study highlights that the physical demands of competitive amateur rugby league result in disruption of peripheral contractile function, and that post-match voluntary torque suppression may be associated with match playing time and mean speeds.

Relevance:

30.00%

Publisher:

Abstract:

New substation technology, such as non-conventional instrument transformers, and the need to reduce design and construction costs are driving the adoption of Ethernet-based digital process bus networks for high voltage substations. Protection and control applications can share a process bus, making more efficient use of the network infrastructure. This paper classifies and defines performance requirements for the protocols used on a process bus on the basis of application. These include GOOSE, SNMP and IEC 61850-9-2 sampled values. A method, based on the Multiple Spanning Tree Protocol (MSTP) and virtual local area networks, is presented that separates management and monitoring traffic from the rest of the process bus. A quantitative investigation of the interaction between the various protocols used on a process bus is described. These tests also validate the effectiveness of the MSTP-based traffic segregation method. While this paper focuses on a substation automation network, the results are applicable to other real-time industrial networks that implement multiple protocols. High-volume sampled value data and time-critical circuit breaker tripping commands do not interact on a full duplex switched Ethernet network, even under very high network load conditions. This enables an efficient digital network to replace a large number of conventional analog connections between control rooms and high voltage switchyards.
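To give a sense of why sampled values dominate process bus load while GOOSE trips remain timely on full duplex links, here is a back-of-envelope estimate of per-link SV load. The stream rate follows the common 9-2LE practice of 80 samples per nominal 50 Hz cycle; the frame size and overhead figures are assumptions for illustration, not measurements from the paper.

```python
# Assumed 9-2LE-style protection stream: 80 samples per 50 Hz cycle.
SAMPLES_PER_SECOND = 80 * 50      # 4000 frames/s per merging unit
FRAME_BYTES = 125                 # assumed on-wire SV frame size
OVERHEAD_BYTES = 8 + 12           # preamble/SFD plus inter-frame gap

def sv_load_mbps(merging_units: int) -> float:
    """Approximate sampled-value load on one full duplex link, in Mb/s."""
    bytes_per_sec = merging_units * SAMPLES_PER_SECOND * (FRAME_BYTES + OVERHEAD_BYTES)
    return bytes_per_sec * 8 / 1e6

for mu in (1, 4, 8):
    print(f"{mu} merging unit(s): ~{sv_load_mbps(mu):.1f} Mb/s")
```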

Relevance:

30.00%

Publisher:

Abstract:

The decision in Lai v Soineva [2011] QSC 247, concerning the operation of standard conditions in the Queensland REIQ contract, highlights a very practical issue often overlooked in the heat of a transaction. The point is relatively simple. In this instance, the case concerned the interpretation of the printed "Building and Pest Inspection Clause", but it is equally relevant to the printed "Finance Clause" in the same contract, as the wording and principles are identical. It highlights the importance of knowing well what is in the standard contract and not making assumptions. The case also highlights the cost to a party of dithering in making an election in a time-of-the-essence environment.

Relevance:

30.00%

Publisher:

Abstract:

In this paper, a new comprehensive planning methodology is proposed for implementing distribution network reinforcement. Load growth, voltage profile, distribution line loss and reliability are considered in this procedure. A time-segmentation technique is employed to reduce the computational burden. The options considered range from supporting load growth through the traditional approach of upgrading the conventional equipment in the distribution network, through to the use of dispatchable distributed generators (DDG). The objective function comprises construction cost, loss cost and reliability cost. As constraints, the bus voltages and feeder currents must be maintained within standard limits, and the DDG output power must not fall below a given fraction of its rated power, for efficiency reasons. A hybrid optimization method, called modified discrete particle swarm optimization, is employed to solve this nonlinear, discrete optimization problem. A comparison is performed between the solution optimized using capacitor planning together with tap-changing transformers and line upgrading, and the solution obtained when DDGs are included in the optimization.
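The paper's modified discrete particle swarm optimization is not specified in the abstract, so the sketch below shows only the generic discrete-PSO skeleton such a method builds on: continuous velocity updates with positions snapped back to discrete levels, and constraints handled by penalty. The objective is a toy stand-in; the real construction, loss and reliability costs come from a power-flow model not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in objective: construction + loss + reliability cost, with a
# penalty for violating a (toy) operating limit.
def total_cost(x):
    construction = 10.0 * x.sum()
    loss = ((5 - x) ** 2).sum()
    reliability = 3.0 / (1.0 + x.sum())
    penalty = 1e3 * max(0.0, float(x.max()) - 4.0)
    return construction + loss + reliability + penalty

n_var, n_particles, levels = 6, 20, 5   # decision variables take levels 0..5
pos = rng.integers(0, levels + 1, size=(n_particles, n_var)).astype(float)
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([total_cost(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(np.rint(pos + vel), 0, levels)   # snap to discrete levels
    f = np.array([total_cost(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("best discrete plan:", gbest, "cost:", round(pbest_f.min(), 2))
```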

Relevance:

30.00%

Publisher:

Abstract:

Cold water immersion (CWI) is a popular recovery modality, but the actual physiological responses to CWI after exercise in the heat have not been well documented. The purpose of this study was to examine the effects of 20 min of CWI (14°C) on neuromuscular function, rectal (Tre) and skin (Tsk) temperatures, and femoral venous diameter after exercise in the heat. Ten well-trained male cyclists completed two bouts of exercise consisting of 90 min of cycling at a constant power output (216 ± 12 W) followed by a 16.1 km time trial (TT) in the heat (32°C). Twenty-five minutes post-TT, participants were assigned to either the CWI or control (CON) recovery condition in a counterbalanced order. Tre and Tsk were recorded continuously, and maximal voluntary isometric contraction torque of the knee extensors (MVIC), MVIC with superimposed electrical stimulation (SMVIC), and femoral venous diameters were measured prior to exercise and at 0, 45 and 90 min post-TT. Tre was significantly lower in CWI from 50 min post-TT compared with CON, and Tsk was significantly lower in CWI from 25 min post-TT compared with CON. Decreases in MVIC and SMVIC torque after the TT were significantly greater for CWI than for CON; the differences persisted to 90 min post-TT. Femoral vein diameter was approximately 9% smaller for CWI than for CON at 45 min post-TT. These results suggest that CWI decreases Tre but has a negative effect on neuromuscular function.

Relevance:

30.00%

Publisher:

Abstract:

Purpose: In this study we examine neuroretinal function in five amblyopes, shown in previous functional MRI (fMRI) studies to have compromised function of the lateral geniculate nucleus (LGN), to determine whether the fMRI deficit in amblyopia may have its origin at the retinal level.

Methods: We used the slow-flash multifocal ERG (mfERG) and compared averaged five-ring responses of the amblyopic and fellow eyes across a 35 deg field. Central responses were also assessed over a field about 6.3 deg in diameter. We measured central retinal thickness using optical coherence tomography. Central fields were measured using the MP1-Microperimeter, which also assesses ocular fixation during perimetry. MfERG data were compared with fMRI results from a previous study.

Results: Amblyopic eyes had reduced response density amplitudes (first major negative to first positive, N1-P1, responses) for the central and paracentral retina (up to 18 deg diameter) but not for the mid-periphery (from 18 to 35 deg). Retinal thickness was within normal limits for all eyes, and did not differ between amblyopic and fellow eyes. Fixation was maintained within the central 4° more than 80% of the time by four of the five participants; fixation assessed using bivariate contour ellipse areas (BCEA) gave rankings similar to those of the MP-1 system. There was no significant relationship between BCEA and mfERG response for either the amblyopic or the fellow eye. There was no significant relationship between the central mfERG eye response difference and the selective blood oxygen level dependent (BOLD) LGN eye response difference previously seen in these participants.

Conclusions: Retinal responses in amblyopes can be reduced within the central field without an obvious anatomical basis. Additionally, this retinal deficit may not be the reason why the LGN BOLD responses are reduced under amblyopic eye stimulation.

Relevance:

30.00%

Publisher:

Abstract:

Theoretical foundations of higher-order spectral analysis are revisited to examine the use of time-varying bicoherence on non-stationary signals using a classical short-time Fourier approach. A methodology is developed to apply this to evoked EEG responses, where a stimulus-locked time reference is available. Short-time windowed ensembles of the response at the same offset from the reference are treated as ergodic cyclostationary processes within a non-stationary random process. Bicoherence can then be estimated reliably, with known levels at which it differs significantly from zero, and can be tracked as a function of offset from the stimulus. When this methodology is applied to multi-channel EEG, it is possible to obtain information about phase synchronisation at different regions of the brain as the neural response develops. The methodology is applied to analyse the evoked EEG response to flash visual stimuli presented to the left and right eyes separately. The EEG electrode array is segmented based on the evolution of bicoherence with time, using the mean absolute difference as a measure of dissimilarity. Segment maps confirm the importance of the occipital region in visual processing and demonstrate a link between the frontal and occipital regions during the response. Maps are constructed using bicoherence at bifrequencies that include the alpha-band frequency of 8 Hz as well as 4 and 20 Hz. Differences are observed between responses from the left eye and the right eye, and also between subjects. The methodology shows potential as a neurological functional imaging technique that can be further developed for diagnosis and monitoring using scalp EEG, which is less invasive and less expensive than magnetic resonance imaging.
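A minimal numerical sketch of the standard ensemble bicoherence estimator that this kind of methodology builds on, applied to stimulus-locked trial windows. The synthetic signal contains quadratic phase coupling at 8 Hz + 12 Hz → 20 Hz, so the estimate should approach one; all parameters here are illustrative, not the paper's.

```python
import numpy as np

def bicoherence(trials, f1, f2, fs, nfft=256):
    """Ensemble bicoherence at bifrequency (f1, f2); `trials` is an
    (n_trials x n_samples) array of stimulus-locked windows."""
    X = np.fft.rfft(trials * np.hanning(trials.shape[1]), n=nfft, axis=1)
    freqs = np.fft.rfftfreq(nfft, 1.0 / fs)
    i1 = int(np.argmin(np.abs(freqs - f1)))
    i2 = int(np.argmin(np.abs(freqs - f2)))
    triple = X[:, i1] * X[:, i2] * np.conj(X[:, i1 + i2])
    num = np.abs(triple.mean()) ** 2
    den = (np.abs(X[:, i1] * X[:, i2]) ** 2).mean() * (np.abs(X[:, i1 + i2]) ** 2).mean()
    return num / den

# Synthetic ensemble with quadratic phase coupling: 8 Hz + 12 Hz -> 20 Hz.
fs, n, n_trials = 256, 256, 200
t = np.arange(n) / fs
rng = np.random.default_rng(1)
ph = rng.uniform(0, 2 * np.pi, (n_trials, 2))
sig = (np.cos(2 * np.pi * 8 * t + ph[:, :1])
       + np.cos(2 * np.pi * 12 * t + ph[:, 1:])
       + 0.5 * np.cos(2 * np.pi * 20 * t + ph.sum(axis=1, keepdims=True))
       + 0.3 * rng.standard_normal((n_trials, n)))
print(f"bicoherence(8, 12) = {bicoherence(sig, 8, 12, fs):.2f}")  # near 1 when coupled
```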

Relevance:

30.00%

Publisher:

Abstract:

An optical system that performs the multiplication of binary numbers is described, and proof-of-principle experiments are performed. The simultaneous generation of all partial products, optical regrouping of bit products, and optical carry look-ahead addition are novel features of the proposed scheme, which takes advantage of the parallel operation capability of optical computers. The proposed processor uses liquid crystal light valves (LCLVs). By space-sharing the LCLVs, one such system could function as an array of multipliers. Together with the optical carry look-ahead adders described, this would constitute an optical matrix-vector multiplier.
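A software analogue of the arithmetic being implemented may clarify the three stages (simultaneous partial-product generation, regrouping of bit products, and carry resolution via generate/propagate terms as in a carry look-ahead adder). This is a sketch of the digital algorithm only, not of the optical hardware:

```python
def multiply(a: int, b: int) -> int:
    # Stage 1: all partial products "at once" (a ANDed with each bit of b,
    # shifted into place).
    partials = [a << i for i in range(b.bit_length()) if (b >> i) & 1]
    # Stages 2-3: regroup bit products and resolve carries with
    # generate (g = acc AND pp) and propagate (p = acc XOR pp) vectors.
    acc = 0
    for pp in partials:
        while pp:
            g, p = acc & pp, acc ^ pp
            acc, pp = p, g << 1     # carries shift left and re-enter
    return acc

assert multiply(13, 11) == 143
```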

Relevance:

30.00%

Publisher:

Abstract:

The paper examines the impact of the introduction of no-fault divorce legislation in Australia. The approach used is rather novel: a hazard model of the divorce rate is estimated, with the role of legislation captured via a time-varying covariate. The paper concludes that, contrary to US empirical evidence, no-fault divorce legislation appears to have had a positive impact upon the divorce rate in Australia.
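A hazard model with a time-varying legislation dummy can be estimated along these lines. This sketch uses the lifelines library and entirely made-up marriage-spell data in the long (start/stop) format; the column names and numbers are illustrative only, not the paper's specification.

```python
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# One row per spell segment; `no_fault` switches to 1 once the legislation
# is in force, and `divorced` marks an event at the segment's end.
spells = pd.DataFrame({
    "id":       [1, 1, 2, 3, 3, 4, 5, 5],
    "start":    [0, 5, 0, 0, 5, 0, 0, 5],
    "stop":     [5, 9, 7, 5, 12, 4, 5, 10],
    "no_fault": [0, 1, 0, 0, 1, 0, 0, 1],
    "divorced": [0, 1, 1, 0, 0, 1, 0, 1],
})

ctv = CoxTimeVaryingFitter()
ctv.fit(spells, id_col="id", event_col="divorced",
        start_col="start", stop_col="stop")
ctv.print_summary()  # the sign of the `no_fault` coefficient is the object of interest
```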

Relevance:

30.00%

Publisher:

Abstract:

The contemporary working environment is being rapidly reshaped by technological, industrial and political forces. Increased global competitiveness and an emphasis on productivity have led to the appearance of alternative methods of employment, such as part-time, casual and itinerant work, allowing greater flexibility. This allows for the development of a core permanent staff and the simultaneous utilisation of casual staff according to business needs. Flexible workers across industries are generally referred to as the non-standard workforce, and full-time permanent workers as the standard workforce. Even though labour flexibility favours the employer, the increased opportunity for flexible work has been embraced by women for many reasons, including the gender struggle for greater economic independence and social equality. Consequently, the largely female nursing industry, both nationally and internationally, has been caught up in this wave of change. This ageing workforce has been at the forefront of the push for flexibility, with recent figures showing almost half the nursing workforce employed in a non-standard capacity. In part, this has allowed women to fulfil caring roles outside their work, to ease off nearing retirement and to supplement the family income. More significantly, however, flexibility has developed as an economic management initiative, as a strategy for cost constraint. The result has been the development of a dual workforce and, as suggested by Pocock, Buchanan and Campbell (2004), associated deep-seated resentment and the marginalisation of part-time and casual workers by their full-time colleagues and managers. Additionally, as nursing currently faces serious recruitment and retention problems, there is an urgent need to understand the factors underlying the present discontent in the nursing profession. There is an identified gap in nursing knowledge surrounding the issues relating to recruitment and retention.

Communication involves speaking, listening, reading and writing, and is an interactive process that is central to the lives of humans. Workplace communication refers to human interaction, information technology, and multimedia and print. It is the means of relationship building between workers, management and their external environment, and is critical to organisational effectiveness. Communication and language are integral to nursing performance (Hall, 2005); in a twenty-four-hour service, however, increasing fragmentation due to part-time and casual work in the nursing industry means that effective communication management has become increasingly difficult. More broadly, it is known that disruption to communication systems impacts negatively on consumer outcomes.

Because of this gap in understanding how nurses view their contemporary nursing world, an interpretative ethnographic study, which progressed to a critical ethnographic study based on the conceptual frameworks of constructionism and interpretivism, was used. The study site was a division within an acute health care facility, and the relationship between the increasing casualisation of the nursing workforce and the experiences of communication of standard and non-standard nurses was explored. For this study, full-time standard nurses were those employed to work in a specific unit for forty hours per week. Non-standard nurses were those employed part-time in specific units, or those employed to work as relief pool nurses to cover shift shortfalls where needed. Nurses employed by external agencies but required to fill in for shifts at the facility were excluded from this research.

This study involved an analysis of observational, interview and focus group data from standard and non-standard nurses within this facility. Three analytical findings (the organisation of nursing work; constructing the casual nurse as other; and the function of space) situate communication within a broader discussion about non-standard work and organisational culture. The study results suggest that a significant culture of marginalisation exists for nurses who work in a non-standard capacity, that this affects communication for nurses, and that it has implications for the quality of patient care. The discussion draws on the seven elements of marginalisation described by Hall, Stephen and Melius (1994). It argues that these elements underpin a culture which retains remnants of the historically gendered stereotype of "the good nurse", and that these cultural values contribute to practices and behaviour which marginalise all nurses, particularly those who work less than full-time. Gender inequality is argued to be at the heart of marginalising practices because of the long-standing subordination of nurses by the powerful medical profession, paralleling the historical subordination of women in society. This has denied nurses adequate representation and voice in decision making. The new knowledge emanating from this study extends current knowledge of the factors surrounding recruitment and retention and, as such, contributes to an understanding of the current and complex nursing environment.