919 results for Scale Sensitive Loss Function
Abstract:
Hadron therapy is a promising technique to treat deep-seated tumors. For accurate treatment planning, the energy deposition in soft and hard human tissue must be well known. Water has usually been employed as a phantom for soft tissues, but other biomaterials, such as hydroxyapatite (HAp), used as a bone substitute, are also relevant as phantoms for hard tissues. The stopping power of HAp for H+ and He+ beams has been studied experimentally and theoretically. The measurements have been done using the Rutherford backscattering technique in an energy range of 450-2000 keV for H+ and of 400-5000 keV for He+ projectiles. The theoretical calculations are based on the dielectric formulation together with the MELF-GOS (Mermin Energy-Loss Function – Generalized Oscillator Strengths) method [1] to describe the target excitation spectrum. Quite good agreement between the experimental data and the theoretical results has been found. The depth-dose profile of H+ and He+ ion beams in HAp has been simulated by the SEICS (Simulation of Energetic Ions and Clusters through Solids) code [2], which incorporates the electronic stopping force due to the energy loss by collisions with the target electrons, including fluctuations due to energy-loss straggling, the multiple elastic scattering with the target nuclei, with their corresponding nuclear energy loss, and the dynamical charge-exchange processes in the projectile charge state. The energy deposition by H+ and He+ as a function of depth is compared, at several projectile energies, for HAp and liquid water, showing important differences.
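For context, a standard form of the electronic stopping power in the dielectric formulation (a textbook expression, not quoted from the paper), on which MELF-GOS calculations are built, is

$$ S_e(v) = \frac{2\,Z_1^{2} e^{2}}{\pi v^{2}} \int_0^{\infty}\frac{\mathrm{d}k}{k} \int_0^{k v}\mathrm{d}\omega\,\omega\, \operatorname{Im}\!\left[\frac{-1}{\epsilon(k,\omega)}\right] $$

where Z_1 e and v are the projectile charge and velocity and Im[−1/ε(k,ω)] is the energy loss function of the target; in the MELF-GOS method this ELF is constructed from Mermin-type ELFs fitted to optical data for the outer electrons plus generalized oscillator strengths for the inner shells.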
Abstract:
The aim of this blinded, randomised, prospective clinical trial was to determine whether the addition of magnesium sulphate to spinally-administered ropivacaine would improve peri-operative analgesia without impairing motor function in dogs undergoing orthopaedic surgery. Twenty client-owned dogs undergoing tibial plateau levelling osteotomy were randomly assigned to one of two treatment groups: group C (control, receiving hyperbaric ropivacaine by the spinal route) or group M (magnesium, receiving a hyperbaric combination of magnesium sulphate and ropivacaine by the spinal route). During surgery, changes in physiological variables above baseline were used to evaluate nociception. Arterial blood was collected before and after spinal injection, at four time points, to monitor plasma magnesium concentrations. Post-operatively, pain was assessed with a modified Sammarco pain score, a Glasgow pain scale and a visual analogue scale, while motor function was evaluated with a modified Tarlov scale. Assessments were performed at recovery and 1, 2 and 3 h thereafter. Fentanyl and buprenorphine were administered as rescue analgesics in the intra- and post-operative periods, respectively. Plasma magnesium concentrations did not increase after spinal injection compared to baseline. Group M required less intra-operative fentanyl, had lower Glasgow pain scores and experienced analgesia of longer duration than group C (527.0 ± 341.0 min vs. 176.0 ± 109.0 min). However, in group M the motor block was significantly longer, which limits the usefulness of magnesium for spinal analgesia at the investigated dose. Further research is needed to determine a clinically effective dose with shorter duration of motor block for magnesium used as an additive to spinal analgesic agents.
Abstract:
This paper shows that optimal policy and consistent policy outcomes require the use of control-theory and game-theory solution techniques. While optimal policy and consistent policy often produce different outcomes even in a one-period model, we analyze consistent policy and its outcome in a simple model, finding that the cause of the inconsistency with optimal policy traces to inconsistent targets in the social loss function. As a result, the central bank should adopt a loss function that differs from the social loss function. Carefully designing the central bank's loss function with consistent targets can harmonize optimal and consistent policy. This desirable result emerges from two observations. First, the social loss function reflects a normative process that does not necessarily prove consistent with the structure of the microeconomy. Thus, the social loss function cannot serve as a direct loss function for the central bank. Second, an optimal loss function for the central bank must depend on the structure of that microeconomy. In addition, this paper shows that control theory provides a benchmark for institution design in a game-theoretic framework.
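As an illustration (a standard textbook specification, not necessarily the paper's own), the social loss function in this literature is typically quadratic in inflation and the output gap,

$$ L^{S} = (\pi - \pi^{*})^{2} + \lambda\,(y - y^{*})^{2}, $$

and the familiar inconsistency arises when the output target y* is set above the level the structure of the economy can deliver, so that discretionary policy produces an inflation bias; assigning the central bank a loss function with consistent targets (or a different weight λ) removes that bias.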
Abstract:
This paper shows that optimal policy and consistent policy outcomes require the use of control-theory and game-theory solution techniques. While optimal policy and consistent policy often produce different outcomes even in a one-period model, we analyze consistent policy and its outcome in a simple model, finding that the cause of the inconsistency with optimal policy traces to inconsistent targets in the social loss function. As a result, the social loss function cannot serve as a direct loss function for the central bank. Accordingly, we employ implementation theory to design a central bank loss function (mechanism design) with consistent targets, while the social loss function serves as a social welfare criterion. That is, with the correct mechanism design for the central bank loss function, optimal policy and consistent policy become identical. In other words, optimal policy proves implementable (consistent).
Abstract:
This paper examines four equivalent methods of optimal monetary policymaking: committing to the social loss function, using discretion with the central bank's long-run loss function, using discretion with the central bank's short-run loss function, and following monetary policy rules. All lead to optimal economic performance. The same performance emerges from these different policymaking methods because the central bank actually follows the same (similar) policy rules. These objectives (the social loss function and the central bank's long-run and short-run loss functions) and the monetary policy rules imply a complete regime for optimal policymaking. The central bank's long-run and short-run loss functions that produce the optimal policy with discretion differ from the social loss function. Moreover, the optimal policy rule emerges from the optimization of these different central bank loss functions.
Abstract:
We re-evaluate the Greenland mass balance for the recent period using low-pass Independent Component Analysis (ICA) post-processing of the Level-2 GRACE data (2002-2010) from different official providers (UTCSR, JPL, GFZ) and confirm the important present-day ice mass loss of this ice sheet, in the range of -70 to -90 Gt/y, due to negative contributions of the glaciers on the east coast. We highlight the high interannual variability of mass variations of the Greenland Ice Sheet (GrIS), especially the recent deceleration of ice loss in 2009-2010, once seasonal cycles are robustly removed by Seasonal Trend Loess (STL) decomposition. Interannual variability leads to varying trend estimates depending on the considered time span. Correction of post-glacial rebound effects on ice mass trend estimates represents no more than 8 Gt/y over the whole ice sheet. We also investigate possible climatic causes that can explain these interannual ice mass variations, as strong correlations are established between the GRACE-based mass balance and atmosphere/ocean parameters: (1) changes in snow accumulation, and (2) the influence of inputs of warm ocean water that periodically accelerate the calving of glaciers in coastal regions, together with feedback effects of coastal water cooling by freshwater currents from glacier melting. These results suggest that the Greenland mass balance is driven by coastal sea surface temperature at time scales shorter than accumulation.
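As a sketch of how such a seasonal removal might look in practice (illustrative only; the file and column names are hypothetical and the authors' own processing chain is not reproduced here):

# Illustrative sketch: remove the seasonal cycle from a monthly mass-anomaly
# series with a robust STL decomposition before inspecting interannual trends.
# The file name and column names below are hypothetical.
import pandas as pd
from statsmodels.tsa.seasonal import STL

ts = pd.read_csv("greenland_mass_anomaly.csv", parse_dates=["date"], index_col="date")
series = ts["mass_gt"].asfreq("MS")                   # monthly GRACE-derived mass anomaly, in Gt

result = STL(series, period=12, robust=True).fit()    # robust loess limits outlier leverage
deseasonalised = series - result.seasonal             # interannual signal plus residual noise
print(deseasonalised.describe())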
Abstract:
We have studied the radial dependence of the energy deposition of the secondary electrons generated by swift proton beams incident with energies T = 50 keV–5 MeV on poly(methyl methacrylate) (PMMA). Two different approaches have been used to model the electronic excitation spectrum of PMMA through its energy loss function (ELF), namely the extended-Drude ELF and the Mermin ELF. The singly differential cross section and the total cross section for ionization, as well as the average energy of the generated secondary electrons, show sizeable differences at T ⩽ 0.1 MeV when evaluated with these two ELF models. In order to know the radial distribution, around the proton track, of the energy deposited by the cascade of secondary electrons, a simulation has been performed that follows the motion of the electrons through the target, taking into account both the inelastic interactions (electronic ionizations and excitations, as well as electron–phonon interactions and electron trapping by polaron creation) and the elastic interactions. The radial distribution of the energy deposited by the secondary electrons around the proton track shows notable differences between the simulations performed with the extended-Drude ELF and the Mermin ELF, the former being more spread out (and therefore less peaked) than the latter. The highest intensity and sharpness of the deposited energy distributions occur for proton beams incident with T ~ 0.1–1 MeV. We have also studied the influence on the radial distribution of deposited energy of using the full energy distribution of secondary electrons generated by proton impact or using a single value (namely, the average value of the distribution); our results show that the differences between the two simulations become important for proton energies larger than ~0.1 MeV. The results presented in this work have potential applications in materials science, as well as in hadron therapy (due to the use of PMMA as a tissue phantom), in order to properly account for the generation of electrons by proton beams and their subsequent transport and energy deposition through the target on nanometric scales.
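For reference, the Mermin ELF is obtained from the Mermin dielectric function (a standard expression, quoted here only for context), which adds a finite damping 1/τ to the Lindhard function ε_L in a particle-number-conserving way:

$$ \epsilon_{M}(k,\omega) = 1 + \frac{\bigl(1 + i/(\omega\tau)\bigr)\,\bigl[\epsilon_{L}(k,\omega + i/\tau) - 1\bigr]}{1 + \dfrac{i}{\omega\tau}\,\dfrac{\epsilon_{L}(k,\omega + i/\tau) - 1}{\epsilon_{L}(k,0) - 1}} $$

The extended-Drude approach instead extrapolates a set of damped Drude oscillators fitted to optical data to finite momentum transfer.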
Abstract:
A statistical functional, such as the mean or the median, is called elicitable if there is a scoring function or loss function such that the correct forecast of the functional is the unique minimizer of the expected score. Such scoring functions are called strictly consistent for the functional. The elicitability of a functional opens the possibility of comparing competing forecasts and ranking them in terms of their realized scores. In this paper, we explore the notion of elicitability for multi-dimensional functionals and give both necessary and sufficient conditions for strictly consistent scoring functions. We cover the case of functionals with elicitable components, but we also show that one-dimensional functionals that are not elicitable can be a component of a higher-order elicitable functional. In the case of the variance, this is a known result. However, an important result of this paper is that spectral risk measures with a spectral measure with finite support are jointly elicitable if one adds the “correct” quantiles. A direct consequence of applied interest is that the pair (Value at Risk, Expected Shortfall) is jointly elicitable under mild conditions that are usually fulfilled in risk management applications.
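Two classical one-dimensional examples (standard results, stated here only for orientation): the squared error is strictly consistent for the mean and the piecewise-linear (pinball) score is strictly consistent for the α-quantile,

$$ S_{\mathrm{mean}}(v, x) = (v - x)^{2}, \qquad S_{\alpha}(v, x) = \bigl(\mathbb{1}\{x \le v\} - \alpha\bigr)(v - x). $$

The variance, by contrast, is not elicitable on its own but is a component of the jointly elicitable pair (mean, variance), the prototype of the phenomenon the paper generalizes.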
Abstract:
The main purpose of this paper is to propose a methodology to obtain a hedge fund tail risk measure. Our measure builds on the methodologies proposed by Almeida and Garcia (2015) and Almeida, Ardison, Garcia, and Vicente (2016), which rely on solving dual minimization problems of Cressie-Read discrepancy functions in spaces of probability measures. Due to the recently documented robustness of the Hellinger estimator (Kitamura et al., 2013), we adopt this specific discrepancy, within the Cressie-Read family, as our loss function. From this choice, we derive a minimum-Hellinger risk-neutral measure that correctly prices an observed panel of hedge fund returns. The estimated risk-neutral measure is used to construct our tail risk measure by pricing synthetic out-of-the-money put options on hedge fund returns of ten specific categories. We provide a detailed description of our methodology, extract the aggregate tail risk hedge fund factor for Brazilian funds, and, as a by-product, a set of individual tail risk factors for each specific hedge fund category.
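For orientation (one common parameterisation, not necessarily the exact one used in the paper), the Cressie-Read power-divergence family between densities p and q can be written as

$$ I_{\gamma}(p \,\|\, q) = \frac{1}{\gamma(\gamma+1)} \int \left[\left(\frac{p}{q}\right)^{\!\gamma} - 1\right] p \,\mathrm{d}\mu , $$

with the Kullback-Leibler divergence recovered in the limit γ → 0 and the Hellinger case at γ = −1/2, where the expression reduces to a multiple of the squared Hellinger distance ∫(√p − √q)² dμ.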
Abstract:
A family of measurements of generalisation is proposed for estimators of continuous distributions. In particular, they apply to neural network learning rules associated with continuous neural networks. The optimal estimators (learning rules) in this sense are Bayesian decision methods with information divergence as the loss function. The Bayesian framework guarantees internal coherence of such measurements, while the information geometric loss function guarantees invariance. The theoretical solution for the optimal estimator is derived by a variational method. It is applied to the family of Gaussian distributions and the implications are discussed. This is one in a series of technical reports on this topic; it generalises the results of [Zhu95:prob.discrete] to continuous distributions and serves as a concrete example of a larger picture [Zhu95:generalisation].
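The information divergence referred to here is, in its standard form (quoted for context, not from the report itself), the Kullback-Leibler divergence

$$ D(p \,\|\, q) = \int p(x)\,\log\frac{p(x)}{q(x)}\,\mathrm{d}x , $$

whose value is unchanged under any smooth invertible reparameterisation of x; this is the kind of invariance an information-geometric loss function provides.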
Abstract:
Control design for stochastic uncertain nonlinear systems is traditionally based on minimizing the expected value of a suitably chosen loss function. Moreover, most control methods usually assume the certainty equivalence principle to simplify the problem and make it computationally tractable. We offer an improved probabilistic framework which is not constrained by these previous assumptions and provides a more natural setting for incorporating and dealing with uncertainty. The focus of this paper is on developing this framework to obtain an optimal control strategy using a fully probabilistic approach for information extraction from process data, which does not require detailed knowledge of the system dynamics. Moreover, the proposed framework allows the problem of input-dependent noise to be handled. A basic paradigm is proposed and the resulting algorithm is discussed. The proposed probabilistic control method applies to the general class of nonlinear discrete-time systems and is demonstrated theoretically on the affine class. A nonlinear simulation example is also provided to validate the theoretical development.
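In fully probabilistic design as commonly formulated (for example by Kárný and co-workers; stated here only as background, not as this paper's exact objective), the randomised control law c is chosen to bring the closed-loop joint distribution of data and inputs as close as possible, in the Kullback-Leibler sense, to a designer-specified ideal distribution:

$$ c^{\ast} = \arg\min_{c}\, D\bigl( f_{c}(\mathrm{data}, \mathrm{inputs}) \,\big\|\, f^{I}(\mathrm{data}, \mathrm{inputs}) \bigr). $$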
Abstract:
Direct quantile regression involves estimating a given quantile of a response variable as a function of input variables. We present a new framework for direct quantile regression in which a Gaussian process model is learned by minimising the expected tilted loss function. The integration required in learning is not analytically tractable, so to speed up the learning we employ the Expectation Propagation algorithm. We describe how this work relates to other quantile regression methods and apply the method to both synthetic and real data sets. The method is shown to be competitive with state-of-the-art methods while allowing the full Gaussian process probabilistic framework to be leveraged.
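The tilted (pinball) loss at quantile level τ, and the fact that its expectation is minimised at the τ-quantile, can be checked with a short sketch (illustrative only; this is not the paper's Gaussian process or Expectation Propagation machinery):

# Minimal check that the mean tilted (pinball) loss of a constant prediction
# is minimised near the empirical tau-quantile; purely illustrative.
import numpy as np

def tilted_loss(y, f, tau):
    """Mean pinball loss of a constant prediction f at quantile level tau."""
    e = y - f
    return np.mean(np.maximum(tau * e, (tau - 1.0) * e))

rng = np.random.default_rng(0)
y = rng.normal(size=10_000)
tau = 0.9

grid = np.linspace(-3.0, 3.0, 601)
best = grid[np.argmin([tilted_loss(y, f, tau) for f in grid])]
print(best, np.quantile(y, tau))   # the two values should nearly coincide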
Abstract:
2000 Mathematics Subject Classification: 62L10, 62L15.
Abstract:
This work concerns a refinement of a suboptimal dual controller for discrete-time systems with stochastic parameters. The dual property means that the control signal is chosen so that estimation of the model parameters and regulation of the output signals are optimally balanced. The control signal is computed so as to minimize the variance of the output around a reference value one step ahead, with additional terms included in the loss function. The idea is to add simple terms depending on the covariance matrix of the parameter estimates two steps ahead. An algorithm is used for the adaptive adjustment of the parameter lambda at each step. The actual performance of the proposed controller is evaluated through Monte Carlo simulations.
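One plausible reading of the modified loss (an illustrative form only, not the thesis's exact expression) is a one-step-ahead output-variance term augmented by a scalar measure g(·) of the two-step-ahead parameter covariance P_{t+2}, weighted by the adaptively adjusted λ:

$$ J_{t} = \mathbb{E}\!\left[\bigl(y_{t+1} - y^{\mathrm{ref}}\bigr)^{2} \,\middle|\, \mathcal{I}_{t}\right] + \lambda\, g\!\left(P_{t+2}\right), $$

with the extra term steering the controller toward inputs that also reduce future parameter uncertainty.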
Abstract:
The exploration and development of oil and gas reserves located in harsh offshore environments are characterized by high risk. Some of these reserves would be uneconomical if produced using conventional drilling technology, due to increased drilling problems and prolonged non-productive time. Seeking new ways to reduce drilling cost and minimize risks has led to the development of Managed Pressure Drilling techniques. Managed pressure drilling methods address the drawbacks of conventional overbalanced and underbalanced drilling techniques. As managed pressure drilling techniques are evolving, there are many unanswered questions related to safety and operating pressure regimes. Quantitative risk assessment techniques are often used to answer these questions. Quantitative risk assessment is conducted for the various stages of drilling operations: drilling ahead, tripping, casing and cementing. A diagnostic model for analyzing the rotating control device, the main component of managed pressure drilling techniques, is also studied. The Noisy-OR logic concept is explored to capture the unique relationship between casing and cementing operations in leading to well integrity failure, and is also used to model the critical components of the constant bottom-hole pressure technique of managed pressure drilling during tripping operations. Relevant safety functions and inherent safety principles are utilized to improve well integrity operations. A loss function modelling approach enabling dynamic consequence analysis is adopted to study blowout risk for real-time decision making. The aggregation of the blowout loss categories (production, asset, human health, environmental response and reputation losses) leads to risk estimation using a dynamically determined probability of occurrence. Lastly, the various sub-models developed for the stages/sub-operations of drilling operations and the consequence modelling approach are integrated for a holistic risk analysis of drilling operations.
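As a minimal sketch of the Noisy-OR gate mentioned above (purely illustrative; the probabilities and cause names are hypothetical, not values from the thesis):

# Noisy-OR gate: each active cause produces the failure event independently
# with its own probability; a leak term covers causes not modelled explicitly.
# All numbers below are hypothetical, for illustration only.
def noisy_or(cause_probs, cause_active, leak=0.0):
    p_no_failure = 1.0 - leak
    for p, active in zip(cause_probs, cause_active):
        if active:
            p_no_failure *= 1.0 - p
    return 1.0 - p_no_failure

# e.g. two contributing causes (say, a casing defect and a poor cement bond) both present:
print(noisy_or([0.05, 0.10], [True, True], leak=0.01))   # ≈ 0.154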