975 results for profitability calculation
Abstract:
The farm-gate value of extensive beef production from the northern Gulf region of Queensland, Australia, is ~$150 million annually. Poor profitability and declining equity are common issues for most beef businesses in the region. The beef industry relies primarily on native pasture systems, and studies continue to report a decline in the condition and productivity of important land types in the region. Governments and Natural Resource Management groups are investing significant resources to restore landscape health and productivity. Fundamental community expectations also include broader environmental outcomes, such as reducing beef industry greenhouse gas emissions. Whole-of-business analysis results are presented from 18 extensive beef businesses (producers) to highlight the complex social and economic drivers of management decisions that affect the natural resource base and the environment. Business analysis activities also focused on improving enterprise performance. Profitability, herd performance and greenhouse emission benchmarks are documented and discussed.
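As a worked illustration of the kind of whole-of-business profitability benchmark such an analysis reports, here is a minimal sketch; the dollar figures and benchmark definitions are hypothetical placeholders, not taken from the study.

```python
# Minimal sketch of two common whole-of-business profitability benchmarks.
# All dollar figures below are hypothetical placeholders.

def operating_profit(gross_income, direct_costs, overhead_costs, imputed_labour):
    """Earnings before interest and tax, after an allowance for unpaid labour."""
    return gross_income - direct_costs - overhead_costs - imputed_labour

def return_on_assets(profit, total_assets_managed):
    """Operating profit as a fraction of total assets managed."""
    return profit / total_assets_managed

income = 850_000     # annual gross income ($)
direct = 320_000     # herd and pasture costs ($)
overheads = 260_000  # fixed costs ($)
labour = 120_000     # imputed owner labour ($)
assets = 6_500_000   # land, livestock and plant ($)

profit = operating_profit(income, direct, overheads, labour)
print(f"Operating profit: ${profit:,.0f}")
print(f"Return on assets: {return_on_assets(profit, assets):.1%}")
```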
Abstract:
Background: Among other causes, the long-term outcome of hip prostheses in dogs is determined by aseptic loosening. Prosthesis complications can be prevented by optimizing the tribological system, which ultimately results in improved implant longevity. In this context, a computerized model for calculating hip joint loads during different motions would be of benefit. As a first step in the development of such an inverse dynamic multi-body simulation (MBS) model, we present the setup of a canine hind limb model applicable to the calculation of ground reaction forces. Methods: The anatomical geometries of the MBS model were established using computed tomography (CT) and magnetic resonance imaging (MRI) data. The CT data were collected from the pelvis, femora, tibiae and pads of a mixed-breed adult dog. Geometric information about 22 muscles of the pelvic extremity of 4 mixed-breed adult dogs was determined using MRI. Kinematic and kinetic data obtained by motion analysis of a clinically healthy dog during a gait cycle (1 m/s) on an instrumented treadmill were used to drive the model in the multi-body simulation. Results and Discussion: The vertical ground reaction forces (z-direction) calculated by the MBS system show a maximum deviation from the treadmill measurements of 1.75%BW for the left and 4.65%BW for the right hind limb. The calculated peak ground reaction forces in the z- and y-directions were comparable to the treadmill measurements, whereas the curve characteristics of the forces in the y-direction were not in complete alignment. Conclusion: The developed MBS model is suitable for simulating ground reaction forces of dogs during walking. In forthcoming investigations, the model will be developed further for the calculation of forces and moments acting on the hip joint during different movements, which can aid the in silico development and testing of hip prostheses.
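A sketch of the validation step described here, comparing a simulated vertical ground reaction force curve against a treadmill measurement and reporting the peak deviation in percent body weight (%BW); the two force curves below are synthetic stand-ins, not the study's data.

```python
import numpy as np

# Illustrative comparison of simulated vs. measured vertical ground reaction
# forces over one normalized gait cycle, in percent body weight (%BW).
t = np.linspace(0.0, 1.0, 101)                      # normalized gait cycle
measured = 60.0 * np.sin(np.pi * t) ** 2            # synthetic treadmill GRF (%BW)
simulated = measured + 1.5 * np.sin(2 * np.pi * t)  # synthetic MBS output with small error

max_dev = np.max(np.abs(simulated - measured))      # peak deviation (%BW)
print(f"Maximum deviation: {max_dev:.2f} %BW")
```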
Abstract:
Background: Many acute stroke trials have given neutral results. Sub-optimal statistical analyses may be failing to detect efficacy. Methods that take account of the ordinal nature of functional outcome data are more efficient. We compare sample size calculations for dichotomous and ordinal outcomes for use in stroke trials. Methods: Data from stroke trials studying the effects of interventions known to positively or negatively alter functional outcome (Rankin Scale and Barthel Index) were assessed. Sample size was calculated using comparisons of proportions, means, medians (according to Payne), and ordinal data (according to Whitehead). The sample sizes obtained from each method were compared using Friedman two-way ANOVA. Results: Fifty-five comparisons (54 173 patients) of active vs. control treatment were assessed. Estimated sample sizes differed significantly depending on the method of calculation (P < 0.00001). The ordering of the methods showed that the ordinal method of Whitehead and the comparison of means produced significantly lower sample sizes than the other methods. The ordinal data method on average reduced sample size by 28% (inter-quartile range 14–53%) compared with the comparison of proportions; however, a 22% increase in sample size was seen with the ordinal method for trials assessing thrombolysis. The comparison-of-medians method of Payne gave the largest sample sizes. Conclusions: Choosing an ordinal rather than binary method of analysis allows most trials to be, on average, smaller by approximately 28% for a given statistical power. Smaller trial sample sizes may help by reducing time to completion, complexity, and financial expense. However, ordinal methods may not be optimal for interventions which both improve functional outcome …
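To give the flavor of the ordinal calculation, here is a sketch of the commonly quoted form of Whitehead's (1993) sample-size formula under a proportional-odds alternative, n per group = 6(z_{1-α/2} + z_{1-β})² / [(log OR)² (1 − Σ p̄ᵢ³)], where p̄ᵢ are the mean category proportions across both arms. The modified Rankin Scale distribution and odds ratio below are illustrative, not the paper's data.

```python
import numpy as np
from scipy.stats import norm

def whitehead_n_per_group(p_control, odds_ratio, alpha=0.05, power=0.9):
    """Per-group sample size for an ordinal outcome (proportional-odds model)."""
    p_control = np.asarray(p_control, dtype=float)
    qc = np.cumsum(p_control)                            # cumulative control proportions
    qt = odds_ratio * qc / (1 + (odds_ratio - 1) * qc)   # proportional-odds shift
    p_treat = np.diff(qt, prepend=0.0)                   # treatment category proportions
    p_bar = (p_control + p_treat) / 2                    # mean category proportions
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 6 * z**2 / (np.log(odds_ratio)**2 * (1 - np.sum(p_bar**3)))

# Hypothetical 7-category modified Rankin Scale distribution in the control arm:
p_ctrl = [0.05, 0.10, 0.15, 0.20, 0.25, 0.15, 0.10]
print(round(whitehead_n_per_group(p_ctrl, odds_ratio=1.5)))  # per-group n
```

For a two-category outcome the formula reduces to the familiar binary sample-size calculation on the log-odds scale, which is why the comparison with dichotomized analyses is direct.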
Abstract:
This paper aims, first, to show the effect of Entrepreneurial Orientation (EO) on SMEs' financial performance and, second, to propose a contingency model that explores the moderating effect of environmental hostility on the EO–financial performance relationship. To examine the research hypotheses, a sample of 121 manufacturing SMEs located in Catalonia, Spain, was used. The results confirm a positive EO–financial performance relationship and suggest that the relationship is stronger when there is a fit between the EO and the environment. Finally, the academic and entrepreneurial implications related to EO and the SME environment are presented and discussed.
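The moderation test described here is typically run as a regression with an interaction term. A minimal sketch on simulated data follows; the variable names and coefficients are illustrative, not the study's measures.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data standing in for EO, environmental hostility, and performance.
rng = np.random.default_rng(0)
n = 121  # matches the study's sample of 121 SMEs
eo = rng.normal(size=n)
hostility = rng.normal(size=n)
performance = 0.4 * eo - 0.2 * hostility - 0.3 * eo * hostility + rng.normal(size=n)

df = pd.DataFrame({"perf": performance, "eo": eo, "host": hostility})
# "eo * host" expands to both main effects plus the interaction term,
# whose coefficient carries the moderation hypothesis.
model = smf.ols("perf ~ eo * host", data=df).fit()
print(model.summary().tables[1])
```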
Abstract:
We provide a comprehensive study of out-of-sample forecasts for the EUR/USD exchange rate based on multivariate macroeconomic models and forecast combinations. In addition to standard loss-minimization measures, we use profit-maximization measures based on directional accuracy and trading strategies. When comparing predictive accuracy and profit measures, tests free of data-snooping bias are used. The results indicate that forecast combinations, in particular those based on principal components of forecasts, improve over benchmark trading strategies, although the excess return per unit of deviation is limited.
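A minimal sketch of a principal-component forecast combination: project a panel of individual model forecasts onto its leading principal component and rescale the common factor to the target's units. The forecast panel is simulated and the in-sample rescaling is a simplification; it does not reproduce the paper's models or evaluation design.

```python
import numpy as np

rng = np.random.default_rng(1)
T, M = 200, 8                                 # periods, number of forecasting models
truth = rng.normal(size=T)                    # target series (e.g., FX returns)
forecasts = truth[:, None] + rng.normal(scale=0.5, size=(T, M))

X = forecasts - forecasts.mean(axis=0)        # center each model's forecasts
_, _, vt = np.linalg.svd(X, full_matrices=False)
pc1 = X @ vt[0]                               # first principal component of the panel
# Map the factor back to the target's scale by least squares:
beta = np.linalg.lstsq(np.column_stack([np.ones(T), pc1]), truth, rcond=None)[0]
combined = beta[0] + beta[1] * pc1

directional_accuracy = np.mean(np.sign(combined) == np.sign(truth))
print(f"Directional accuracy: {directional_accuracy:.2%}")
```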
Abstract:
Increases in oil prices following the economic recession have driven a surprising rise in domestic oil production in the United States since the beginning of 2009. Not only did conventional oil extraction increase, but unconventional oil production and exploration also improved greatly under the favorable economic conditions, encouraging companies to invest in new reservoirs and technological developments. Recently, enhanced drilling techniques, including hydraulic fracturing and horizontal drilling, have been supporting the domestic economy by way of unconventional shale and tight oil from various U.S. locations. One of the main contributors to this oil boom is unconventional oil production from the North Dakota Bakken field. Horizontal drilling has increased oil production in the Bakken field, but the economics of unconventional oil extraction remain debatable due to volatile oil prices, high production decline rates, a limited production period, high production costs, and lack of transportation. The economic profitability and viability of the unconventional oil play in the North Dakota Bakken were tested with an economic analysis of average Bakken unconventional well features. Scenario analysis demonstrated that a typical North Dakota Bakken unconventional oil well is profitable and viable, as shown by three financial metrics: net present value, internal rate of return, and break-even price.
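The three metrics named here are standard and easy to sketch. Below, a stylized well with a steep production decline is evaluated for NPV, IRR, and break-even price; the capital cost, prices, and decline curve are hypothetical placeholders, not the study's Bakken parameters.

```python
from scipy.optimize import brentq

def npv(rate, cashflows):
    """Net present value of cashflows indexed from year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-8):
    """Internal rate of return by bisection (assumes a single sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

capex = 8_000_000                       # drilling and completion cost ($), hypothetical
price = 60.0                            # oil price ($/bbl), hypothetical
opex = 15.0                             # operating cost ($/bbl), hypothetical
production = [150_000 * 0.6 ** t for t in range(10)]  # steep decline (bbl/yr)

cashflows = [-capex] + [(price - opex) * q for q in production]
print(f"NPV at 10%: ${npv(0.10, cashflows):,.0f}")
print(f"IRR: {irr(cashflows):.1%}")

# Break-even price: the price at which NPV at the discount rate is zero.
breakeven = brentq(
    lambda p: npv(0.10, [-capex] + [(p - opex) * q for q in production]), opex, 500
)
print(f"Break-even price: ${breakeven:.2f}/bbl")
```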
Calculation of mutual information for nonlinear communication channel at large signal-to-noise ratio
Abstract:
Using the path-integral technique, we examine the mutual information for the communication channel modeled by the nonlinear Schrödinger equation with additive Gaussian noise. The nonlinear Schrödinger equation is one of the fundamental models of nonlinear physics and has a broad range of applications, including fiber-optic communications, the backbone of the internet. At large signal-to-noise ratio we express the mutual information through a path integral, a form convenient for a perturbative expansion in the nonlinearity. In the limit of small noise and small nonlinearity we derive analytically the first nonzero nonlinear correction to the mutual information of the channel.
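For orientation, the expansion starts from the linear additive-Gaussian channel, whose per-sample mutual information is the classical Shannon result; the nonlinear correction is written only schematically here, since its explicit coefficient is the paper's result and is not reproduced in this abstract.

```latex
% Zeroth order in the nonlinearity: the additive-Gaussian (Shannon) result,
% with the nonlinear correction \Delta I_{\gamma} indicated schematically.
I(\mathrm{SNR}) \;=\; \log\bigl(1 + \mathrm{SNR}\bigr) \;+\; \Delta I_{\gamma},
\qquad \Delta I_{\gamma} \xrightarrow[\gamma \to 0]{} 0
```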
Abstract:
Purpose: To determine whether pupil dilation affects biometric measurements and intraocular lens (IOL) power calculation made using the new swept-source optical coherence tomography-based optical biometer (IOLMaster 700©; Carl Zeiss Meditec, Jena, Germany). Procedures: Eighty-one eyes of 81 patients evaluated for cataract surgery were prospectively examined using the IOLMaster 700© before and after pupil dilation with tropicamide 1%. The measurements made were: axial length (AL), central corneal thickness (CCT), aqueous chamber depth (ACD), lens thickness (LT), mean keratometry (MK), white-to-white distance (WTW) and pupil diameter (PD). The Holladay 2 and SRK/T formulas were used to calculate IOL power. Agreement between measurement modes (with and without dilation) was assessed through intraclass correlation coefficients (ICC) and Bland-Altman plots. Results: Mean patient age was 75.17 ± 7.54 years (range: 57–92). Of the variables determined, CCT, ACD, LT and WTW varied significantly with pupil dilation. Excellent intraobserver correlation was observed between measurements made before and after pupil dilation. Mean IOL power calculated using the Holladay 2 and SRK/T formulas was unmodified by pupil dilation. Conclusions: Pupil dilation produces statistically yet not clinically significant differences in some IOLMaster 700© measurements. However, it does not affect mean IOL power calculation.
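A sketch of the Bland-Altman agreement statistics used in this kind of paired comparison, computing the bias and 95% limits of agreement between pre- and post-dilation measurements; the values are simulated stand-ins for a variable such as ACD, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 81                                      # matches the study's 81 eyes
before = rng.normal(3.10, 0.35, n)          # hypothetical pre-dilation ACD (mm)
after = before + rng.normal(0.02, 0.05, n)  # small systematic shift after dilation

diff = after - before
bias = diff.mean()                          # mean difference (systematic offset)
half_width = 1.96 * diff.std(ddof=1)        # 95% limits of agreement half-width
print(f"Bias: {bias:+.3f} mm; "
      f"95% LoA: [{bias - half_width:+.3f}, {bias + half_width:+.3f}] mm")
```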
Abstract:
Mathematical skills that we acquire during formal education mostly entail exact numerical processing. Besides this specifically human faculty, an additional system exists to represent and manipulate quantities in an approximate manner. We share this innate approximate number system (ANS) with other nonhuman animals and are able to use it to process large numerosities long before we can master the formal algorithms taught in school. Dehaene's (1992) Triple Code Model (TCM) states that even after the onset of formal education, approximate processing is carried out in this analogue magnitude code, no matter whether the original problem was presented nonsymbolically or symbolically. Despite the wide acceptance of the model, most research uses only nonsymbolic tasks to assess ANS acuity. Due to this silent assumption that genuine approximation can only be tested with nonsymbolic presentations, important implications in research domains of high practical relevance have remained unclear, and existing potential is not fully exploited. For instance, it has been found that nonsymbolic approximation can predict math achievement one year later (Gilmore, McCarthy, & Spelke, 2010), that it is robust against the detrimental influence of learners' socioeconomic status (SES), and that it is suited to foster performance in exact arithmetic in the short term (Hyde, Khanum, & Spelke, 2014). We provide evidence that symbolic approximation might be equally and in some cases even better suited to generate predictions and foster more formal math skills independently of SES. In two longitudinal studies, we realized exact and approximate arithmetic tasks in both a nonsymbolic and a symbolic format. With first graders, we demonstrated that performance in symbolic approximation at the beginning of term was the only measure consistently not varying according to children's SES, and of the two approximate tasks it was the better predictor of math achievement at the end of first grade. In part, this strong connection seems to arise from mediation through ordinal skills. In two further experiments, we tested the suitability of both approximation formats to induce an arithmetic principle in elementary school children. We found that symbolic approximation was as effective as direct instruction in making children exploit the additive law of commutativity in a subsequent formal task. Nonsymbolic approximation, on the other hand, had no beneficial effect. The positive influence of the symbolic approximate induction was strongest in children just starting school and decreased with age; however, even third graders still profited from the induction. The results show that symbolic problems, too, can be processed as genuine approximation, and that beyond this they have their own specific value with regard to didactic and educational concerns. Our findings furthermore demonstrate that the two often confounded factors 'format' and 'demanded accuracy' cannot be disentangled easily in first graders' numerical understanding, and that children's SES also influences the interrelations between the different abilities tested here.
Abstract:
The goal of this simulation thesis is to present a tool for studying and eliminating various numerical problems observed while analyzing the behavior of a MIND cable during fast voltage polarity reversal. The tool is built in the MATLAB environment, where several simulations were run to achieve oscillation-free results. This thesis adds to earlier research on HVDC cables subjected to polarity reversals. The code first performs numerical simulations to analyze the electric field and charge density behavior of a MIND cable in specific scenarios: before, during, and after polarity reversal. The primary goal, however, is to remove numerical oscillations from the charge density profile. The code is notable for its use of the Arithmetic Mean Approach and the Non-Uniform Field Approach for filtering and minimizing oscillations even under time and temperature variations.
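To illustrate the arithmetic-mean idea of damping spurious odd-even oscillations in a 1-D charge-density profile, here is a minimal sketch (rendered in Python for brevity, although the thesis tool is MATLAB). It shows only the local-averaging principle; the thesis' actual filter, and its non-uniform-field variant, is more involved.

```python
import numpy as np

def arithmetic_mean_filter(rho):
    """One local-averaging pass over the interior points of a 1-D profile."""
    smoothed = rho.copy()
    smoothed[1:-1] = (rho[:-2] + 2 * rho[1:-1] + rho[2:]) / 4  # weighted neighbour mean
    return smoothed

x = np.linspace(0, 1, 201)
# Smooth underlying profile plus a spurious odd-even (node-to-node) oscillation:
profile = np.tanh(10 * (x - 0.5)) + 0.05 * (-1) ** np.arange(x.size)
for _ in range(5):            # a few passes strongly damp the ripple
    profile = arithmetic_mean_filter(profile)
```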
Abstract:
In this thesis, we perform a next-to-leading-order calculation of the impact of primordial magnetic fields (PMFs) on the evolution of scalar cosmological perturbations and the cosmic microwave background (CMB) anisotropy. Magnetic fields are present everywhere in the Universe at all scales probed so far, but their origin is still under debate. The current standard picture is that they originate from the amplification of initial seed fields, which could have been generated as PMFs in the early Universe. The most robust way to test for their presence and constrain their features is to study how they affect key cosmological observables, in particular the CMB anisotropies. The standard way to model a PMF is to consider its contribution (quadratic in the magnetic field) on the same footing as first-order perturbations, under the assumptions of ideal magnetohydrodynamics and compensated initial conditions. In view of the ever-increasing precision of CMB anisotropy measurements and of possible uncounted non-linear effects, in this thesis we study effects which go beyond the standard assumptions. We study the impact of PMFs on cosmological perturbations and CMB anisotropies with adiabatic initial conditions, the effect of Alfvén waves on the speed of sound of perturbations, and the possible non-linear behavior of the baryon overdensity for PMFs with a blue spectral index, by modifying and improving the publicly available Einstein-Boltzmann code SONG, which was written to take into account all second-order contributions in cosmological perturbation theory. One objective of this thesis is to set the basis for verifying, by an independent fully numerical analysis, the possibility of affecting recombination and the Hubble constant.