750 results for zero-inflated data


Relevance:

20.00%

Publisher:

Abstract:

Recent data indicate that levels of overweight and obesity are increasing at an alarming rate throughout the world. At a population level, the prevalence of overweight and obesity is calculated using cut-offs of the Body Mass Index (BMI), derived from height and weight; the BMI is also commonly used to classify individuals and to provide a notional indication of potential health risk. Because the BMI cannot distinguish fat mass from lean (muscle) mass, epidemiologic surveys that rely on it as a measure of adiposity are likely to overestimate the number of individuals in the overweight (and slightly obese) categories. This tendency to misclassify individuals may be more pronounced in athletic populations, or in groups with a higher proportion of active individuals, and is most marked in sports where it is advantageous to have a high BMI (but not necessarily high fatness). To illustrate this point, we calculated the BMIs of the international professional rugby players from the four teams contesting the semi-finals of the 2003 Rugby Union World Cup. According to the World Health Organisation (WHO) cut-offs for BMI, approximately 65% of the players were classified as overweight and approximately 25% as obese. These findings demonstrate that a high BMI is commonplace, and a potentially desirable attribute for sport performance, in professional rugby players. An unanswered question is what proportion of the wider population classified as overweight (or obese) according to the BMI is misclassified with respect to both fatness and health risk. It is evident that being overweight should not be an obstacle to a physically active lifestyle; equally, a reliance on BMI alone may misclassify a number of individuals who might otherwise have been automatically considered fat and/or unfit.
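The calculation behind the abstract is a simple ratio. A minimal sketch of the BMI formula and the WHO adult cut-offs referred to above (the function names and the example player are illustrative, not data from the study):

```python
def bmi(weight_kg, height_m):
    """Body Mass Index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def who_category(bmi_value):
    """WHO adult cut-offs: <18.5 underweight, 18.5-24.9 normal,
    25.0-29.9 overweight, >=30.0 obese."""
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value < 25.0:
        return "normal"
    if bmi_value < 30.0:
        return "overweight"
    return "obese"

# A muscular 102 kg, 1.85 m player is labelled overweight despite
# potentially low body fat -- the misclassification at issue.
print(round(bmi(102, 1.85), 1), who_category(bmi(102, 1.85)))
# -> 29.8 overweight
```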

Abstract:

In this paper, a singularly perturbed ordinary differential equation with non-smooth data is considered. The numerical method is a Petrov-Galerkin finite element method with piecewise-exponential test functions and piecewise-linear trial functions. A special technique is used at the discontinuity of the coefficient. The method is shown to be first-order accurate and to converge uniformly with respect to the singular perturbation parameter. Finally, numerical results are presented that agree with the theoretical results.
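The paper's scheme is not reproduced here, but for a constant-coefficient model problem a Petrov-Galerkin method with exponential test functions reduces to an exponentially fitted (Il'in-type) difference scheme. A minimal sketch for the illustrative model problem -eps*u'' + u' = 1, u(0) = u(1) = 0 (the problem, mesh, and fitting factor below are assumptions for demonstration, not taken from the paper, which treats non-smooth data):

```python
import numpy as np

def fitted_scheme(eps, n):
    """Exponentially fitted central scheme for -eps*u'' + u' = 1,
    u(0) = u(1) = 0, on a uniform mesh with n intervals."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    t = h / (2.0 * eps)
    rho = t / np.tanh(t)                    # Il'in fitting factor (b = 1)
    # Interior equation: -eps*rho*(central 2nd diff) + (central 1st diff) = 1
    a = -eps * rho / h**2 - 1.0 / (2 * h)   # sub-diagonal coefficient
    d = 2.0 * eps * rho / h**2              # diagonal coefficient
    c = -eps * rho / h**2 + 1.0 / (2 * h)   # super-diagonal coefficient
    A = (np.diag(np.full(n - 1, d))
         + np.diag(np.full(n - 2, a), -1)
         + np.diag(np.full(n - 2, c), 1))
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, np.ones(n - 1))
    return x, u

# Exact solution: u(x) = x - (e^{x/eps} - 1) / (e^{1/eps} - 1).
# For this constant-coefficient problem the fitted scheme is nodally
# exact up to rounding, for any mesh size and any eps.
x, u = fitted_scheme(0.01, 16)
exact = x - np.expm1(x / 0.01) / np.expm1(1.0 / 0.01)
print(np.max(np.abs(u - exact)))
```

The fitting factor rho is chosen so the discrete operator reproduces the boundary-layer solution e^{x/eps} exactly, which is what delivers the parameter-uniform behaviour the abstract refers to.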

Abstract:

The high degree of variability and inconsistency in the use of cash flow studies by property professionals demands improvement in knowledge and processes. Until recently, limited research had been undertaken on the use of cash flow studies in property valuations, but the growing acceptance of this approach for major investment valuations has renewed interest in the topic. Studies of valuation variation identify data accuracy, model consistency and bias as major concerns; in cash flow studies there are practical problems with both the input data and the consistency of the models. This study refers to the recent literature and identifies the major factors in model inconsistency and data selection. A detailed case study is used to examine the effects of changes in structure and inputs. The key variable inputs are identified with the aid of sensitivity studies, proposals are developed to improve the selection process for these key variables, and alternative ways of quantifying them are explained. The paper recommends, with reservations, the use of probability profiles of the variables and the incorporation of these data in simulation exercises. The use of Monte Carlo simulation is demonstrated, and the factors influencing the structure of the probability distributions of the key variables are outlined. This study relates to ongoing research into the functional performance of commercial property within an Australian Cooperative Research Centre.
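As a sketch of the simulation approach the abstract describes (not the paper's model: the horizon, distributions, and parameter values below are invented for demonstration), a Monte Carlo discounted-cash-flow exercise draws the key variables from assumed probability profiles and produces a distribution of present values rather than a point estimate:

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims, years = 10_000, 10

# Key variable inputs drawn from assumed probability profiles
rent_growth = rng.normal(0.03, 0.015, size=(n_sims, years))  # annual growth
discount = rng.triangular(0.07, 0.085, 0.10, size=n_sims)    # discount rate
exit_yield = rng.triangular(0.06, 0.07, 0.08, size=n_sims)   # terminal cap rate

rent0 = 100.0                                  # year-0 net rent (index value)
rent = rent0 * np.cumprod(1.0 + rent_growth, axis=1)
t = np.arange(1, years + 1)
disc = (1.0 + discount[:, None]) ** -t         # per-simulation discount factors
terminal = rent[:, -1] / exit_yield            # terminal value at year 10
pv = (rent * disc).sum(axis=1) + terminal * disc[:, -1]

print(f"mean PV {pv.mean():.0f}, "
      f"5th-95th percentile {np.percentile(pv, 5):.0f}-{np.percentile(pv, 95):.0f}")
```

The percentile spread, rather than the mean alone, is what the probability-profile recommendation in the abstract is meant to expose.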

Abstract:

A new solution to the millionaire problem is designed on the basis of two new techniques: a zero test and a batch equation. The zero test is a technique for testing whether one or more ciphertexts contain a zero without revealing any other information. The batch equation is a technique for testing the equality of multiple integers at once. Combining these two techniques produces the only known solution to the millionaire problem that is simultaneously correct, private, publicly verifiable and efficient.
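The cryptographic machinery is beyond a short example, but the arithmetic a ciphertext zero test operates on can be shown in the clear. In DGK-style bitwise comparison (a related, well-known encoding, not necessarily the exact one used in this paper), a < b holds iff one entry of a derived vector equals zero, which is precisely what a zero test over encryptions would reveal:

```python
def bit_vector(x, n):
    """Bits of x, most significant first."""
    return [(x >> k) & 1 for k in reversed(range(n))]

def less_than(a, b, n):
    """a < b iff some c_i == 0, where
    c_i = 1 + a_i - b_i + 3 * (number of higher-order bits that differ).
    c_i = 0 forces a_i = 0, b_i = 1 with all higher bits equal.
    Over encryptions, only the zero test on the c_i would be revealed."""
    ab, bb = bit_vector(a, n), bit_vector(b, n)
    diff = [x ^ y for x, y in zip(ab, bb)]
    cs = [1 + ab[i] - bb[i] + 3 * sum(diff[:i]) for i in range(n)]
    return any(c == 0 for c in cs)

# Exhaustive check over all 4-bit inputs
assert all(less_than(a, b, 4) == (a < b)
           for a in range(16) for b in range(16))
```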

Abstract:

In a range test, one party holds a ciphertext and needs to test whether the message encrypted in it lies within a certain interval. In this paper, a range test protocol is proposed in which the party holding the ciphertext asks another party, who holds the private key of the encryption algorithm, for help; the two parties run the protocol to carry out the test. The test returns TRUE if and only if the encrypted message lies within the interval. Provided the two parties do not collude, no information about the encrypted message is revealed by the test beyond what can be deduced from the result. The advantage of the new protocol over existing related techniques is that it achieves correctness, soundness, flexibility, high efficiency and privacy simultaneously.
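To convey the two-party structure only (this is a toy baseline, not the paper's protocol, and it is linear in the range size where the paper targets high efficiency), here is a plaintext simulation of a blinded zero-test range check: the ciphertext holder blinds m - k for every k in the range with random non-zero factors and shuffles the results, so the key holder learns only whether some value is zero, i.e. whether m lies in the range. A real protocol performs these steps homomorphically on ciphertexts; the function names and modulus are illustrative.

```python
import random

P = 2**61 - 1   # prime modulus for the blinded values (illustrative)

def blind_differences(m, lo, hi, rng):
    """First party: for each k in [lo, hi], blind (m - k) with a random
    non-zero factor mod P, then shuffle.  Zero stays zero; every non-zero
    difference becomes a uniformly random non-zero residue."""
    vals = [rng.randrange(1, P) * (m - k) % P for k in range(lo, hi + 1)]
    rng.shuffle(vals)
    return vals

def zero_test(vals):
    """Second party (key holder): reports only whether some value is
    zero, i.e. whether m was in the range -- nothing else about m."""
    return any(v == 0 for v in vals)

rng = random.Random(7)
print(zero_test(blind_differences(42, 40, 50, rng)))   # 42 in [40, 50]
print(zero_test(blind_differences(42, 43, 50, rng)))   # 42 not in [43, 50]
```

Because P is prime and each factor is non-zero mod P, a blinded value is zero exactly when m - k = 0, so the shuffled list leaks only membership, which mirrors the privacy claim in the abstract.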