963 results for cumulative sum
Abstract:
In the TREC Web Diversity track, novelty-biased cumulative gain (α-NDCG) is one of the official measures for assessing the retrieval performance of IR systems. The measure is characterised by a parameter, α, whose effect has not been thoroughly investigated. We find that common settings of α, i.e. α=0.5, may prevent the measure from behaving as desired when evaluating result diversification: the measure excessively penalises systems that cover many intents while rewarding those that redundantly cover only a few. This issue is crucial because it strongly influences systems at top ranks. We revisit our previously proposed threshold, suggesting that α be set on a per-query basis. The intuitiveness of the measure is then studied by examining actual rankings from TREC 2009-2010 Web track submissions. Varying α according to our query-based threshold does not harm the discriminative power of α-NDCG and, in fact, improves its robustness. Experimental results show that the threshold for α can make the measure more intuitive than its common settings.
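For concreteness, here is a minimal sketch of the α-DCG gain underlying α-NDCG, following its standard definition; the function name and data layout (each document represented by the set of intents it is judged relevant to) are our own, not the track's:

```python
from math import log2

def alpha_dcg(ranking, alpha=0.5, k=10):
    """alpha-DCG@k for a ranking given as a list of sets, each set holding
    the intents (sub-topics) the document at that rank is relevant to."""
    covered = {}                      # intent -> times covered so far
    score = 0.0
    for rank, intents in enumerate(ranking[:k], start=1):
        # A document's gain for an intent decays by a factor (1 - alpha)
        # each time that intent was already covered higher in the ranking.
        gain = sum((1 - alpha) ** covered.get(i, 0) for i in intents)
        score += gain / log2(rank + 1)
        for i in intents:
            covered[i] = covered.get(i, 0) + 1
    return score
```

α-NDCG then divides this score by the α-DCG of an ideal reordering of the judged documents, conventionally constructed greedily.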
Abstract:
Novelty-biased cumulative gain (α-NDCG) has become the de facto measure within the information retrieval (IR) community for evaluating retrieval systems in the context of sub-topic retrieval. An incorrect value of the parameter α prevents the measure from behaving as desired in particular circumstances. In fact, when α is set according to common practice (i.e. α = 0.5), the measure favours systems that promote redundant relevant sub-topics over those that provide novel relevant ones. Recognising this characteristic of the measure is important because it affects the comparison and ranking of retrieval systems. We propose an approach to overcome this problem by defining a safe threshold for the value of α on a per-query basis. Moreover, we study its impact on system rankings through a comprehensive simulation.
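The redundancy bias described above is easy to reproduce with toy judgments (all values below are hypothetical); the compact α-DCG helper mirrors the sketch given earlier:

```python
from math import log2

def alpha_dcg(ranking, alpha):
    covered, score = {}, 0.0
    for rank, intents in enumerate(ranking, start=1):
        score += sum((1 - alpha) ** covered.get(i, 0)
                     for i in intents) / log2(rank + 1)
        for i in intents:
            covered[i] = covered.get(i, 0) + 1
    return score

# Rank 2 either adds one novel intent or repeats three already-covered ones:
novel     = [{"i1", "i2", "i3"}, {"i4"}]
redundant = [{"i1", "i2", "i3"}, {"i1", "i2", "i3"}]

for a in (0.5, 0.9):
    print(a, round(alpha_dcg(novel, a), 3), round(alpha_dcg(redundant, a), 3))
# With alpha = 0.5 the redundant ranking wins (rank-2 gain 3 * 0.5 = 1.5
# versus 1.0 for the novel document); with alpha = 0.9 the ordering flips.
```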
Abstract:
Cumulative arrays played an important role in the early development of secret sharing theory. They have not been studied extensively so far, as the secret sharing schemes built on them generally result in much larger share sizes than other conventional approaches. Recent work in threshold cryptography shows that cumulative arrays may be appropriate building blocks in non-homomorphic threshold cryptosystems, where conventional secret sharing methods are generally of no use. In this paper we study several extensions of cumulative arrays and show that some of these extensions significantly improve on the performance of conventional cumulative arrays. In particular, we derive bounds on generalised cumulative arrays and show that the constructions based on perfect hash families are asymptotically optimal. We also introduce the concept of ramp perfect hash families as a generalisation of perfect hash families for the study of ramp secret sharing schemes and ramp cumulative arrays.
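To make the share-size blow-up concrete, here is a minimal sketch (naming is our own) of the classical cumulative array for a (t+1)-out-of-n threshold structure: one key piece per maximal unqualified set, handed to every player outside that set:

```python
from itertools import combinations

def threshold_cumulative_array(n, t):
    """Cumulative array for the (t+1)-out-of-n threshold structure: one key
    piece per maximal unqualified set (each t-subset of players), given to
    exactly the players outside that subset."""
    pieces = list(combinations(range(n), t))        # maximal unqualified sets
    shares = {player: {j for j, unqualified in enumerate(pieces)
                       if player not in unqualified}
              for player in range(n)}
    return pieces, shares

pieces, shares = threshold_cumulative_array(n=5, t=2)
# Any 3 players jointly hold all C(5, 2) = 10 pieces, while any 2 players
# miss the piece indexed by their own pair; the combinatorial growth in
# share size is exactly the drawback the abstract mentions.
```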
Abstract:
Universal One-Way Hash Functions (UOWHFs) may be used in place of collision-resistant functions in many public-key cryptographic applications. At Asiacrypt 2004, Hong, Preneel and Lee introduced the stronger security notion of higher order UOWHFs to allow construction of long-input UOWHFs using the Merkle-Damgård domain extender. However, they did not provide any provably secure constructions for higher order UOWHFs. We show that the subset sum hash function is a kth order Universal One-Way Hash Function (hashing n bits to m < n bits) under the Subset Sum assumption for k = O(log m). Therefore we strengthen a previous result of Impagliazzo and Naor, who showed that the subset sum hash function is a UOWHF under the Subset Sum assumption. We believe our result is of theoretical interest; as far as we are aware, it is the first example of a natural and computationally efficient UOWHF which is also a provably secure higher order UOWHF under the same well-known cryptographic assumption, whereas this assumption does not seem sufficient to prove its collision-resistance. A consequence of our result is that one can apply the Merkle-Damgård extender to the subset sum compression function with ‘extension factor’ k+1, while losing (at most) about k bits of UOWHF security relative to the UOWHF security of the compression function. The method also leads to a saving of up to m log(k+1) bits in key length relative to the Shoup XOR-Mask domain extender applied to the subset sum compression function.
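As a reference point, here is a minimal sketch of the subset sum compression function in its common modular form (toy parameters only; secure use requires far larger n and m, and the exact variant analysed in the paper may differ in details):

```python
import secrets

def keygen(n, m):
    """Key: n random m-bit integers a_1, ..., a_n (hashes n bits to m < n bits)."""
    return [secrets.randbits(m) for _ in range(n)]

def subset_sum_hash(key, x_bits, m):
    """h(x) = sum of a_i over the positions where x_i = 1, reduced mod 2^m."""
    return sum(a for a, bit in zip(key, x_bits) if bit) % (1 << m)

n, m = 16, 8                   # toy sizes; real security needs much larger
key = keygen(n, m)
x = [secrets.randbits(1) for _ in range(n)]
digest = subset_sum_hash(key, x, m)
```

A second preimage is a different subset of the key summing to the same value, which is how the function's UOWHF security connects to the Subset Sum assumption.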
Abstract:
Aspects of Keno modelling throughout the Australian states of Queensland, New South Wales and Victoria are discussed: the trivial Heads or Tails and the more interesting Keno Bonus, which leads to consideration of the subset sum problem. The most intricate structure is where Heads or Tails and Keno Bonus are combined, and here, the issue of independence arises. Closed expressions for expected return to player are presented in each case.
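The closed expressions referred to are hypergeometric sums; a sketch with a made-up paytable (not any real game's) shows the shape of the calculation:

```python
from math import comb

def match_probability(spots, hits, pool=80, drawn=20):
    """Hypergeometric probability of matching `hits` of the `spots` numbers
    a player marks when `drawn` numbers are drawn from `pool`."""
    return comb(spots, hits) * comb(pool - spots, drawn - hits) / comb(pool, drawn)

def expected_return(paytable, spots):
    """Expected return to player per unit staked, for a paytable mapping
    number of matches to payout."""
    return sum(match_probability(spots, h) * pay for h, pay in paytable.items())

# Illustrative 4-spot paytable, purely hypothetical:
rtp = expected_return({2: 1, 3: 5, 4: 100}, spots=4)
```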
Abstract:
In Crypto '95, Micali and Sidney proposed a method for the shared generation of a pseudo-random function f(·) among n players in such a way that, for all inputs x, any u players can compute f(x) while t or fewer players cannot, where 0 ≤ t < u ≤ n. The idea behind the Micali-Sidney scheme is to generate and distribute secret seeds S = {s_1, …, s_d} of a poly-random collection of functions among the n players, each player receiving a subset of S, in such a way that any u players together hold all the seeds in S while any t or fewer players lack at least one element of S. The pseudo-random function is then computed as f(x) = f_{s_1}(x) ⊕ … ⊕ f_{s_d}(x), where the f_{s_i}(·)'s are poly-random functions. One question raised by Micali and Sidney is how to distribute the secret seeds satisfying the above condition so that the number of seeds, d, is as small as possible. In this paper, we continue the work of Micali and Sidney. We first provide a general framework for the shared generation of pseudo-random functions using cumulative maps, and demonstrate that the Micali-Sidney scheme is a special case of this general construction. We then derive an upper and a lower bound for d. Finally, we give a simple yet efficient greedy approximation algorithm for generating the secret seeds S in which d is within a factor of at most u ln 2 of the optimum.
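A sketch of the trivial seed distribution that such bounds improve upon (naming is ours): assign one seed to the complement of each t-subset, giving d = C(n, t):

```python
from itertools import combinations

def trivial_seed_distribution(n, t):
    """One seed per t-subset, held by everyone outside it.  Any t players
    jointly miss the seed indexed by their own subset, while any u > t
    players always include someone outside each t-subset and so hold every
    seed.  This gives d = C(n, t) seeds -- the trivial upper bound."""
    subsets = list(combinations(range(n), t))
    return {j: [p for p in range(n) if p not in bad]
            for j, bad in enumerate(subsets)}

holders = trivial_seed_distribution(n=5, t=2)   # d = 10 seeds for n = 5, t = 2
# f(x) is then recovered as the XOR of f_{s_j}(x) over all d seeds.
```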
Abstract:
Introduction: This research evaluated the effect of tendinopathy on the cumulative transverse strain response of the patellar tendon to a bout of resistive quadriceps exercise. Methods: Nine adults with unilateral patellar tendinopathy (age 18.2±0.7 years, height 1.92±0.06 m and weight 76.8±6.8 kg) and ten healthy adults free of knee pain (age 17.8±0.8 years, height 1.83±0.05 m and weight 73.2±7.6 kg) underwent standardised sagittal sonograms (7.2-14 MHz linear-array transducer) of both patellar tendons immediately prior to and following 45 repetitions of a double-leg decline-squat exercise performed against a resistance of 145% bodyweight. Tendon thickness was determined 5 mm and 25 mm distal to the patellar pole. Transverse Hencky strain was calculated as the natural log of the ratio of post- to pre-exercise tendon thickness and expressed as a percentage. Measures of tendon echogenicity were calculated within the superficial and deep aspects of each tendon site from gray-scale profiles. Intratendinous microvessels were evaluated using power Doppler ultrasound. Results: The cumulative transverse strain response to exercise in symptomatic tendinopathy was significantly lower than that of asymptomatic and healthy tendon (P<.05). There was also a significant reduction (57%) in the area of microvascularity immediately following exercise (P=.05), which was positively correlated (r=0.93, P<.05) with VISA-P score. Conclusions: This study is the first to show that patellar tendinopathy is associated with an altered morphological and mechanical response of the tendon to exercise, manifested by a reduction in cumulative transverse strain and microvascularity, when present. Research directed toward identifying the factors that influence the acute microvascular and transverse strain response of the patellar tendon to exercise in the various stages of tendinopathy is warranted.
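The strain definition in the Methods reduces to a one-liner; the values below are illustrative only, not from the study:

```python
from math import log

def transverse_hencky_strain(pre_mm, post_mm):
    """Transverse Hencky strain (%): natural log of the post- to
    pre-exercise thickness ratio, expressed as a percentage."""
    return log(post_mm / pre_mm) * 100.0

# e.g. a tendon thinning from 4.0 mm to 3.8 mm after exercise:
print(transverse_hencky_strain(4.0, 3.8))   # about -5.1 %
```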
Abstract:
The literature to date shows that children from poorer households tend to have worse health than their peers, and that the gap between them grows with age. We investigate whether and how health shocks (as measured by the onset of chronic conditions) contribute to the income–child health gradient, and whether the contemporaneous or cumulative effects of income play important mitigating roles. We exploit a rich panel dataset with three waves, the Longitudinal Study of Australian Children. Given the availability of three waves of data, we are able to apply a range of econometric techniques (e.g. fixed and random effects) to control for unobserved heterogeneity. The paper makes several contributions to the extant literature. First, it shows that an apparent income gradient becomes relatively attenuated in our dataset when the cumulative and contemporaneous effects of household income are distinguished econometrically. Second, it demonstrates that the income–child health gradient becomes statistically insignificant when controlling for parental health and health-related behaviours or unobserved heterogeneity.
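A sketch of the kind of fixed-effects specification described, on entirely synthetic data (variable names are our own, not LSAC's), separating contemporaneous income from its running average:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic 3-wave panel standing in for the real data (all values made up).
rng = np.random.default_rng(0)
n_kids, waves = 200, 3
df = pd.DataFrame({
    "child_id": np.repeat(np.arange(n_kids), waves),
    "wave": np.tile(np.arange(1, waves + 1), n_kids),
})
unobserved = rng.normal(size=n_kids)            # time-invariant heterogeneity
df["income"] = rng.lognormal(10.0, 0.5, size=len(df))
grp = df.groupby("child_id")["income"]
df["cum_income"] = grp.cumsum() / (grp.cumcount() + 1)   # running mean to date
df["health"] = (0.2 * np.log(df["income"]) + 0.4 * np.log(df["cum_income"])
                + unobserved[df["child_id"].to_numpy()]
                + rng.normal(size=len(df)))

# Child fixed effects (entity dummies) absorb the unobserved heterogeneity;
# the two coefficients separate contemporaneous from cumulative income.
fe = smf.ols("health ~ np.log(income) + np.log(cum_income) + C(child_id)",
             data=df).fit()
print(fe.params.filter(like="income"))
```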
Abstract:
Purpose of this paper: This research aims to examine the effects of inadequate documentation on the cost management and tendering processes in Managing Contractor Contracts, using Fixed Lump Sum as a benchmark. Design/methodology/approach: A questionnaire survey was conducted with industry practitioners to solicit their views on documentation quality issues in the construction industry, followed by a series of semi-structured interviews to validate the survey findings. Findings and value: The results showed that documentation quality remains a significant issue, contributing to the industry's inefficiency and poor reputation. The level of satisfaction with individual attributes of documentation quality varies. Attributes that do appear to be affected by the choice of procurement method include coordination, buildability, efficiency, completeness and delivery time. Similarly, the use and effectiveness of risk mitigation techniques appears to vary between the methods, based on a number of factors such as documentation completeness, early involvement and fast tracking. Originality/value of paper: This research fills a gap in the existing body of knowledge, in which there have been few studies of whether the choice of project procurement system influences documentation quality and the level of its impact. Conclusions: Ultimately, the research concludes that the entire project team, including the client and designers, should carefully consider the individual project's requirements and compare them with the trade-offs associated with documentation quality and the procurement method. While documentation quality is certainly an issue to be improved upon, identifying a project's performance requirements allows a procurement method to be chosen that maximises the likelihood those requirements will be met, and allows the aspects of documentation quality considered most important to the individual project to be managed appropriately.
Abstract:
The speed at which target pictures are named increases monotonically as a function of prior retrieval of other exemplars of the same semantic category and is unaffected by the number of intervening items. This cumulative semantic interference effect is generally attributed to three mechanisms: shared feature activation, priming and lexical-level selection. However, at least two additional mechanisms have been proposed: (1) a 'booster' to amplify lexical-level activation and (2) retrieval-induced forgetting (RIF). In a perfusion functional Magnetic Resonance Imaging (fMRI) experiment, we tested hypotheses concerning the involvement of all five mechanisms. Our results demonstrate that the cumulative interference effect is associated with perfusion signal changes in the left perirhinal and middle temporal cortices that increase monotonically according to the ordinal position of exemplars being named. The left inferior frontal gyrus (LIFG) also showed significant perfusion signal changes across ordinal presentations; however, these responses did not conform to a monotonically increasing function. None of the cerebral regions linked with RIF in prior neuroimaging and modelling studies showed significant effects. This might be due to methodological differences between the RIF paradigm and continuous naming, as the latter does not involve practising particular information. We interpret the results as indicating that priming of shared features and lexical-level selection mechanisms contribute to the cumulative interference effect, while adding noise to a booster mechanism could account for the pattern of responses observed in the LIFG.
Abstract:
Public private partnerships (PPPs) have been adopted widely to provide public facilities and services. Under the PPP agreement, PPP projects are ultimately transferred to the public sector. However, problems related to the subsequent management of ongoing PPP projects have not been studied thoroughly. Residual value risk (RVR) can occur if the public sector cannot obtain the project in the desired condition, as required by the agreement, when the project is transferred. RVR has been identified as an important risk in PPPs and greatly influences project outputs. To further observe how residual value (RV) changes over the course of PPP projects and to reveal the internal mechanisms for reducing RVR, a comparative case study of two PPP projects in mainland China and Hong Kong was conducted. Based on the case study, the different factors leading to RVR and a series of key risk indicators (KRIs) were identified. The comparison demonstrates that RVR is an important risk that can influence the success of PPP projects. Cumulative effects during the concession period can play a significant role in the occurrence of RVR, and these effects can make the RVR differ across cases because of differences in stakeholders' efforts on the projects and in the ways RVR is treated. Finally, alternatives for the public sector to treat RVR are proposed. The findings of this research can help reduce RVR and improve the performance of PPP projects.
Abstract:
There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and makes the estimation of trends or the determination of optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates by minimising the biases and making use of possible predictive variables. The load estimation procedure can be summarised by the following four steps:
- (i) output the flow rates at regular time intervals (e.g. 10 minutes) using a time series model that captures all the peak flows;
- (ii) output the predicted flow rates, as in (i), at the concentration sampling times, if the corresponding flow rates are not collected;
- (iii) establish a predictive model for the concentration data, which incorporates all possible predictor variables, and output the predicted concentrations at the regular time intervals as in (i); and
- (iv) obtain the sum, over the regular time intervals, of the products of the predicted flow and the predicted concentration to represent an estimate of the load (see the sketch below).
The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalised regression (rating-curve) approach with additional predictors that capture unique features in the flow data, namely the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and cumulative discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. The model also has the capacity to accommodate autocorrelation in model errors, which results from intensive sampling during floods. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. The method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate the approach using concentrations of total suspended sediment (TSS) and nitrogen oxides (NOx), together with gauged flow data, from the Burdekin River, a catchment delivering to the Great Barrier Reef. The sampling biases for NOx concentrations range from 2 to 10 times, indicating severe bias. As expected, the traditional average and extrapolation methods produce much higher estimates than those obtained when the sampling bias is taken into account.
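A minimal sketch of steps (iii)-(iv) using a plain log-log rating curve (synthetic numbers throughout; the paper's model adds first-flush, hydrograph-limb and cumulative-discounted-flow predictors plus error autocorrelation, none of which is attempted here):

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 600.0                                   # 10-minute steps, in seconds
flow = rng.lognormal(2.0, 0.8, size=1000)    # step (i): Q (m3/s) at every step

# Sparse concentration samples (mg/L) taken at a subset of the steps:
sampled = rng.choice(flow.size, size=50, replace=False)
conc_obs = np.exp(0.5 + 0.4 * np.log(flow[sampled])
                  + rng.normal(0.0, 0.2, sampled.size))

# Step (iii): fit the rating curve ln C = a + b ln Q on the sampled steps,
# then predict concentration at every regular step.
b, a = np.polyfit(np.log(flow[sampled]), np.log(conc_obs), 1)
conc_hat = np.exp(a + b * np.log(flow))

# Step (iv): load = sum of Q * C * dt.  Q (m3/s) * C (mg/L) = g/s, since
# 1 m3 = 1000 L; dividing by 1e6 converts grams to tonnes.
load_tonnes = np.sum(flow * conc_hat) * dt / 1e6
```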