930 results for Wald’s sequential probability ratio test
Abstract:
Background: Queensland men aged 50 years and older are at high risk for melanoma. Early detection via skin self-examination (SSE), particularly whole-body SSE, followed by presentation to a doctor with suspicious lesions, may decrease morbidity and mortality from melanoma. The prevalence of whole-body SSE (wbSSE) is lower among older Queensland men than in other population subgroups. With the exception of the present study, no previous research has investigated the determinants of wbSSE in older men, or interventions to increase the behaviour in this population. Furthermore, although past SSE intervention studies for other populations have cited health behaviour models in the development of interventions, no study has tested these models in full. The Skin Awareness Study: A recent randomised trial, the Skin Awareness Study, tested the impact of a video-delivered intervention compared with written materials alone on wbSSE in men aged 50 years or older (n=930). Men were recruited from the general population and interviewed over the telephone at baseline and at 13 months. The proportion of men who reported wbSSE rose from 10% to 31% in the control group, and from 11% to 36% in the intervention group. Current research: The current research was a secondary analysis of data collected for the Skin Awareness Study. The objectives were as follows: • To describe how men who did not take up any SSE during the study period differed from those who did take up examining their skin. • To determine whether the intervention program was successful in affecting the constructs of the Health Belief Model it was aimed at (self-efficacy, perceived threat, and outcome expectations), and whether this in turn influenced wbSSE. • To determine whether the Health Action Process Approach (HAPA) was a better predictor of wbSSE behaviour than the Health Belief Model (HBM). Methods: For objective 1, men who did not report any past SSE at baseline (n=308) were categorised as having ‘taken up SSE’ (reported SSE at study end) or ‘resisted SSE’ (reported no SSE at study end). Bivariate logistic regression, followed by multivariable regression, was used to investigate the association between participant characteristics measured at baseline and resisting SSE. For objective 2, proxy measures of self-efficacy, perceived threat, and outcome expectations were selected. To determine whether these mediated the effect of the intervention on the outcome, a mediator analysis was performed with all participants who completed interviews at both time points (n=830), following the Baron and Kenny approach modified for use with structural equation modelling (SEM). For objective 3, only control group participants were included (n=410). Proxy measures of all HBM and HAPA constructs were selected, and SEM was used to build up models and test the significance of each hypothesised pathway. A likelihood ratio test compared the HAPA to the HBM. Results: Amongst men who did not report any SSE at baseline, 27% did not take up any SSE by the end of the study. In multivariable analyses, resisting SSE was associated with having more freckly skin (p=0.027); being unsure about the statement ‘if I saw something suspicious on my skin, I’d go to the doctor straight away’ (p=0.028); not intending to perform SSE (p=0.015); having lower SSE self-efficacy (p<0.001); and having no recommendation for SSE from a doctor (p=0.002). In the mediator analysis, none of the tested variables mediated the relationship between the intervention and wbSSE.
With regard to health behaviour models, the HBM did not predict wbSSE well overall. Only the construct of self-efficacy was a significant predictor of future wbSSE (p=0.001), while neither perceived threat (p=0.584) nor outcome expectations (p=0.220) were. By contrast, when the HAPA constructs were added, all three HBM variables predicted intention to perform SSE, which in turn predicted future behaviour (p=0.015). The HAPA construct of volitional self-efficacy was also associated with wbSSE (p=0.046). The HAPA was a significantly better model than the HBM (p<0.001). Limitations: Items selected to measure HBM and HAPA model constructs for objectives 2 and 3 may not have accurately reflected each construct. Conclusions: This research added to the evidence base on how best to target interventions to older men and on the appropriateness of particular health behaviour models to guide interventions. Findings indicate that, to overcome resistance, men with more negative pre-existing attitudes to SSE (not intending to do it, lower initial self-efficacy) may need to be targeted with more intensive interventions in the future. Involving general practitioners in recommending SSE to their patients in this population, alongside disseminating an intervention, may increase its success. Comparison of the HBM and HAPA showed that while two of the three HBM variables examined did not directly predict future wbSSE, all three were associated with intention to self-examine skin. This suggests that in this population, intervening on these variables may increase intention to examine skin, but not necessarily the behaviour itself. Future interventions could focus on increasing the motivational variables of perceived threat and outcome expectations, as well as a combination of action and volitional self-efficacy, with the aim of increasing intention as well as its translation into taking up and maintaining regular wbSSE.
Abstract:
Motorcycles are particularly vulnerable in right-angle crashes at signalized intersections. The objective of this study is to explore how variations in roadway characteristics, environmental factors, traffic factors, maneuver types, human factors, and driver demographics influence the right-angle crash vulnerability of motorcycles at intersections. The problem is modeled using a mixed logit model with a binary choice formulation to differentiate how an at-fault vehicle collides with a not-at-fault motorcycle in comparison with other collision types. The mixed logit formulation allows randomness in the parameters and hence takes into account the underlying heterogeneities potentially inherent in driver behavior and other unobserved variables. A likelihood ratio test reveals that the mixed logit model is indeed better than the standard logit model. Night-time riding shows a positive association with the vulnerability of motorcyclists. Moreover, motorcyclists are particularly vulnerable on single-lane roads, on the curb and median lanes of multi-lane roads, and on one-way and two-way roads relative to divided highways. Drivers who deliberately run red lights, as well as those who are careless towards motorcyclists, especially when making turns at intersections, increase the vulnerability of motorcyclists. Drivers appear more restrained when there is a passenger on board, which decreases the crash potential with motorcyclists. The presence of red light cameras also significantly decreases the right-angle crash vulnerability of motorcyclists. The findings of this study should be helpful in developing more targeted countermeasures for traffic enforcement, driver/rider training and education, and safety awareness programs to reduce the vulnerability of motorcyclists.
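Where the fitted log-likelihoods of the two specifications are available, the comparison reported above reduces to a standard likelihood ratio test between nested models. A minimal Python sketch follows; the log-likelihood values and the number of extra random parameters are hypothetical placeholders, not figures from the study.

```python
# Likelihood ratio comparison of a standard (restricted) logit against a
# mixed (full) logit: the statistic 2*(llf_full - llf_restricted) is referred
# to a chi-squared distribution with degrees of freedom equal to the number
# of additional (random-parameter) terms.
from scipy.stats import chi2

def lr_test(llf_restricted: float, llf_full: float, extra_params: int):
    """Return the LR statistic and its chi-squared p-value."""
    stat = 2.0 * (llf_full - llf_restricted)
    p_value = chi2.sf(stat, df=extra_params)
    return stat, p_value

# Hypothetical log-likelihoods, for illustration only.
stat, p = lr_test(llf_restricted=-1250.4, llf_full=-1238.9, extra_params=3)
print(f"LR = {stat:.2f}, p = {p:.4f}")
```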
Abstract:
In the context of ambiguity resolution (AR) for Global Navigation Satellite Systems (GNSS), decorrelation among the entries of an ambiguity vector, integer ambiguity search, and ambiguity validation are the three standard procedures for solving integer least-squares problems. This paper contributes to AR issues in three respects. Firstly, the orthogonality defect is introduced as a new measure of the performance of ambiguity decorrelation methods and is compared with the decorrelation number and the condition number, which are currently used as criteria for measuring the correlation of the ambiguity variance-covariance matrix. Numerically, the orthogonality defect performs slightly better than the condition number as a measure relating decorrelation impact to computational efficiency. Secondly, the paper examines the relationship of the decorrelation number, the condition number, the orthogonality defect, and the size of the ambiguity search space with the ambiguity search candidates and search nodes. The size of the ambiguity search space can be properly estimated if the ambiguity matrix is well decorrelated, and it is shown to be a significant parameter in the ambiguity search process. Thirdly, a new ambiguity resolution scheme is proposed to improve ambiguity search efficiency by controlling the size of the ambiguity search space. The new AR scheme combines the LAMBDA search and validation procedures, which results in a much smaller search space and higher computational efficiency while retaining the same AR validation outcomes. In fact, the new scheme can deal with the case in which there is only one candidate, while the existing search methods require at least two candidates. If there is more than one candidate, the new scheme falls back to the usual ratio-test procedure. Experimental results indicate that this combined method can indeed improve ambiguity search efficiency for both single-constellation and dual-constellation cases, showing its potential for processing high-dimensional integer parameters in a multi-GNSS environment.
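As a rough illustration of the orthogonality defect discussed above, the sketch below computes it for the columns of a basis matrix. The definition used (product of column norms divided by the square root of the Gram determinant, equal to 1 for an orthogonal basis) is the standard one; the example matrix is purely hypothetical.

```python
# Orthogonality defect of a basis matrix B (columns b_i):
#   defect(B) = prod(||b_i||) / sqrt(det(B^T B))
# It equals 1 for an orthogonal basis and grows as the columns become
# more correlated, which is why it can serve as a decorrelation measure.
import numpy as np

def orthogonality_defect(B: np.ndarray) -> float:
    col_norms = np.linalg.norm(B, axis=0)
    gram_det = np.linalg.det(B.T @ B)
    return float(np.prod(col_norms) / np.sqrt(gram_det))

# Hypothetical 3x3 matrix (e.g. a factor of an ambiguity variance-covariance
# matrix); values are illustrative only.
B = np.array([[2.0, 1.5, 0.3],
              [0.0, 1.0, 0.8],
              [0.0, 0.0, 0.5]])
print(orthogonality_defect(B))
```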
Abstract:
Ambiguity resolution plays a crucial role in real-time kinematic GNSS positioning, which gives centimetre-precision positioning results if all the ambiguities in each epoch are correctly fixed to integers. However, incorrectly fixed ambiguities can result in large positioning offsets, up to several metres, without notice. Hence, ambiguity validation is essential to control ambiguity resolution quality. Currently, the most popular ambiguity validation method is the ratio test, whose criterion is often determined empirically. An empirically determined criterion can be dangerous, because a fixed criterion cannot fit all scenarios and does not directly control the ambiguity resolution risk. In practice, depending on the underlying model strength, the ratio test criterion can be too conservative for some models and too risky for others. A more rational approach is to determine the criterion according to the underlying model and the user's requirements. Missed detection of incorrect integers leads to hazardous results and should be strictly controlled; in ambiguity resolution, the missed-detection rate is often known as the failure rate. In this paper, a fixed failure rate ratio test method is presented and applied to the analysis of GPS and Compass positioning scenarios. The fixed failure rate approach is derived from integer aperture estimation theory, which is theoretically rigorous. In this approach, a criteria table for the ratio test is computed from extensive data simulations, and real-time users can determine the ratio test criterion by looking it up in the table. This method has been applied to medium-distance GPS ambiguity resolution, but multi-constellation and high-dimensional scenarios have not been discussed so far. In this paper, a general ambiguity validation model is derived based on hypothesis testing theory, the fixed failure rate approach is introduced, and in particular the relationship between the ratio test threshold and the failure rate is examined. Finally, the factors that influence the fixed failure rate ratio test threshold are discussed on the basis of extensive data simulation. The results show that the fixed failure rate approach is a more reasonable ambiguity validation method, provided a proper stochastic model is used.
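For concreteness, the sketch below illustrates the ratio test acceptance step in its common form, where the squared weighted distances of the best and second-best integer candidates are compared against a threshold. In a fixed failure rate approach, that threshold would be looked up from the precomputed criteria table for the given model strength and tolerated failure rate; the constant used here, and all numerical values, are illustrative assumptions.

```python
# Ratio test acceptance: accept the best integer candidate if
#   q_best / q_second <= mu,
# where q = (a_hat - a_check)^T Q^{-1} (a_hat - a_check) is the squared
# distance of an integer candidate from the float solution, weighted by the
# inverse ambiguity variance-covariance matrix Q^{-1}.
import numpy as np

def ratio_test_accept(a_hat, candidates, Q_inv, mu=0.5):
    dists = [float((a_hat - c) @ Q_inv @ (a_hat - c)) for c in candidates]
    order = np.argsort(dists)
    q_best, q_second = dists[order[0]], dists[order[1]]
    accept = (q_best / q_second) <= mu
    return accept, candidates[order[0]]

a_hat = np.array([3.2, -1.9])                    # float ambiguities (illustrative)
cands = [np.array([3, -2]), np.array([4, -2])]   # two integer candidates
Q_inv = np.linalg.inv(np.array([[0.09, 0.02],
                                [0.02, 0.07]]))
print(ratio_test_accept(a_hat, cands, Q_inv))
```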
Abstract:
Selection criteria and misspecification tests for the intra-cluster correlation structure (ICS) in longitudinal data analysis are considered. In particular, the asymptotic distribution of the correlation information criterion (CIC) is derived, and a new method for selecting a working ICS is proposed by standardizing the selection criterion as a p-value. The CIC test is found to be powerful in detecting misspecification of the working ICS, while with respect to working ICS selection, the standardized CIC test is also shown to have satisfactory performance. Simulation studies and applications to two real longitudinal datasets illustrate how these criteria and tests might be useful.
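The sketch below shows one common way the correlation information criterion is written for GEE working-structure selection: the trace of the inverse model-based covariance computed under working independence times the robust sandwich covariance under the candidate structure, with smaller values preferred. This is a hedged reading of the criterion rather than the paper's exact formulation, and the matrices are invented for illustration.

```python
# CIC (as commonly stated for GEE working-structure selection; this is an
# assumption about the form, not a quotation from the paper above):
#   CIC = trace( inv(model-based cov under independence) @ robust sandwich cov )
import numpy as np

def cic(model_based_cov_indep: np.ndarray, robust_cov_candidate: np.ndarray) -> float:
    omega_indep = np.linalg.inv(model_based_cov_indep)
    return float(np.trace(omega_indep @ robust_cov_candidate))

# Hypothetical 2x2 covariance matrices of the regression coefficients.
mb_indep = np.array([[0.040, 0.005],
                     [0.005, 0.030]])
robust_exch = np.array([[0.036, 0.004],
                        [0.004, 0.028]])
print(cic(mb_indep, robust_exch))
```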
Abstract:
This thesis studies the informational efficiency of the European Union emission allowance (EUA) market. In an efficient market, the market price is unpredictable and above-average profits are impossible in the long run. The main research question is whether the EUA price follows a random walk. The method is an econometric analysis of the price series, including an autocorrelation coefficient test and a variance ratio test. The results reveal that the price series is autocorrelated and therefore does not follow a random walk. To gauge the extent of predictability, the price series is then modelled with an autoregressive model. The conclusion is that the EUA price is autocorrelated only to a small degree and that this predictability cannot be used to make extra profits. The EUA market is therefore considered informationally efficient, although the price series does not fulfil the requirements of a random walk. A market review supports this conclusion, but it is clear that the maturing of the market is still in progress.
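As background for the variance ratio test mentioned above, the sketch below computes the variance ratio point estimate VR(q) in the spirit of Lo and MacKinlay: under a random walk the variance of q-period log returns is q times that of one-period returns, so VR(q) should be close to 1. The simulated price series and the choice q = 5 are illustrative only, and the full test additionally requires the sampling distribution of the statistic to judge significance.

```python
# Variance ratio point estimate: VR(q) = Var(q-period log returns) /
# (q * Var(one-period log returns)); values far from 1 suggest predictability.
import numpy as np

def variance_ratio(prices: np.ndarray, q: int) -> float:
    log_p = np.log(prices)
    r1 = np.diff(log_p)                 # one-period log returns
    rq = log_p[q:] - log_p[:-q]         # overlapping q-period log returns
    return float(np.var(rq, ddof=1) / (q * np.var(r1, ddof=1)))

# Illustrative only: a simulated random-walk price series should give VR near 1.
rng = np.random.default_rng(0)
prices = np.exp(np.cumsum(rng.normal(0.0, 0.01, 2000)) + 3.0)
print(variance_ratio(prices, q=5))
```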
Abstract:
The likelihood ratio test of cointegration rank is the most widely used test for cointegration. Many studies have shown that its finite sample distribution is not well approximated by the limiting distribution. This article introduces bootstrap and fast double bootstrap (FDB) algorithms for the likelihood ratio test and evaluates them by Monte Carlo simulation experiments. It finds that the performance of the bootstrap test is very good. The more sophisticated FDB produces a further improvement in cases where the performance of the asymptotic test is very unsatisfactory and the ordinary bootstrap does not work as well as it might. Furthermore, the Monte Carlo simulations provide a number of guidelines on when the bootstrap and FDB tests can be expected to work well. Finally, the tests are applied to US interest rate and international stock price series. It is found that the asymptotic test tends to overestimate the cointegration rank, while the bootstrap and FDB tests choose the correct cointegration rank.
Abstract:
Bootstrap likelihood ratio tests of cointegration rank are commonly used because they tend to have rejection probabilities that are closer to the nominal level than those of the corresponding asymptotic tests. The effect of bootstrapping the test on its power is largely unknown. We show that a new, computationally inexpensive procedure can be applied to estimate the power function of the bootstrap test of cointegration rank. The bootstrap test is found to have a power function close to that of the level-adjusted asymptotic test. The bootstrap test estimates the level-adjusted power of the asymptotic test highly accurately. The bootstrap test may have low power to reject the null hypothesis of cointegration rank zero, or underestimate the cointegration rank. An empirical application to Euribor interest rates is provided as an illustration of the findings.
Abstract:
Many economic events involve initial observations that deviate substantially from the long-run steady state. Initial conditions of this type have been found to affect the power of univariate unit root tests in diverse ways, whereas their impact on multivariate tests is largely unknown. This paper investigates the impact of the initial condition on tests for cointegration rank. We compare the local power of the widely used likelihood ratio (LR) test with the local power of a test based on the eigenvalues of the companion matrix. We find that the power of the LR test increases with the magnitude of the initial condition, whereas the power of the other test decreases. The behaviour of the tests is investigated in an application to price convergence.
Abstract:
Selection of relevant features is an open problem in brain-computer interfacing (BCI) research. Features extracted from brain signals are often high-dimensional, which in turn affects the accuracy of the classifier. Selecting the most relevant features improves classifier performance and reduces the computational cost of the system. In this study, we used a combination of Bacterial Foraging Optimization and Learning Automata to determine the best subset of features from a given motor imagery electroencephalography (EEG)-based BCI dataset. We employed the Discrete Wavelet Transform to obtain a high-dimensional feature set and classified it with the Distance Likelihood Ratio Test. Our proposed feature selector produced an accuracy of 80.291% in 216 seconds.
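As a rough illustration of the wavelet feature-extraction step mentioned above, the sketch below computes simple per-sub-band statistics from a discrete wavelet decomposition using PyWavelets. The wavelet family, decomposition level, summary statistics, and fake EEG signal are assumptions for the example, not the study's settings.

```python
# Discrete wavelet transform feature extraction: decompose one EEG channel and
# summarise each sub-band by mean, standard deviation, and energy.
import numpy as np
import pywt

def dwt_features(eeg_channel: np.ndarray, wavelet: str = "db4", level: int = 4) -> np.ndarray:
    coeffs = pywt.wavedec(eeg_channel, wavelet, level=level)
    feats = []
    for c in coeffs:                      # one approximation + `level` detail bands
        feats.extend([np.mean(c), np.std(c), np.sum(c ** 2)])
    return np.array(feats)

# Illustrative only: one second of fake EEG at 250 Hz.
rng = np.random.default_rng(1)
signal = rng.normal(size=250)
print(dwt_features(signal).shape)         # 3 statistics per sub-band
```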
Abstract:
Speech enhancement in stationary noise is addressed using the ideal channel selection framework. To estimate the binary mask, we propose to classify each time-frequency (T-F) bin of the noisy signal as speech or noise using Discriminative Random Fields (DRF). The DRF function contains two terms: an enhancement function and a smoothing term. For each T-F bin, we use an enhancement function based on a likelihood ratio test for speech presence, while an Ising model serves as the smoothing function for spectro-temporal continuity in the estimated binary mask. Over successive iterations, the smoothing function is found to reduce musical noise compared with using the enhancement function alone. The binary mask is inferred from the noisy signal using the Iterated Conditional Modes (ICM) algorithm. Sentences from the NOIZEUS corpus are evaluated from 0 dB to 15 dB Signal-to-Noise Ratio (SNR) in four additive noise settings: additive white Gaussian noise, car noise, street noise, and pink noise. The reconstructed speech using the proposed technique is evaluated in terms of average segmental SNR, Perceptual Evaluation of Speech Quality (PESQ), and Mean Opinion Score (MOS).
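To make the per-bin likelihood ratio concrete, the sketch below uses the usual complex-Gaussian formulation (as in Sohn-style voice activity detection), in which the log likelihood ratio for a bin depends on its a posteriori and a priori SNRs; the paper's exact enhancement function may differ, and the arrays shown are invented. Bins whose likelihood ratio exceeds a threshold are labelled speech, giving a (pre-smoothing) binary mask.

```python
# Per-bin likelihood ratio under a complex-Gaussian model:
#   LR = exp(gamma * xi / (1 + xi)) / (1 + xi),
# where gamma = |Y|^2 / noise_psd is the a posteriori SNR and xi is the
# a priori SNR. Thresholding LR yields a binary speech-presence mask.
import numpy as np

def speech_presence_mask(noisy_power: np.ndarray, noise_psd: np.ndarray,
                         xi: np.ndarray, threshold: float = 1.0) -> np.ndarray:
    gamma = noisy_power / noise_psd                    # a posteriori SNR per bin
    log_lr = gamma * xi / (1.0 + xi) - np.log1p(xi)    # log likelihood ratio
    return log_lr > np.log(threshold)

# Illustrative 2x3 spectrogram-like arrays (time x frequency), not real data.
noisy_power = np.array([[4.0, 0.5, 2.0], [0.3, 6.0, 0.2]])
noise_psd = np.full_like(noisy_power, 1.0)
xi = np.maximum(noisy_power / noise_psd - 1.0, 1e-3)   # crude a priori SNR estimate
print(speech_presence_mask(noisy_power, noise_psd, xi))
```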
Abstract:
7 p.
Abstract:
182 p. : il.
Abstract:
The carpenter seabream (Argyrozona argyrozona) is an endemic South African sparid that comprises an important part of the handline fishery. A three-year study (1998−2000) of its reproductive biology within the Tsitsikamma National Park revealed that these fishes are serial-spawning late gonochorists. The size at 50% maturity (L50) was estimated at 292 and 297 mm FL for females and males, respectively. A likelihood ratio test revealed no significant difference between male and female L50 (P>0.5). Both monthly gonadosomatic indices and macroscopically determined ovarian stages strongly indicate that A. argyrozona within the Tsitsikamma National Park spawn in the austral summer between November and April. The presence of postovulatory follicles (POFs) confirmed a six-month spawning season, and monthly proportions of early (0−6 hour old) POFs showed that spawning frequency was highest (once every 1−2 days) from December to March. Although spawning season was more highly correlated with photoperiod (r = 0.859) than with temperature (r = −0.161), the daily proportion of spawning fish was strongly correlated (r = 0.93) with ambient temperature over the range 9−22°C. These results indicate that short-term upwelling events, a strong feature of the Tsitsikamma National Park during summer, may negatively affect carpenter fecundity. Both spawning frequency and duration (i.e., length of spawning season) increased with fish length. As a result of the allometric relationship between annual fecundity and fish mass, a 3-kg fish was calculated to produce fivefold more eggs per kilogram of body weight than a 1-kg fish. In addition to producing more eggs per unit of weight each year, larger fish also produce significantly larger eggs.
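The kind of comparison reported above (no significant difference between male and female L50) is typically made by fitting a logistic maturity-at-length ogive to each sex and to the pooled data, then comparing the fits with a likelihood ratio test. The sketch below illustrates this with fabricated data; the L50 values used in the simulation are borrowed from the abstract only to make the example readable, and the slope and sample sizes are arbitrary assumptions.

```python
# Fit logistic maturity ogives per sex and pooled, then compare with an LR test
# (separate curves add 2 parameters relative to the pooled fit).
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

def fit_ogive(length, mature):
    X = sm.add_constant(length)
    res = sm.Logit(mature, X).fit(disp=0)
    b0, b1 = res.params
    return res.llf, -b0 / b1            # log-likelihood and L50 = -b0/b1

rng = np.random.default_rng(2)
length_f = rng.uniform(200, 400, 300)
length_m = rng.uniform(200, 400, 300)
mature_f = (rng.random(300) < 1 / (1 + np.exp(-(length_f - 292) / 20))).astype(float)
mature_m = (rng.random(300) < 1 / (1 + np.exp(-(length_m - 297) / 20))).astype(float)

llf_f, l50_f = fit_ogive(length_f, mature_f)
llf_m, l50_m = fit_ogive(length_m, mature_m)
llf_pooled, _ = fit_ogive(np.concatenate([length_f, length_m]),
                          np.concatenate([mature_f, mature_m]))

lr = 2 * (llf_f + llf_m - llf_pooled)
print(l50_f, l50_m, chi2.sf(lr, df=2))
```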
Abstract:
This paper presents a study that linked demographic variables with barriers affecting the adoption of domestic energy efficiency measures in large UK cities. The aim was to better understand the 'Energy Efficiency Gap' and improve the effectiveness of future energy efficiency initiatives. The data for this study were collected from 198 general population interviews (1.5-10 min) carried out across multiple locations in Manchester and Cardiff. The demographic variables were statistically linked to the identified barriers using a modified chi-square test of association (with a first-order Rao-Scott correction to compensate for multiple-response data), and the effect size was estimated with an odds ratio. The results revealed that strong associations exist between demographics and barriers, specifically for the following variables: sex; marital status; education level; type of dwelling; number of occupants in the household; residence (rent/own); and location (Manchester/Cardiff). The results and recommendations are aimed at city policy makers, local councils, and members of the construction/retrofit industry who are all working to improve the energy efficiency of the domestic built environment.