74 results for Mobile application testing


Relevance:

30.00%

Publisher:

Abstract:

Introduction: HIV testing is a cornerstone of efforts to combat the HIV epidemic, and testing conducted as part of surveillance provides invaluable data on the spread of infection and the effectiveness of campaigns to reduce the transmission of HIV. However, participation in HIV testing can be low, and if respondents systematically select not to be tested because they know or suspect they are HIV positive (and fear disclosure), standard approaches to dealing with missing data will fail to remove selection bias. We implemented Heckman-type selection models, which can be used to adjust for missing data that are not missing at random, and established the extent of selection bias in a population-based HIV survey in an HIV hyperendemic community in rural South Africa.

Methods: We used data from a population-based HIV survey carried out in 2009 in rural KwaZulu-Natal, South Africa. In this survey, 5565 women (35%) and 2567 men (27%) provided blood for an HIV test. We accounted for missing data by using interviewer identity as a selection variable, which predicted consent to HIV testing but was unlikely to be independently associated with HIV status. Our approach used this selection variable to examine the HIV status of residents who would ordinarily refuse to test, except that they were allocated a persuasive interviewer. Our copula model allows flexibility in modelling the dependence structure between HIV survey participation and HIV status.
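
The copula estimator used in the paper is not specified in the abstract; as a rough illustration of the selection-model idea (an exclusion restriction, here standing in for interviewer identity, that shifts participation but not the outcome), below is a minimal sketch of the classic two-step Heckman correction on synthetic data with a continuous outcome. All variable names, coefficients, and data are illustrative, not the survey's.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000

# Synthetic stand-in for the survey; names and coefficients are illustrative.
z = rng.normal(size=n)                       # exclusion restriction (e.g. interviewer)
x = rng.normal(size=n)                       # respondent covariate
u = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n)
selected = (0.5 * z + u[:, 0]) > 0           # consent to test
y = 1.0 + 0.5 * x + u[:, 1]                  # outcome, observed only if selected

# Step 1: probit of participation on the instrument, then inverse Mills ratio.
Z = sm.add_constant(z)
probit = sm.Probit(selected.astype(float), Z).fit(disp=0)
xb = Z @ probit.params                       # linear index Z*gamma
mills = norm.pdf(xb) / norm.cdf(xb)

# Step 2: OLS on the selected sample with the Mills ratio as an extra
# regressor; its coefficient absorbs the selection effect, de-biasing
# the slope on x despite the non-random missingness.
X2 = sm.add_constant(np.column_stack([x[selected], mills[selected]]))
ols = sm.OLS(y[selected], X2).fit()
print(ols.params)                            # slope on x recovered near 0.5
```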

Results: For women, our selection model generated an HIV prevalence estimate of 33% (95% CI 27–40) for all people eligible to consent to HIV testing in the survey. This estimate is higher than the estimate of 24% generated when only information from respondents who participated in testing is used in the analysis, and the estimate of 27% when imputation analysis is used to predict missing data on HIV status. For men, we found an HIV prevalence of 25% (95% CI 15–35) using the selection model, compared to 16% among those who participated in testing, and 18% estimated with imputation. We provide new confidence intervals that correct for the fact that the relationship between testing and HIV status is unknown and requires estimation.

Conclusions: We confirm the feasibility and value of adopting selection models to account for missing data in population-based HIV surveys and surveillance systems. Elements of survey design, such as interviewer identity, present the opportunity to adopt this approach in routine applications. Where non-participation is high, true confidence intervals are much wider than standard approaches to missing data would suggest.

Relevance:

30.00%

Publisher:

Abstract:

The paper addresses the quality of the interface and edge-bonded joints in layers of cross-laminated timber (CLT) panels. The shear performance was studied to assess the suitability of two adhesives, polyurethane (PUR) and phenol-resorcinol-formaldehyde (PRF), and to determine the optimum clamping pressure. Since there is no established testing procedure for determining the shear strength of the surface bonds between layers in a CLT panel, block shear tests of specimens in two different configurations were carried out, and further shear tests of edge-bonded specimens in two configurations were performed. Delamination tests were performed on samples subjected to accelerated aging to assess the durability of the bonds in severe environmental conditions. Both adhesives produced boards with shear strength values within the edge-bonding requirements of prEN 16351 for all manufacturing pressures. While the PUR specimens had higher shear strength values, the PRF specimens demonstrated superior durability in the delamination tests. The test protocol introduced in this study, in which crosslam-bonded specimens cut from a CLT panel are placed horizontally in the shearing tool, appears to accurately reflect the shear strength of the glue lines in CLT.

Relevance:

30.00%

Publisher:

Abstract:

This report summarizes our results from a security analysis covering all 57 first-round candidates of the Competition for Authenticated Encryption: Security, Applicability, and Robustness (CAESAR) and over 210 implementations. We manually identified security issues with three candidates, two of them serious, and these ciphers have been withdrawn from the competition. We developed a testing framework, BRUTUS, to facilitate automatic detection of simple security lapses and susceptible statistical structures across all ciphers. From this testing, we have security usage notes on four submissions and statistical notes on a further four. We highlight that some of the CAESAR algorithms pose an elevated risk if employed in real-life protocols due to a class of adaptive chosen-plaintext attacks. Although authenticated encryption with associated data (AEAD) algorithms are often defined (and are best used) as discrete primitives that authenticate and transmit only complete messages, in practice these algorithms are easily implemented in a fashion that outputs observable ciphertext data before the algorithm has received all of the (attacker-controlled) plaintext. For an implementor, this strategy offers seemingly harmless and compliant storage and latency advantages. If the algorithm uses the same state for secret keying information, encryption, and integrity protection, and the internal mixing permutation is not cryptographically strong, an attacker can exploit the ciphertext-plaintext feedback loop to reveal secret state information or even keying material. We conclude that the main advantages of exhaustive, automated cryptanalysis are that it acts as a much-needed sanity check for implementations and gives the cryptanalyst insights that can be used to focus more specific attack methods on given candidates.
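
The abstract does not describe BRUTUS's individual tests; as a sketch of the kind of automated statistical sanity check such a framework can run, the following hypothetical harness estimates ciphertext diffusion by flipping single plaintext bits. The keyed function below is a toy stand-in for a candidate cipher, not a real AEAD.

```python
import os
import hashlib

def avalanche_check(encrypt, key_len=16, msg_len=32, trials=200):
    """Flip one random plaintext bit and measure the fraction of
    ciphertext bits that change; a strong primitive averages ~0.5."""
    fractions = []
    for _ in range(trials):
        key = os.urandom(key_len)
        msg = bytearray(os.urandom(msg_len))
        c0 = encrypt(key, bytes(msg))
        bit = int.from_bytes(os.urandom(2), "big") % (msg_len * 8)
        msg[bit // 8] ^= 1 << (bit % 8)
        c1 = encrypt(key, bytes(msg))
        changed = sum(bin(a ^ b).count("1") for a, b in zip(c0, c1))
        fractions.append(changed / (8 * len(c0)))
    return sum(fractions) / trials

def toy_encrypt(key, msg):
    """Toy stand-in, NOT a real cipher: whitens msg with a hash of
    (key, msg) so the harness has something diffusive to measure."""
    stream = hashlib.sha256(key + msg).digest()
    return bytes(m ^ s for m, s in zip(msg, stream))

print(f"mean changed-bit fraction: {avalanche_check(toy_encrypt):.3f}")
```

A weak mixing permutation of the kind the report warns about would show up here as a changed-bit fraction well below one half.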

Relevance:

30.00%

Publisher:

Abstract:

Those living with an acquired brain injury often have issues with fatigue due to factors resulting from the injury. Cognitive impairments, such as deficits in memory, concentration, and planning, have a great impact on an individual's ability to carry out everyday tasks, which in turn induces cognitive fatigue. Moreover, cognitive fatigue is difficult to assess, as there are no biological markers that can be measured; rather, it is a subjective effect that can only be reported by the individual. Consequently, the traditional way of assessing cognitive fatigue is a self-assessment questionnaire designed to identify contributing factors. State-of-the-art methods for evaluating cognitive fatigue employ cognitive tests to analyse performance on predefined tasks. However, one primary issue with such tests is that they are typically carried out in a clinical environment and therefore cannot be used in situ in everyday life. This paper presents a smartphone application for the evaluation of fatigue, which can be used daily to track cognitive performance and thereby assess the influence of fatigue.

Relevance:

30.00%

Publisher:

Abstract:

This research presents a fast algorithm for projected support vector machines (PSVM): a basis vector set (BVS) is selected for the kernel-induced feature space, and the training points are projected onto the subspace spanned by the selected BVS. A standard linear support vector machine (SVM) is then produced in the subspace from the projected training points. As the dimension of the subspace is determined by the size of the selected basis vector set, the size of the produced SVM expansion can be specified. A two-stage algorithm is derived which selects and refines the basis vector set, achieving a locally optimal model. The model expansion coefficients and bias are updated recursively as the basis set and support vector set grow and shrink. The condition for a point to be classed as lying outside the span of the current basis vector set, and hence selected as a new basis vector, is derived and embedded in the recursive procedure; this guarantees the linear independence of the produced basis set. The proposed algorithm is tested and compared with an existing sparse primal SVM (SpSVM) and a standard SVM (LibSVM) on seven public benchmark classification problems. Our new algorithm is designed for human activity recognition on smart devices and embedded sensors, where sometimes limited memory and processing resources must be exploited to the full, and where more robust and accurate classification makes for a more satisfied user. Experimental results demonstrate the effectiveness and efficiency of the proposed algorithm. This work builds upon a previously published algorithm created specifically for activity recognition within mobile applications for the EU Haptimap project [1]. The algorithms detailed in this paper are more memory- and resource-efficient, making them suitable for bigger data sets and more easily trained SVMs.
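
The paper's exact recursive update rules are not given in the abstract; the sketch below illustrates the core idea under stated assumptions: greedily admit a training point to the basis only if its feature-space image is not numerically spanned by the current basis, then train a standard linear SVM on the projected coordinates. The residual criterion, kernel, and all parameters are illustrative.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import LinearSVC

def select_basis(X, gamma=0.5, tol=1e-3, max_basis=50):
    """Greedy basis selection: admit x_i if the squared norm of its
    residual after projection onto span(basis) exceeds tol, which also
    guarantees linear independence of the selected set."""
    basis = [0]
    for i in range(1, len(X)):
        Kbb = rbf_kernel(X[basis], X[basis], gamma=gamma)
        kbi = rbf_kernel(X[basis], X[i:i + 1], gamma=gamma)
        resid = (rbf_kernel(X[i:i + 1], gamma=gamma)
                 - kbi.T @ np.linalg.solve(Kbb, kbi)).item()
        if resid > tol:
            basis.append(i)
        if len(basis) == max_basis:
            break
    return basis

def project(X, Xb, gamma=0.5):
    """Coordinates of the projected points: Euclidean inner products in
    this representation equal kernel inner products of the projections."""
    Kbb = rbf_kernel(Xb, Xb, gamma=gamma)
    L = np.linalg.cholesky(Kbb + 1e-10 * np.eye(len(Xb)))
    return np.linalg.solve(L, rbf_kernel(Xb, X, gamma=gamma)).T

# Usage on synthetic data: the subspace dimension (and hence the size of
# the SVM expansion) equals the size of the selected basis set.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = (X[:, 0] * X[:, 1] > 0).astype(int)
b = select_basis(X)
Z = project(X, X[b])
clf = LinearSVC(max_iter=20000).fit(Z, y)   # linear SVM in the subspace
print(len(b), clf.score(Z, y))
```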

Relevance:

30.00%

Publisher:

Abstract:

The environmental quality of land is often assessed by the calculation of threshold values which aim to differentiate between concentrations of elements according to whether the soils are in residential or industrial sites. In Europe, for example, soil guideline values exist for agricultural and grazing land. A threshold is often set to differentiate between concentrations of an element that occur naturally in the soil and concentrations that result from diffuse anthropogenic sources. Regional geochemistry and, in particular, single component geochemical maps are increasingly being used to make these baseline environmental assessments. The key question raised in this paper is whether a geochemical map can provide an accurate interpretation on its own. Implicit is the assumption that single component geochemical maps represent absolute abundances. However, because of the compositional (closed) nature of the data, univariate geochemical maps cannot be compared directly with one another, and any interpretation based on them is vulnerable to spurious correlation problems. What does this mean for soil geochemistry mapping, baseline quality documentation, soil resource assessment or risk evaluation? Despite the limitation of relative abundances, individual raw geochemical maps remain fundamental to several applications, including environmental assessments. However, an element's toxicity is related to its bioavailable concentration, which is lowered if its source is mixed with another source. Elements also interact: under reducing conditions, for example, iron oxides lose their solid state and arsenic bound to them becomes soluble and mobile. Both of these matters may be dealt with more adequately if a single component map is not interpreted in isolation when determining baselines and thresholds. A range of alternative, compositionally compliant representations based on log-ratio and log-contrast approaches are explored to supplement the classical single component maps for environmental assessment. Case study examples are shown based on the Tellus soil geochemical dataset covering Northern Ireland and the results of in vitro oral bioaccessibility testing carried out on a subset of archived Tellus Survey shallow soils following the Unified BARGE (Bioaccessibility Research Group of Europe) Method.
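
As a concrete example of a compositionally compliant representation, the centred log-ratio (clr) transform maps closed compositions into real space, where maps and standard statistics are free of the spurious-correlation problem. The sketch below uses made-up concentrations, not Tellus data.

```python
import numpy as np

def clr(comp):
    """Centred log-ratio: log of each part relative to the geometric
    mean of its row, so the transformed values no longer sum to a
    fixed constant and can be compared across maps."""
    logx = np.log(comp)
    return logx - logx.mean(axis=1, keepdims=True)

# Hypothetical 3-part compositions (rows closed to 1, e.g. element
# fractions in a soil sample); values are purely illustrative.
x = np.array([[0.20, 0.50, 0.30],
              [0.10, 0.70, 0.20]])
print(clr(x))
```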

Relevance:

30.00%

Publisher:

Abstract:

Background: Clostridium difficile (C. difficile) is a leading cause of infectious diarrhoea in hospitals. Sending faecal samples for testing expedites diagnosis and appropriate treatment. Clinical suspicion of C. difficile based on patient history, signs and symptoms is the basis for sampling. Sending faecal samples from patients with diarrhoea ‘just in case’ the patient has C. difficile may be an indication of poor clinical management.

Aim: To evaluate the effectiveness of an intervention by an Infection Prevention and Control Team (IPCT) in reducing inappropriate faecal samples sent for C. difficile testing.

Method: An audit of the number of faecal samples sent for testing before and after the introduction of a decision-making algorithm. The number of samples received in the laboratory was counted retrospectively for 12-week periods before and after the algorithm was introduced.

Findings: There was a statistically significant reduction in the mean number of weekly faecal samples sent after the algorithm was introduced. Results were compared with a similar intervention carried out in 2009, in which the same message was delivered by memorandum; in 2009 the memorandum had no effect on the overall number of weekly samples sent.
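
The audit's counts are not given in the abstract; as a sketch of the before/after comparison it describes, the two 12-week series of weekly counts can be compared with a two-sample t-test. The numbers below are invented for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical weekly sample counts for the 12 weeks before and after
# the algorithm was introduced (illustrative, not the audit's data).
before = np.array([34, 29, 31, 36, 28, 33, 30, 35, 32, 31, 29, 34])
after = np.array([24, 21, 26, 22, 25, 20, 23, 24, 22, 25, 21, 23])

t, p = stats.ttest_ind(before, after)
print(f"mean before={before.mean():.1f}, mean after={after.mean():.1f}, p={p:.4g}")
```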

Conclusion: An algorithm intervention had an effect on the number of faecal samples being sent for C. difficile testing and thus contributed to the effective use of the laboratory service.

Relevance:

30.00%

Publisher:

Abstract:

Seafloor massive sulfide (SMS) mining will likely occur at hydrothermal systems in the near future. Alongside their mineral wealth, SMS deposits also have considerable biological value. Active SMS deposits host endemic hydrothermal vent communities, whilst inactive deposits support communities of deep-water corals and other suspension feeders. Mining activities are expected to remove all large organisms and suitable habitat in the immediate area, putting vent endemic organisms particularly at risk of habitat loss and localised extinction. As part of environmental management strategies designed to mitigate the effects of mining, areas of seabed need to be protected to preserve biodiversity that is lost at the mine site and to preserve communities that support connectivity among populations of vent animals in the surrounding region. These "set-aside" areas need to be biologically similar to the mine site and be suitably connected, mostly by transport of larvae, to neighbouring sites to ensure exchange of genetic material among remaining populations. Establishing suitable set-asides can be a formidable task for environmental managers; however, genetic approaches can aid set-aside identification, suitability assessment and monitoring. There are many genetic tools available, including analysis of mitochondrial DNA (mtDNA) sequences (e.g. COI or other suitable mtDNA genes) and appropriate nuclear DNA markers (e.g. microsatellites, single nucleotide polymorphisms), environmental DNA (eDNA) techniques and microbial metagenomics. When used in concert with traditional biological survey techniques, these tools can help to identify species, assess the genetic connectivity among populations and assess the diversity of communities. How these techniques can be applied to set-aside decision making is discussed, and recommendations are made for the genetic characteristics of set-aside sites. A checklist for environmental regulators forms a guide to aid decision making on the suitability of set-aside design and assessment using genetic tools. This non-technical primer document represents the views of participants in the VentBase 2014 workshop.

Relevance:

30.00%

Publisher:

Abstract:

In order to protect user privacy on mobile devices, an event-driven implicit authentication scheme is proposed in this paper. Several methods of utilizing the scheme to recognize legitimate user behavior are investigated. The investigated methods compute an aggregate score and a threshold in real time to determine the trust level of the current user, using real data derived from user interaction with the device. The proposed scheme is designed to operate completely in the background, require a minimal training period, achieve a high user recognition rate for implicit authentication, and promptly detect abnormal activity that can be used to trigger explicitly authenticated access control. In this paper, we investigate threshold computation through standard-deviation and EWMA (exponentially weighted moving average) based algorithms. The results of extensive experiments on user data collected over several weeks from an Android phone indicate that our proposed approach is feasible and effective for lightweight real-time implicit authentication on mobile smartphones.
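
The paper's scoring features and exact threshold rules are not given in the abstract; the minimal sketch below shows one way an EWMA-based threshold can work: maintain exponentially weighted estimates of the mean and variance of a behaviour score, and flag events that drift more than k standard deviations from that baseline. All parameters and scores are illustrative.

```python
class EwmaAuthenticator:
    """EWMA-based implicit-authentication sketch (illustrative only)."""

    def __init__(self, alpha=0.1, k=3.0):
        self.alpha = alpha          # EWMA smoothing factor
        self.k = k                  # deviation multiplier for the threshold
        self.mean = 0.0
        self.var = 1.0

    def update(self, score: float) -> bool:
        """Return True if the event score looks like the legitimate user,
        then fold the score into the running EWMA statistics."""
        legitimate = abs(score - self.mean) <= self.k * self.var ** 0.5
        diff = score - self.mean
        self.mean += self.alpha * diff
        self.var = (1 - self.alpha) * (self.var + self.alpha * diff * diff)
        return legitimate

# Usage: scores near the baseline pass; a sudden drift trips the check,
# which could then trigger explicit (e.g. PIN-based) authentication.
auth = EwmaAuthenticator()
for s in [0.1, -0.2, 0.3, 0.0, 5.0]:
    print(s, auth.update(s))
```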

Relevance:

30.00%

Publisher:

Abstract:

There is a dearth of evidence on student preferences for computer-based testing versus testing via student response systems for summative assessment in undergraduate education. This quantitative study compared the preference and acceptability of computer-based testing and a student response system for completing multiple choice questions in undergraduate nursing education. After using both computer-based testing and a student response system to complete multiple choice questions, 192 first-year undergraduate nursing students rated their preferences and attitudes towards the two methods. Seventy-four percent felt the student response system was easy to use, while fifty-six percent felt it took more time than computer-based testing to become familiar with. Sixty percent felt computer-based testing was more user-friendly. Seventy percent of students would prefer to take a summative multiple choice question exam via computer-based testing, although fifty percent would be happy to take it using the student response system. These results are useful for undergraduate educators in relation to students' preferences for using computer-based testing or a student response system to undertake a summative multiple choice question exam.

Relevance:

30.00%

Publisher:

Abstract:

By testing a simple asset pricing model of heterogeneous agents to characterize the power-law behavior of the DAX 30 from 1975 to 2007, we provide supporting evidence for empirical findings that investors and fund managers use combinations of fixed and switching strategies based on fundamental and technical analysis when making investment decisions. By conducting econometric analysis via Monte Carlo simulations, we show that the autocorrelation patterns, the estimates of the power-law decay indices, the (FI)GARCH parameters, and the tail index of the model closely match the corresponding estimates for the DAX 30. A mechanism analysis based on the calibrated model provides further insight into the explanatory power of heterogeneous agent models.
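
The estimation machinery behind the tail-index results is not detailed in the abstract; a standard ingredient for characterizing power-law tails in return series is the Hill estimator over the k largest absolute returns, sketched below on synthetic heavy-tailed data (the choice of k and the data are illustrative).

```python
import numpy as np

def hill_tail_index(returns, k=200):
    """Hill estimator: 1 / mean(log(x_(i) / x_(k+1))) over the k largest
    absolute returns; larger values mean thinner power-law tails."""
    x = np.sort(np.abs(returns))[::-1]
    return 1.0 / np.mean(np.log(x[:k]) - np.log(x[k]))

# Usage: Student-t returns with 3 degrees of freedom have tail index 3,
# so the estimate should land near that value.
rng = np.random.default_rng(1)
r = rng.standard_t(df=3, size=100_000)
print(f"estimated tail index: {hill_tail_index(r):.2f}")
```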