12 results for inaccuracy

in Deakin Research Online - Australia


Relevance:

20.00%

Publisher:

Abstract:

Computer simulations were used to test the effect of increasing phylogenetic topological inaccuracy on the results obtained from correlation tests of independent contrasts. Predictably, increasing the number of disruptions in the tree increases the likelihood of significant error in the r values produced and in the statistical conclusions drawn from the analysis. However, the position of the disruption in the tree is important: Disruptions closer to the tips of the tree have a greater effect than do disruptions that are close to the root of the tree. Independent contrasts derived from inaccurate topologies are more likely to lead to erroneous conclusions when there is a true significant relationship between the variables being tested (i.e., they tend to be conservative). The results also suggest that random phylogenies perform no better than nonphylogenetic analyses and, under certain conditions, may perform even worse than analyses using raw species data. Therefore, the use of random phylogenies is not beneficial in the absence of knowledge of the true phylogeny.
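
As a point of reference for the method being tested, the sketch below (not the authors' simulation code) shows how Felsenstein's standardized independent contrasts are computed for a single trait on a small, hypothetical binary tree; contrasts computed this way for two traits are then correlated through the origin. The tree shape, trait values and branch lengths are made up for illustration.

```python
from math import sqrt

def contrasts(node, values, lengths):
    """node: tip name (str) or a (left, right) tuple for an internal node.
    values: {tip: trait value}; lengths: {tip or internal node: branch length}.
    Returns (node value, adjusted branch length, list of standardized contrasts)."""
    if isinstance(node, str):                          # tip: value known, no contrast
        return values[node], lengths[node], []
    left, right = node
    xl, vl, cl = contrasts(left, values, lengths)
    xr, vr, cr = contrasts(right, values, lengths)
    contrast = (xl - xr) / sqrt(vl + vr)               # standardized contrast at this node
    x = (xl / vl + xr / vr) / (1 / vl + 1 / vr)        # weighted estimate for the ancestor
    v = lengths.get(node, 0.0) + vl * vr / (vl + vr)   # its branch length plus a pooling term
    return x, v, cl + cr + [contrast]

# Hypothetical 4-taxon tree ((A,B),(C,D)) with illustrative values and branch lengths
tree = (("A", "B"), ("C", "D"))
vals = {"A": 2.0, "B": 3.5, "C": 1.0, "D": 4.0}
brlen = {"A": 1.0, "B": 1.0, "C": 2.0, "D": 2.0, ("A", "B"): 1.5, ("C", "D"): 1.5}
print(contrasts(tree, vals, brlen)[2])                 # three standardized contrasts
```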

Relevance:

20.00%

Publisher:

Abstract:

Body image research with young children has typically examined their body satisfaction and overlooked developmental theories pertaining to their emergent body knowledge. Although existing research suggests that preschoolers do demonstrate anti-fat attitudes and weight-related stigmatisation, body dissatisfaction can be difficult to assess in preschoolers due to developmental differences in their ability to (i) perceive their actual body size accurately and (ii) make comparisons with a hypothetical ideal. We review current findings on the attitudinal component of body image in preschoolers, together with findings on the accuracy of their body size perceptions and their emergent body awareness abilities. Such an integration of the cognitive development literature is key to identifying when and how young children understand their physical size and shape; this in turn is critical for informing methodological design targeted at assessing body dissatisfaction and anti-fat attitudes in early childhood.

Relevance:

10.00%

Publisher:

Abstract:

The movement of chemicals through the soil to groundwater, or their discharge to surface waters, represents a degradation of these resources. In many cases, serious human and stock health implications are associated with this form of pollution. The chemicals of interest include nutrients, pesticides, salts, and industrial wastes. Recent studies have shown that current models and methods do not adequately describe the leaching of nutrients through soil, often underestimating the risk of groundwater contamination by surface-applied chemicals and overestimating the concentration of resident solutes. This inaccuracy results primarily from ignoring soil structure and nonequilibrium between soil constituents, water, and solutes. A multiple sample percolation system (MSPS), consisting of 25 individual collection wells, was constructed to study the effects of localized soil heterogeneities on the transport of nutrients (NO₃⁻, Cl⁻, PO₄³⁻) in the vadose zone of a predominantly clay agricultural soil. Very significant variations in drainage patterns across a small spatial scale were observed (one-way ANOVA, p < 0.001), indicating considerable heterogeneity in water flow patterns and nutrient leaching. Using data collected from the multiple sample percolation experiments, this paper compares the performance of two mathematical models for predicting solute transport: the advection-dispersion model with a reaction term (ADR) and a two-region preferential flow model (TRM) suitable for modelling nonequilibrium transport. These results have implications for modelling solute transport and predicting nutrient loading on a larger scale.
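
As context for the ADR model mentioned above, the following sketch solves a one-dimensional advection-dispersion equation with a first-order reaction term using an explicit finite-difference scheme. The parameter values, grid and boundary conditions are illustrative assumptions only, not those calibrated against the MSPS data.

```python
import numpy as np

# 1-D advection-dispersion-reaction (ADR) model: dC/dt = D d2C/dx2 - v dC/dx - k C
D, v, k = 0.05, 0.2, 0.01        # dispersion (m^2/day), pore-water velocity (m/day), decay (1/day)
L, nx = 1.0, 51                  # 1 m soil profile discretised into 51 nodes
dt, nt = 0.002, 5000             # time step (days) chosen so D*dt/dx^2 <= 0.5 (explicit stability)
dx = L / (nx - 1)

C = np.zeros(nx)
C[0] = 1.0                       # constant relative concentration at the soil surface

for _ in range(nt):
    Cn = C.copy()
    # interior nodes: central-difference dispersion, upwind advection, first-order decay
    C[1:-1] = (Cn[1:-1]
               + dt * D * (Cn[2:] - 2 * Cn[1:-1] + Cn[:-2]) / dx**2
               - dt * v * (Cn[1:-1] - Cn[:-2]) / dx
               - dt * k * Cn[1:-1])
    C[0] = 1.0                   # fixed inlet concentration
    C[-1] = C[-2]                # zero-gradient outlet

print(np.round(C[::10], 3))      # relative concentration at 0.2 m depth intervals
```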

Relevance:

10.00%

Publisher:

Abstract:

A question frequently asked in multi-agent systems (MASs) concerns the efficient search for suitable agents to solve a specific problem. To answer this question, different types of middle agents are usually employed, and their performance relies heavily on the matchmaking algorithms used. Matchmaking is the process of finding an appropriate provider for a requester through a middle agent. There has been substantial work on matchmaking in different kinds of middle agents. To our knowledge, however, almost all matchmaking algorithms currently in use overlook one point: matching is based only on the advertised capabilities of provider agents, and the actual performance of provider agents in accomplishing delegated tasks is not considered at all. This leads to inaccurate matchmaking outcomes and to essentially random selection among provider agents with the same advertised capabilities, since the quality of service varies from one provider agent to another even when they claim the same capabilities. To this end, it is argued that the practical performance of service provider agents has a significant impact on the matchmaking outcomes of middle agents. An improvement to matchmaking algorithms is proposed that enables them to take into account the track records of agents in accomplishing delegated tasks. How to represent, accumulate, and use track records, and how to assign their initial values in the algorithm, are discussed. A prototype has been built to verify the algorithm. With the improved algorithm, the matchmaking outcomes are more accurate and reasonable.
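
A minimal sketch of the idea, assuming a hypothetical representation of capabilities as sets and of a track record as smoothed success/failure counts (the paper's own representation may differ): providers are first filtered on advertised capabilities and then ranked by their record of actually accomplishing delegated tasks.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    capabilities: set
    successes: int = 0     # track record: delegated tasks accomplished
    failures: int = 0      # delegated tasks failed

    def track_record(self) -> float:
        # Laplace-smoothed success rate; 0.5 serves as the assumed initial value
        return (self.successes + 1) / (self.successes + self.failures + 2)

def match(request: set, providers: list) -> list:
    """Filter on advertised capabilities, then rank by track record."""
    capable = [p for p in providers if request <= p.capabilities]
    return sorted(capable, key=lambda p: p.track_record(), reverse=True)

agents = [Provider("A", {"translate", "summarise"}, successes=8, failures=2),
          Provider("B", {"translate", "summarise"}, successes=3, failures=7)]
print([p.name for p in match({"translate"}, agents)])   # -> ['A', 'B']
```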


Relevance:

10.00%

Publisher:

Abstract:

In the development of a web-based information system such as a demolition material management system, a large amount of diverse project information must be acquired from particular users working on various computer platforms. This is difficult to handle with limited HTTP form submission, which can lead to inaccurate information and an inefficient system overall. This paper describes a dynamic, distributed multimedia data acquisition mechanism with a web-based graphical user interface, which accepts users' drawings and retrieval information from the canvas and stores the multimedia data on a server for further use. The techniques and principles needed to construct such a multimedia data acquisition tool are addressed in detail. The application of this distributed multimedia tool in developing a web-based demolition material management system is also described.

Relevance:

10.00%

Publisher:

Abstract:

We present an independent evaluation of six recent hidden Markov model (HMM) genefinders. Each was tested on a new dataset (FSH298); the results showed no dramatic improvement over the genefinders tested five years ago. In addition, we introduce a comprehensive taxonomy of predicted exons and classify each resulting exon accordingly. These results are useful in measuring, with finer granularity, the effects of changes in a genefinder. We present an analysis of these results and identify four patterns of inaccuracy common to all HMM-based results.
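
To make the idea of an exon taxonomy concrete, here is a small sketch that classifies a predicted exon against an annotated set by boundary agreement. The category names are illustrative assumptions and are not necessarily the four inaccuracy patterns identified in the paper.

```python
def classify_exon(pred, annotated):
    """pred, annotated: (start, end) intervals on the same strand, inclusive coordinates."""
    for start, end in annotated:
        if pred == (start, end):
            return "exact"                       # both boundaries correct
        if pred[0] <= end and pred[1] >= start:  # overlaps this annotated exon
            if pred[0] == start:
                return "5prime_correct"          # only the upstream boundary matches
            if pred[1] == end:
                return "3prime_correct"          # only the downstream boundary matches
            return "overlap"                     # overlaps but neither boundary matches
    return "wrong"                               # no overlap with any annotated exon

annotation = [(100, 250), (400, 520)]
for p in [(100, 250), (100, 240), (150, 300), (600, 700)]:
    print(p, classify_exon(p, annotation))
```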

Relevance:

10.00%

Publisher:

Abstract:

There is an increasing realisation of the importance of community- or volunteer-collected data for management programs that are otherwise limited by the availability of funds or resources. However, there are concerns regarding the reliability of scientific data collected by inexperienced people. We investigated the potential for community-based monitoring in Victoria's newly established system of Marine Protected Areas. The main objectives of the study were to 1) develop a template for the scientific monitoring of marine habitats suitable for community groups, 2) assess the quality and integrity of data collected by community volunteers, and 3) determine a sustainable model for ongoing community participation in monitoring marine habitats. Three different habitats (subtidal, intertidal, and seagrass) were investigated, and data collected by volunteers across these habitats were compared to those collected by scientists. The reliability of data collected by volunteers depended on the habitat type and the type of measurement the volunteers were required to make. Qualitative estimates made by volunteers were highly variable across all three habitats, whereas quantitative data collection was more consistent. Subtidal monitoring showed the greatest inaccuracy in data collection, whereas intertidal reef monitoring was the most reliable. The sustainability of community-based monitoring programs depends on adequate training for volunteers and the development of partnerships to foster greater community engagement.

Relevance:

10.00%

Publisher:

Abstract:

A review of the literature established that localization acuity measured under monaural listening conditions was directly related to various methodological considerations. These included the method of attenuation, the segment of auditory space where monaural localization was measured, and the presence or absence of head movements. An extensive measurement of monaural localization was made with due consideration of these factors, allowing a more comprehensive evaluation of monaural acuity and the underlying processes involved. Establishing a monaural condition depends both on the attenuation level of the occluded ear and on the signal level; the two are clearly inter-related, since the attenuation level of the occluded ear sets the maximum level of the stimulus. In a series of experiments it was established that there was a minimum signal level for accurate localization. Testing on both sides of the head revealed three regions of monaural localization acuity. The first was about the interaural axis on the side of the open (ipsilateral) ear, where monaural localization was relatively accurate; the second was a region either side of the median sagittal plane (MSP), where there was some loss of localization; and the third was about the interaural axis on the contralateral side, where virtually no monaural localization ability existed. In the final series of experiments it was established that head movements allowed subjects to extend the accuracy of the first region by minimizing the distance between the sound and the ipsilateral interaural axis, thus compensating for the loss of localization ability in the second and third regions. This was determined from changes recorded in the error data, and also from the extent and direction of measured head movements. The results of this series of experiments demonstrated the relationship between spectral cues and monaural localization. Firstly, monaural localization was not possible in the absence of accurate spectral information: large errors were observed in the third region, where the high frequencies were blocked by the head, and in all regions at low signal levels, where the high frequencies fell below threshold. Secondly, the inaccuracy of the second region, due to the loss of information from the second pinna, suggested a binaural component to pinna cues. It seems that, for sounds in this region, the spectral modifications from both pinnae are processed to determine a sound's location in space.

Relevance:

10.00%

Publisher:

Abstract:

Aims To present the ADKnowl measure of diabetes-related knowledge and evaluate its use in identifying the nature and extent of patient and health professional knowledge deficits.

Method The ADKnowl was used in a large-scale study of 789 patients (451 treated with insulin and 338 treated with tablets and/or diet) attending for annual review at one of two hospital out-patient diabetes clinics.

Results Knowledge deficits were apparent in the patients. For example, 57% did not recognize the inaccuracy of the statement 'fresh fruit can be eaten freely with little effect on blood glucose levels'. Seventy-five percent of patients did not know that it is advisable to trim toenails to the shape of the toe. Knowledge deficits were identified in many other areas of diabetes management, e.g. prevention of hypoglycaemia and avoidance of ketoacidosis. Sixteen health professionals at the clinics answered the same items. Contrary to recommendations, 25% of health professionals thought that fresh fruit could be eaten freely. Seventy-five percent of health professionals did not know the current recommendations for trimming toenails. As expected, HbA1c correlated with scores from two specific items, while it did not correlate with the summed ADKnowl score.

Conclusions Patient knowledge deficits were identified. Some specific knowledge deficits among health professionals may be the cause of some patient knowledge deficits. The ADKnowl is a useful tool in assessing both patient and health professional knowledge deficits and is available for use in a context of continuing evaluation.

Relevance:

10.00%

Publisher:

Abstract:

Uncertainty in data affects the decision-making process, as it increases both the risk and the cost of a decision. One of the challenges in minimizing the impact of bounded uncertainty on any scheduling algorithm is the lack of information: only an upper bound and a lower bound are provided, without any known probability or membership function. By contrast, probabilistic uncertainty can draw on probability distributions and fuzzy uncertainty on membership functions. McNaughton's algorithm is used to find the optimal schedule that minimizes the makespan while allowing preemption of tasks. The challenge here is the bounded inaccuracy of the algorithm's input parameters, known as bounded uncertain data. This research uses interval programming to minimise the impact of bounded uncertainty in the input parameters on McNaughton's algorithm; it reduces the uncertainty of the cost-function estimate and improves its optimality. The work is based on the hypothesis that performing the calculations on interval values and then approximating the end result produces more accurate results than approximating each interval input first and then performing numerical calculations.
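
A minimal sketch of the underlying computation, with hypothetical task durations: McNaughton's rule gives the optimal preemptive makespan on m identical machines as max(longest task, total work / m), and because this expression is monotone in every processing time, interval bounds can be propagated directly, with only the final result approximated.

```python
def mcnaughton_makespan(times, m):
    """Optimal makespan for preemptive scheduling of independent tasks on m
    identical machines: max(longest task, total work / m)."""
    return max(max(times), sum(times) / m)

def interval_makespan(intervals, m):
    """Propagate [lower, upper] processing-time bounds through McNaughton's formula."""
    lo = mcnaughton_makespan([a for a, _ in intervals], m)
    hi = mcnaughton_makespan([b for _, b in intervals], m)
    return lo, hi

tasks = [(2.0, 3.0), (4.0, 5.0), (1.0, 2.0), (3.0, 4.0)]   # bounded uncertain durations
lo, hi = interval_makespan(tasks, m=2)
print(lo, hi, (lo + hi) / 2)   # interval bounds and a midpoint approximation
```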

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: The Diabetes Education and Self-Management for Ongoing and Newly Diagnosed (DESMOND) Self-monitoring Trial reported that people with newly diagnosed type 2 diabetes attending community-based structured education and randomized to self-monitoring of blood glucose (SMBG) or urine monitoring had comparable improvements in biomedical outcomes, but differed in satisfaction with, and continued use of, their monitoring method, in well-being, and in perceived threat from diabetes. OBJECTIVES: To explore experiences of SMBG and urine monitoring following structured education, specifically addressing the perceived usefulness of each monitoring method and the associated well-being. METHODS: Qualitative semi-structured interviews with 18 adults with newly diagnosed type 2 diabetes participating in the DESMOND Self-monitoring Trial (SMBG, N = 10; urine monitoring, N = 8), approximately 12 months into the trial. Analysis was informed by the constant comparative approach. RESULTS: Interviewees reported SMBG as accurate, convenient and useful. Declining use was explained by having established a pattern of managing blood glucose with less frequent monitoring, or by a lack of feedback or encouragement from health care professionals. Many initially positive views of urine monitoring changed progressively because of perceived inaccuracy, leading some to switch to SMBG. Perceiving diabetes as less serious was attributable to a lack of symptoms, treatment with diet alone and, in the urine-monitoring group, consistently negative readings. Urine monitoring also provided less visible evidence of diabetes and of the effect of behaviour on glucose. CONCLUSIONS: The findings highlight the importance for professionals of considering patients' preferences, including how these change over time, when using self-monitoring technologies to support the self-care behaviours of people with type 2 diabetes.

Relevance:

10.00%

Publisher:

Abstract:

Many scientific workflows are data intensive, with large volumes of intermediate data generated during their execution. Some valuable intermediate data need to be stored for sharing or reuse. Traditionally, they are stored selectively according to the system's storage capacity, with the selection made manually. As doing science in the cloud has become popular, more intermediate data can be stored in scientific cloud workflows under a pay-for-use model. In this paper, we build an intermediate data dependency graph (IDG) from the data provenance in scientific workflows. With the IDG, deleted intermediate data can be regenerated, and on this basis we develop a novel intermediate data storage strategy that can reduce the cost of scientific cloud workflow systems by automatically storing appropriate intermediate data sets with one cloud service provider. The strategy has significant research merits: it achieves a cost-effective trade-off between computation cost and storage cost, and it is not strongly affected by inaccuracy in forecasting the usage of data sets. The strategy also takes users' tolerance of data access delay into consideration. For evaluation, we apply the strategy, using Amazon's cost model, to both general (random) workflows and a specific astrophysics pulsar-searching workflow. The results show that our strategy can significantly reduce the overall cost of scientific cloud workflow execution.
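
The core trade-off can be illustrated with a simplified, single-data-set sketch (the prices and the decision rule are illustrative assumptions; the paper's IDG-based strategy additionally accounts for regenerating a deleted data set from its stored predecessors and for users' tolerance of access delay):

```python
def should_store(size_gb, regen_cpu_hours, usage_per_month,
                 storage_price=0.023, cpu_price=0.10):
    """Store the intermediate data set if keeping it is cheaper per month
    than regenerating it every time it is used.
    storage_price: $/GB-month (assumed); cpu_price: $/CPU-hour (assumed)."""
    monthly_storage_cost = size_gb * storage_price
    monthly_regen_cost = regen_cpu_hours * cpu_price * usage_per_month
    return monthly_storage_cost <= monthly_regen_cost

# A large, rarely used data set is cheaper to delete and regenerate on demand;
# a small, frequently used one is cheaper to keep stored.
print(should_store(size_gb=500, regen_cpu_hours=2, usage_per_month=0.1))  # False
print(should_store(size_gb=5, regen_cpu_hours=2, usage_per_month=4))      # True
```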