Abstract:
The distribution of the flux of carbon-bearing cations over nanopatterned surfaces with conductive nanotips and nonconductive nanoislands is simulated using the Monte Carlo technique. It is shown that the ion current is focused onto nanotip surfaces when the negative substrate bias is low, and is only slightly perturbed at higher substrate biases. In the low-bias case, the mean horizontal ion displacement caused by the nanotip electric field exceeds 10 nm; at higher substrate biases, it drops to about 2 nm. In the nonconductive nanopattern case, the ion current distribution is highly nonuniform, with distinctive zones of depleted current density around the nanoislands. The simulation results suggest efficient means to control ion fluxes in plasma-aided nanofabrication of ordered nanopatterns, such as nanotip microemitter structures and quantum dot or nanoparticle arrays. © World Scientific Publishing Company.
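The reported drop in horizontal displacement with increasing bias follows from simple sheath kinematics: a stronger vertical field shortens the ion's transit time and thereby the lateral kick it accumulates from the tip field. A constant-field sketch of this trend (illustrative only, not the paper's Monte Carlo model; all symbols are assumptions):

```python
def lateral_displacement(h, e_r, e_z):
    """Lateral displacement of an ion crossing a sheath of height h under a
    vertical field e_z while feeling a small constant lateral field e_r.
    Transit time t = sqrt(2*h*m/(q*e_z)); lateral kick
    d = 0.5*(q*e_r/m)*t**2 = h*e_r/e_z  (mass and charge cancel)."""
    return h * e_r / e_z

# doubling the vertical (bias-driven) field halves the lateral pull
low_bias = lateral_displacement(1e-6, 1e4, 1e6)
high_bias = lateral_displacement(1e-6, 1e4, 2e6)
```

This first-order relation reproduces the qualitative behaviour in the abstract: focusing toward the tips weakens as the substrate bias grows.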
Abstract:
NLS is a stream cipher that was submitted to the eSTREAM project. Cho and Pieprzyk presented a linear distinguishing attack against NLS, called the Crossword Puzzle (CP) attack. NLSv2 is a tweaked version of NLS aimed mainly at avoiding the CP attack. In this paper, a new distinguishing attack against NLSv2 is presented. The attack exploits the high correlation among neighbouring bits of the cipher. The paper first shows that modular addition preserves pairwise correlations, as demonstrated by the existence of linear approximations with large biases. Next, it shows how to combine these results with the high correlation between bits 29 and 30 of the S-box to obtain a distinguisher whose bias is around 2^−37. Consequently, we claim that NLSv2 is distinguishable from a random cipher after observing around 2^74 keystream words.
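The link between the reported bias and the data complexity follows a standard rule of thumb: a distinguisher with bias ε needs on the order of ε⁻² samples, and independent linear approximations combine via the piling-up lemma. A minimal sketch in log₂ arithmetic (function names are illustrative, not from the paper):

```python
def combined_bias_log2(log2_biases):
    """Piling-up lemma: the XOR of n independent approximations with
    biases eps_i has bias 2**(n-1) * prod(eps_i); in log2 form this is
    (n - 1) + sum(log2(eps_i))."""
    n = len(log2_biases)
    return (n - 1) + sum(log2_biases)

def samples_needed_log2(log2_bias):
    # a distinguisher with bias eps needs on the order of eps**-2 samples
    return -2 * log2_bias

# the reported NLSv2 bias of 2**-37 gives ~2**74 keystream words
print(samples_needed_log2(-37))  # 74
```

For example, two independent approximations each of bias 2⁻³ combine to a bias of 2·(2⁻³)² = 2⁻⁵, matching the log₂ form above.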
Abstract:
Urban areas are growing unsustainably around the world; however, the growth patterns and their associated drivers vary between contexts. As a result, research has highlighted the need to adopt case-study-based approaches to stimulate the development of new theoretical understandings. Using land-cover data sets derived from Landsat images (30 m × 30 m), this research identifies both patterns and drivers of urban growth in a period (1991-2001) when a number of policy acts aimed at fostering smart growth were enacted in Brisbane, Australia. A linear multiple regression model was estimated, using the proportion of land converted from non-built-up (1991) to built-up usage (2001) within a suburb as the dependent variable, to identify significant drivers of land-cover change. In addition, a hot spot analysis was conducted to identify spatial biases of land-cover change, if any. Results show that the built-up areas increased by 1.34% every year. About 19.56% of the non-built-up land in 1991 was converted into built-up land by 2001. This conversion pattern was significantly biased toward the northernmost and southernmost suburbs of the city. As the regression analysis shows, this is because these suburbs experienced a higher rate of population growth and had habitable green-field sites available on relatively flat land. These findings suggest that the policy interventions undertaken in this period were not as effective in promoting sustainable changes in the environment as intended.
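As a hypothetical illustration of the estimation approach (variable names, coefficients, and data here are invented, not the study's), an ordinary-least-squares model of suburb-level conversion proportions can be sketched as:

```python
import numpy as np

# Synthetic suburb-level data: proportion of non-built-up land converted
# to built-up use, driven by population growth and green-field availability.
rng = np.random.default_rng(0)
n_suburbs = 50
pop_growth = rng.uniform(0, 5, n_suburbs)    # % per year (assumed driver)
green_field = rng.uniform(0, 1, n_suburbs)   # share of flat green-field land
noise = rng.normal(0, 0.02, n_suburbs)
converted = 0.05 + 0.03 * pop_growth + 0.10 * green_field + noise

# Design matrix with an intercept column; solve by least squares.
X = np.column_stack([np.ones(n_suburbs), pop_growth, green_field])
coef, *_ = np.linalg.lstsq(X, converted, rcond=None)
print(coef)  # approximately recovers the intercept and slopes
```

A positive, significant slope on a driver (here, population growth or green-field share) is the kind of evidence the study uses to identify drivers of conversion.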
Abstract:
In explaining how communication quality predicts TMS in multidisciplinary teams, we drew on the social identity approach to investigate the mediating role of team identification and the moderating role of professional identification. Recognizing that professional identification could trigger intergroup biases among professional subgroups, or alternatively, could bring resources to the team, we explored the potential moderating role of professional identification in the relationship between team identification and TMS. Using data collected from 882 healthcare personnel working in 126 multidisciplinary hospital teams, results supported our hypothesis that perceived communication quality predicted TMS through team identification. Furthermore, findings provided support for a resource view of professional subgroup identities with results indicating that high levels of professional identification compensated for low levels of team identification in predicting TMS. We provide recommendations on how social identities may be used to promote TMS in multidisciplinary teams.
Abstract:
The ambiguity acceptance test is an important quality control procedure in high-precision GNSS data processing. Although ambiguity acceptance test methods have been extensively investigated, their threshold determination method is still not well understood. Currently, the threshold is determined with either the empirical approach or the fixed failure rate (FF-) approach. The empirical approach is simple but lacks a theoretical basis, while the FF-approach is theoretically rigorous but computationally demanding. Hence, the key issue in threshold determination is how to determine the threshold efficiently and in a reasonable way. In this study, a new threshold determination method, named the threshold function method, is proposed to reduce the complexity of the FF-approach. The threshold function method simplifies the FF-approach through a modeling procedure and an approximation procedure. The modeling procedure uses a rational function model to describe the relationship between the FF-difference test threshold and the integer least-squares (ILS) success rate. The approximation procedure replaces the ILS success rate with the easy-to-calculate integer bootstrapping (IB) success rate. The corresponding modeling error and approximation error are analysed with simulated data to avoid nuisance biases and unrealistic stochastic model impacts. The results indicate that the proposed method greatly simplifies the FF-approach without introducing significant modeling error. The threshold function method makes fixed failure rate threshold determination feasible for real-time applications.
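The IB success rate used in the approximation step has a cheap closed form (the product-of-conditional-probabilities bound from the bootstrapping literature), which is what makes the substitution attractive. The rational threshold function below is only an assumed illustrative form, since the fitted coefficients are not given in the abstract:

```python
import math

def ib_success_rate(sigmas):
    """Integer bootstrapping success rate:
    P_IB = prod_i [ 2*Phi(1/(2*sigma_i)) - 1 ],
    where sigma_i are the conditional std devs of the (decorrelated)
    ambiguities and Phi is the standard normal CDF."""
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    p = 1.0
    for s in sigmas:
        p *= 2.0 * phi(1.0 / (2.0 * s)) - 1.0
    return p

def threshold_from_success_rate(p_s, a, b1):
    # Illustrative rational-function threshold model (form and coefficients
    # assumed here; in the paper they would be fitted offline for a fixed
    # failure rate): mu(p) = (a0 + a1*p) / (1 + b1*p)
    return (a[0] + a[1] * p_s) / (1.0 + b1 * p_s)
```

Smaller conditional standard deviations give a success rate closer to 1, and adding ambiguities can only lower the product, both of which the test below checks.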
Abstract:
Automated remote ultrasound detectors allow large amounts of data on bat presence and activity to be collected. Processing such data involves identifying bat species from their echolocation calls. Automated species identification has the potential to provide more consistent, predictable, and potentially higher levels of accuracy than identification by humans. In contrast, identification by humans permits flexibility and intelligence in identification, as well as the incorporation of features and patterns that may be difficult to quantify. We compared humans with artificial neural networks (ANNs) in their ability to classify short recordings of bat echolocation calls of variable signal-to-noise ratios; these sequences are typical of those obtained from remote automated recording systems that are often used in large-scale ecological studies. We presented 45 recordings (1–4 calls each) produced by known species of bats to ANNs and to 26 human participants with 1 month to 23 years of experience in acoustic identification of bats. Humans correctly classified 86% of recordings to genus and 56% to species; ANNs correctly identified 92% and 62%, respectively. There was no significant difference between the performance of ANNs and that of humans, but ANNs performed better than about 75% of humans. There was little relationship between the experience of the human participants and their classification rate; however, humans with <1 year of experience performed worse than others. Currently, identification of bat echolocation calls by humans is suitable for ecological research, after careful consideration of biases. However, improvements to ANNs and the data on which they are trained may in future raise their performance beyond that demonstrated by humans.
Abstract:
The goal of this study was to describe researchers' experiences in submitting ethics proposals focused on older adult populations, including studies with persons with dementia, to ethical review boards. Ethical approval was granted for an online survey. Researchers were recruited via listservs and snowballing techniques. Participants included 157 persons (73% female) from Australia and the United States, with a mean age of 46 (±13). Six main issues were encountered by researchers who participated in this survey. In descending order, these included questions regarding: informed consent and information requirements (61.1%), participants' vulnerability, particularly for those with cognitive impairments (58.6%), participant burden (44.6%), data access (29.3%), adverse effects of data collection/intervention (26.8%), and study methodology (25.5%). An inductive content analysis of responses revealed a range of encounters with ethical review panels spanning positive, negative, and neutral experiences. Concerns voiced about ethical review boards included committees being overly focused on legal risk, as well as not always hearing the voice of older research participants, both potential and actual. Respondents noted an inability to move forward on studies, as well as the loss of researchers and participant groups from gerontological and clinical research, as a result of negative interactions with ethics committees. Positive interactions with the committees reinforced researchers' need to carefully construct their research approaches, with persons with dementia in particular. Suggested guidelines for committees dealing with ethics applications involving older adults include self-reflecting on potential biases and stereotypes, and seeking further clarification and information from gerontological researchers before arriving at decisions.
Abstract:
In recent years, there has been a rise in the number of people seeking asylum in Australia, resulting in over-crowded detention centres in various parts of the country. Appropriate management and assistance of asylum seekers has been an issue of major socio-political concern. In mid-2012, the Australian ruling government introduced a 'first of its kind' community placement initiative, which involved relocating low-risk asylum seekers from detention centres to the homes of Australian families who volunteered for the program. The present study investigated host families' motivations for volunteering for this scheme and their resulting experiences. Twenty-four men and women from all over Australia were interviewed in person or over the telephone. Consistent with theoretical frameworks of altruism, acculturation, and intergroup contact, thematic analysis indicated that participants' interest in diversity and humanitarian issues was a major factor that motivated them to host asylum seekers. Language and cultural barriers were reported as challenges, but generally, participants found the experience positive and rewarding. The initiative was regarded as an excellent avenue for learning about new cultures. The hosts played a strong role in promoting the English language proficiency and intercultural settlement of the asylum seekers. The scheme was considered one way of defusing the fear of and biases against asylum seekers prevalent in the Australian community at large. Participants also provided suggestions to improve the scheme.
Abstract:
Alignment-free methods, in which shared properties of sub-sequences (e.g. identity or match length) are extracted and used to compute a distance matrix, have recently been explored for phylogenetic inference. However, the scalability and robustness of these methods to key evolutionary processes remain to be investigated. Here, using simulated sequence sets of various sizes in both nucleotides and amino acids, we systematically assess the accuracy of phylogenetic inference using an alignment-free approach, based on D2 statistics, under different evolutionary scenarios. We find that, compared to a multiple sequence alignment approach, D2 methods are more robust against among-site rate heterogeneity, compositional biases, genetic rearrangements and insertions/deletions, but are more sensitive to recent sequence divergence and sequence truncation. Across diverse empirical datasets, the alignment-free methods perform well for sequences with low divergence, at greater computational speed. Our findings provide strong evidence for the scalability and the potential use of alignment-free methods in large-scale phylogenomics.
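The basic D2 statistic at the core of this family of methods is simply the inner product of k-mer count vectors of two sequences. A minimal sketch (normalised variants such as d2S and d2*, which correct for background composition, are omitted here; names are illustrative):

```python
from collections import Counter

def kmer_counts(seq, k):
    """Count all overlapping k-mers in a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def d2(seq_a, seq_b, k=3):
    """D2 statistic: inner product of the k-mer count vectors of two
    sequences; only k-mers present in both contribute."""
    ca, cb = kmer_counts(seq_a, k), kmer_counts(seq_b, k)
    return sum(ca[w] * cb[w] for w in ca.keys() & cb.keys())
```

A distance matrix for tree inference is then derived from pairwise D2 values (higher shared k-mer content implying lower distance), avoiding multiple sequence alignment entirely.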
Abstract:
The 2010 biodiversity target agreed by signatories to the Convention on Biological Diversity directed the attention of conservation professionals toward the development of indicators with which to measure changes in biological diversity at the global scale. We considered why global biodiversity indicators are needed, what characteristics successful global indicators have, and how existing indicators perform. Because monitoring could absorb a large proportion of funds available for conservation, we believe indicators should be linked explicitly to monitoring objectives, and decisions about which monitoring schemes deserve funding should be informed by predictions of the value of such schemes to decision making. We suggest that raising awareness among the public and policy makers, auditing management actions, and informing policy choices are the most important global monitoring objectives. Using four well-developed indicators of biological diversity (extent of forests, coverage of protected areas, Living Planet Index, Red List Index) as examples, we analyzed the characteristics needed for indicators to meet these objectives. We recommend that conservation professionals improve on existing indicators by eliminating spatial biases in data availability, filling gaps in information about ecosystems other than forests, and improving understanding of the way indicators respond to policy changes. Monitoring is not an end in itself, and we believe it is vital that the ultimate objectives of global monitoring of biological diversity inform the development of new indicators. ©2010 Society for Conservation Biology.
Abstract:
Stochastic modelling is critical in GNSS data processing. Currently, GNSS data processing commonly relies on an empirical stochastic model, which may not reflect the actual data quality or noise characteristics. This paper examines real-time GNSS observation noise estimation methods that determine the observation variance from a single receiver's data stream. The methods involve three steps: forming a linear combination, handling the ionosphere and ambiguity biases, and estimating the variance. Two distinct approaches are applied to overcome the ionosphere and ambiguity biases: the time-differenced method and the polynomial prediction method. The real-time variance estimation methods are compared with the zero-baseline and short-baseline methods. The proposed method requires only single-receiver observations and is thus applicable to both differenced and un-differenced data processing modes. However, the methods may be limited to normal ionospheric conditions and to GNSS receivers with low noise autocorrelation. Experimental results also indicate that the proposed method can yield more realistic parameter precision.
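The time-differenced idea can be sketched as follows: differencing consecutive epochs of a detrended (e.g. geometry-free) observation series cancels the slowly varying ionospheric bias and the constant ambiguity, leaving white noise of variance 2σ², from which σ is recovered. A minimal sketch under those assumptions (illustrative, not the paper's implementation):

```python
import random
import statistics

def noise_std_time_diff(obs):
    """Estimate the per-epoch observation noise std dev by time-differencing.
    Differencing consecutive epochs cancels constant biases (ambiguity) and,
    to first order, slowly varying ones (ionosphere), leaving a series with
    variance 2*sigma**2, so sigma**2 ~ var(diffs)/2. Assumes low receiver
    noise autocorrelation."""
    diffs = [b - a for a, b in zip(obs, obs[1:])]
    return (statistics.pvariance(diffs) / 2.0) ** 0.5

# usage sketch on simulated data: a slow linear drift plus 3 mm white noise
random.seed(1)
obs = [0.01 * i + random.gauss(0.0, 0.003) for i in range(2000)]
est = noise_std_time_diff(obs)
```

Note that a constant drift only shifts the mean of the differences, which the variance estimate ignores, so the white-noise level is recovered even in the presence of slow trends.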
Abstract:
A research protocol for our prospective study of research funding. How much research funding improves research productivity is a question with relevance for all funding agencies and governments around the world. Previous studies have used observational data comparing productivity between winners of different amounts of funding, but researchers who win a lot of funding are usually very different from those who win little or none. This difference creates potentially serious confounding, which biases any estimate of the effect of funding based on observational data that simply compares research output for those who did and did not win funding. This means we do not currently know the return on investment for our research dollars, of which billions are invested around the world every year. By using a study design that incorporates randomisation, this will be the world's first unbiased study of the impact of researcher funding.
Abstract:
Experts are increasingly being called upon to quantify their knowledge, particularly in situations where data are not yet available or are of limited relevance. In many cases this involves asking experts to estimate probabilities. For example, experts in ecology or related fields might be called upon to estimate probabilities of the incidence or abundance of species, and how these relate to environmental factors. Although many ecologists undergo some training in statistics at undergraduate and postgraduate levels, this does not necessarily focus on the interpretation of probabilities. More accurate elicitation can be obtained by training experts prior to elicitation and, if necessary, tailoring elicitation to address the expert's strengths and weaknesses. Here we address the first step: diagnosing conceptual understanding of probabilities. We refer to the psychological literature, which identifies several common biases or fallacies that arise during elicitation. These form the basis for developing a diagnostic questionnaire as a tool for supporting accurate elicitation, particularly when several experts or elicitors are involved. We report on a qualitative assessment of results from a pilot of this questionnaire. These results raise several implications for training experts, not only prior to elicitation, but more strategically by targeting them while still undergraduate or postgraduate students.
Abstract:
Objectives Our overarching objective is to demonstrate the political contradictions in how persuasive texts should be taught in the middle years of schooling, analysing two contradictory Australia-wide educational reforms. We consider the complexities of power and access to literacy for students in relation to these reforms about the privileged genre of persuasion. Our work is framed by our appreciation of literacy as a social justice issue, and the notion of students' pedagogic rights (Bernstein, 2000). Specifically, we introduce and analyse the knowledge and skills about persuasive text sanctioned by the Australian high-stakes test, the National Assessment Program for Literacy and Numeracy (NAPLAN), for students in the middle years of schooling (ACARA, 2013). We compare this to the contemporary emphasis on multimodal persuasive texts sanctioned by the recently released Australian Curriculum English (ACARA, 2014). We conclude our analysis by identifying biases in the structure of particular knowledges and the inherent threats to democracy.
Abstract:
In this paper, we assess whether quality survives the test of time in academia by comparing up to 80 years of academic journal article citations from two top journals, Econometrica and the American Economic Review. The research setting under analysis is analogous to a controlled real-world experiment in that it involves a homogeneous task (trying to publish in top journals) by individuals with a homogeneous job profile (academics) in a specific research environment (economics and econometrics). Comparing articles published concurrently in the same outlet at the same time (same issue) indicates that symbolic capital or power due to institutional affiliation or connection does seem to boost citation success at the beginning, giving those educated at or affiliated with leading universities an initial comparative advantage. Such advantage, however, does not hold in the long run: at a later stage, the publications of other researchers become as or even more successful.