990 results for Statistics Support


Relevance:

80.00%

Publisher:

Abstract:

With rapid and continuing growth of learning support initiatives in mathematics and statistics found in many parts of the world, and with the likelihood that this trend will continue, there is a need to ensure that robust and coherent measures are in place to evaluate the effectiveness of these initiatives. The nature of learning support brings challenges for measurement and analysis of its effects. After briefly reviewing the purpose, rationale for, and extent of current provision, this article provides a framework for those working in learning support to think about how their efforts can be evaluated. It provides references and specific examples of how workers in this field are collecting, analysing and reporting their findings. The framework is used to structure evaluation in terms of usage of facilities, resources and services provided, and also in terms of improvements in performance of the students and staff who engage with them. Very recent developments have started to address the effects of learning support on the development of deeper approaches to learning, the affective domain and the development of communities of practice of both learners and teachers. This article intends to be a stimulus to those who work in mathematics and statistics support to gather even richer, more valuable, forms of data. It provides a 'toolkit' for those interested in evaluation of learning support and closes by referring to an on-line resource being developed to archive the growing body of evidence. © 2011 Taylor & Francis.
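
As a rough illustration of the two evaluation strands the framework distinguishes (usage of the support provision, and the performance of students who engage with it), the following Python sketch works through hypothetical visit-log and marks data. All records, identifiers and field names are invented for illustration and are not drawn from the article.

```python
# Illustrative sketch only: the two evaluation strands described in the framework --
# usage of the support centre, and performance of students who engaged with it.
# All records and field names here are hypothetical.
from statistics import mean

# Hypothetical visit log and end-of-semester marks.
visits = [
    {"student": "s01", "topic": "hypothesis testing"},
    {"student": "s01", "topic": "regression"},
    {"student": "s02", "topic": "probability"},
]
marks = {"s01": 71, "s02": 64, "s03": 55, "s04": 58}

# Strand 1: usage -- how many visits, and by how many distinct students?
users = {v["student"] for v in visits}
print(f"{len(visits)} visits by {len(users)} distinct students")

# Strand 2: performance -- compare mean marks of users and non-users.
# (A real evaluation would need richer data and careful handling of self-selection.)
user_marks = [m for s, m in marks.items() if s in users]
non_user_marks = [m for s, m in marks.items() if s not in users]
print(f"mean mark, users: {mean(user_marks):.1f}; non-users: {mean(non_user_marks):.1f}")
```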

Relevance:

60.00%

Publisher:

Abstract:

The support vector machine (SVM) has played an important role in bringing certain themes to the fore in computationally oriented statistics. However, it is important to place the SVM in context as but one member of a class of closely related algorithms for nonlinear classification. As we discuss, several of the “open problems” identified by the authors have in fact been the subject of a significant literature, a literature that may have been missed because it has been aimed not only at the SVM but at a broader family of algorithms. Keeping the broader class of algorithms in mind also helps to make clear that the SVM involves certain specific algorithmic choices, some of which have favorable consequences and others of which have unfavorable consequences—both in theory and in practice. The broader context helps to clarify the ties of the SVM to the surrounding statistical literature.
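
To make the "broader family" point concrete, the following sketch (a minimal NumPy illustration, not taken from the discussed paper) fits two regularized linear classifiers that differ only in their loss function: the hinge loss, which gives a linear SVM, and the logistic loss, which gives regularized logistic regression. The toy data, step sizes and regularization constant are arbitrary choices for illustration.

```python
# Illustrative sketch only: the SVM as one member of a family of regularized
# classifiers that differ mainly in their loss function. Swapping the hinge loss
# for the logistic loss yields regularized logistic regression.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(+1, 1, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])

def hinge_grad(margin, yi, xi):
    # Subgradient of max(0, 1 - margin) with respect to w.
    return -yi * xi if margin < 1 else np.zeros_like(xi)

def logistic_grad(margin, yi, xi):
    # Gradient of log(1 + exp(-margin)) with respect to w.
    return -yi * xi / (1.0 + np.exp(margin))

def fit(loss_grad, lam=0.01, epochs=200, lr=0.01):
    # Stochastic (sub)gradient descent on regularized empirical risk.
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            margin = y[i] * (w @ X[i])
            w -= lr * (loss_grad(margin, y[i], X[i]) + lam * w)
    return w

for name, g in [("hinge (SVM)", hinge_grad), ("logistic", logistic_grad)]:
    w = fit(g)
    acc = np.mean(np.sign(X @ w) == y)
    print(f"{name:12s} training accuracy: {acc:.2f}")
```

Keeping the loss function explicit in this way makes it clear that the SVM's hinge loss is one specific algorithmic choice within a wider class of closely related procedures.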

Relevance:

60.00%

Publisher:

Abstract:

Purpose: To evaluate the influence of three different adhesives, each used as an intermediary layer, on microleakage of sealants applied under conditions of salivary contamination. Materials and Methods: Six experimental conditions were compared, three with adhesives and three without. After prophylaxis and acid etching of enamel, the surface was contaminated with saliva for 10 s. In Group SC, the sealant was applied over the saliva without a bonding agent and then light-cured. In Group SCA, after the saliva, the surface was air dried, and then the sealant was applied and cured. In Groups ScB, SB and PB, a bonding agent (Scotchbond Dual Cure/3M, Single Bond/3M and Prime & Bond 2.1/Dentsply, respectively) was applied after the saliva and prior to sealant application and curing. After storage in distilled water at 37°C for 24 h, the teeth were subjected to 500 thermal cycles (5°C and 55°C), and silver nitrate was used as a leakage tracer. Leakage data were collected on cross sections as a percentage of the total enamel-sealant interface length. Representative samples were evaluated under SEM. Results: Sealants placed on contaminated enamel with no bonding agent showed extensive microleakage (94.27% in SC; 42.65% in SCA). The SEM revealed gaps as wide as 20 μm in areas where silver nitrate leakage could be visualized. In contrast, all bonding agent groups showed leakage of less than 6.9%. Placement of sealant with a dentin-bonding agent on contaminated enamel significantly reduced microleakage (P < 0.0001). The use of a bonding agent as an intermediary layer between enamel and sealant significantly reduced the effect of saliva on sealant microleakage.

Relevance:

60.00%

Publisher:

Abstract:

On December 17, the Community standard on marine fuels came into force. Bunker prices are expected to increase; recent statistics support this argument, and the difference between high sulphur (HS) and low sulphur (LS) marine bunkers is expected to persist. Considering also the price difference between the basis market of Rotterdam and the other European ports, bunker prices are expected to be higher in the Mediterranean. This paper begins with a review of the current situation in ECAs (Emission Control Areas), highlighting the rules to be implemented shortly. The aim of the paper is to describe the current bunkering situation and to estimate its short-term evolution in Spain in relation to the world fleet.

Relevance:

30.00%

Publisher:

Abstract:

Queensland University of Technology (QUT) is a large multidisciplinary university located in Brisbane, Queensland, Australia. QUT is increasing its research focus and is developing its research support services. It has adopted a model of collaboration between the Library, High Performance Computing and Research Support (HPC) and, more broadly, with Information Technology Services (ITS). Research support services provided by the Library include the provision of information resources and discovery services, bibliographic management software, assistance with publishing (publishing strategies, identifying high impact journals, dealing with publishers and the peer review process), citation analysis and calculating authors' H Index. Research data management services are being developed by the Library and HPC working in collaboration. The HPC group within ITS supports research computing infrastructure, research development and engagement activities, researcher consultation, high speed computation and data storage systems, 2D/3D (immersive) visualisation tools, parallelisation and optimisation of research codes, statistics/data modelling training and support (both qualitative and quantitative) and support for the university's central Access Grid collaboration facility. Development and engagement activities include participation in research grants and papers, student supervision and internships, and the sponsorship, incubation and adoption of new computing technologies for research. ITS also provides other services that support research, including ICT training, research infrastructure (networking, data storage, federated access and authorization, virtualization) and corporate systems for research administration. Seminars and workshops are offered to increase awareness and uptake of new and existing services. A series of online surveys on eResearch practices and skills and a number of focus groups were conducted to better inform the development of research support services. Progress towards the provision of research support is described within the context of organisational frameworks; resourcing; infrastructure; integration; collaboration; change management; engagement; awareness and skills; new services; and leadership. Challenges to be addressed include the need to redeploy existing operational resources toward new research support services, supporting a rapidly growing research profile across the university, the growing need for the use and support of IT in research programs, finding capacity to address the diverse research support needs across the disciplines, operationalising new research support services following their implementation in project mode, embedding new specialist staff roles, cross-skilling Liaison Librarians, and ensuring continued collaboration between stakeholders.

Relevance:

30.00%

Publisher:

Abstract:

In a seminal data mining article, Leo Breiman [1] argued that to develop effective predictive classification and regression models, we need to move away from the sole dependency on statistical algorithms and embrace a wider toolkit of modeling algorithms that include data mining procedures. Nevertheless, many researchers still rely solely on statistical procedures when undertaking data modeling tasks; the sole reliance on these procedures has led to the development of irrelevant theory and questionable research conclusions ([1], p.199). We will outline initiatives that the HPC & Research Support group is undertaking to engage researchers with data mining tools and techniques, including a new range of seminars, workshops, and one-on-one consultations covering data mining algorithms, the relationship between data mining and the research cycle, and the limitations and problems associated with these new algorithms. Organisational limitations and restrictions to these initiatives are also discussed.
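
As a hedged illustration of Breiman's argument (not part of the original abstract), the sketch below compares a classical linear regression with an algorithmic tree-ensemble model on the same synthetic prediction task using scikit-learn; the data-generating process and model settings are arbitrary toy choices.

```python
# Illustrative sketch only: comparing a classical statistical model with an
# algorithmic (data mining) model on the same prediction task, in the spirit of
# Breiman's "two cultures" argument. Data and settings are toy choices.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, (300, 3))
# A nonlinear response that a linear model can only partially capture.
y = np.sin(X[:, 0]) * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.1, 300)

for name, model in [("linear regression", LinearRegression()),
                    ("random forest", RandomForestRegressor(n_estimators=200, random_state=0))]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name:18s} mean cross-validated R^2: {r2:.2f}")
```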

Relevance:

30.00%

Publisher:

Abstract:

Statistics presented in Australia Council reports such as Don't Give Up Your Day Job (2003) and Artswork: A Report On Australians Working in the Arts 1 and 2 (1997, 2005), and in other studies on destinations for Performing Arts graduates, demonstrate the diversity of post-graduation pathways for our students, the prevalence of protean careers, and the challenges in developing a sense of professional identity in a context where a portfolio of work across performance making, producing, administration and teaching can make it difficult for young artists to establish career status and capital in conventional terms (cf. Dawn Bennett, "Academy and the Real World: Developing Realistic Notions of Career in the Performing Arts", Arts & Humanities in Higher Education, 8.3, 2009). In this panel, academics from around Australia will consider the ways in which Drama, Theatre and Performance Studies as a discipline is deploying a variety of practical, professional and work-integrated teaching and learning activities – including performance-making projects, industry projects, industry placements and student-initiated projects – to connect students with the networks, industries and professional pathways that will support their progression into their careers. The panellists include Bree Hadley (Queensland University of Technology), Meredith Rogers (La Trobe University), Janys Hayes (Wollongong University) and Teresa Izzard (Curtin University). The panellists will present insights into the activities they have found successful, and address a range of questions, including: How do we introduce students to performance-making and/or producing models they will be able to employ in their future practice, particularly in light of the increasingly limited funds, time and resources available to support students' participation in full-scale productions under the stewardship of professional artists? How and when do we introduce students to industry networks? How do we cater for graduates who will work as performers, writers, directors or administrators in the non-subsidised sector, the subsidised sector, community arts and education? How do we cater for graduates who will go on to pursue their work in a practice-as-research context in a Higher Degree? How do we assist graduates in developing a professional identity? How do we assist graduates in developing physical, professional and personal resilience? How do we retain our connections with graduates as part of their life-long learning? Do practices and processes need to differ for city or regionally based / theoretically or practically based degree programs? How do our teaching and learning activities align with emergent policy and industrial frameworks such as the shift to the "Producer Model" in Performing Arts funding, or the new mentorship, project, production and enterprise development opportunities under the Australia Council for the Arts' new Opportunities for Young and Emerging Artists policy framework?

Relevance:

30.00%

Publisher:

Abstract:

Expert elicitation is the process of determining what expert knowledge is relevant to support a quantitative analysis and then eliciting this information in a form that supports analysis or decision-making. The credibility of the overall analysis, therefore, relies on the credibility of the elicited knowledge. This, in turn, is determined by the rigor of the design and execution of the elicitation methodology, as well as by its clear communication to ensure transparency and repeatability. It is difficult to establish rigor when the elicitation methods are not documented, as often occurs in ecological research. In this chapter, we describe software that can be combined with a well-structured elicitation process to improve the rigor of expert elicitation and the documentation of its results.

Relevance:

30.00%

Publisher:

Abstract:

Queensland University of Technology (QUT) was one of the first universities in Australia to establish an institutional repository. Launched in November 2003, the repository (QUT ePrints, http://eprints.qut.edu.au) uses the EPrints open source repository software (from Southampton) and has enjoyed the benefit of an institutional deposit mandate since January 2004. Currently (April 2012), the repository holds over 36,000 records, including 17,909 open access publications, with another 2,434 publications embargoed but with mediated access enabled via the 'Request a copy' button, which is a feature of the EPrints software. At QUT, the repository is managed by the Library. The repository is embedded into a number of other systems at QUT, including the staff profile system and the University's research information system. It has also been integrated into a number of critical processes related to Government reporting and research assessment. Internally, senior research administrators often look to the repository for information to assist with decision-making and planning. While some statistics could be drawn from the advanced search feature and the existing download statistics feature, they were rarely at the level of granularity or aggregation required. Getting the information from the 'back end' of the repository was very time-consuming for the Library staff. In 2011, the Library funded a project to enhance the range of statistics available from the public interface of QUT ePrints. The repository team conducted a series of focus groups and individual interviews to identify and prioritise functionality requirements for a new statistics 'dashboard'. The participants included a mix of research administrators, early career researchers and senior researchers. The repository team identified a number of business criteria (e.g. extensible, support available, skills required) and gave each a weighting. After considering all the known options available, five software packages (IRStats, ePrintsStats, AWStats, BIRT and Google Urchin/Analytics) were thoroughly evaluated against a list of 69 criteria to determine which would be most suitable. The evaluation revealed that IRStats was the best fit for our requirements: it was deemed capable of meeting 21 of the 31 high priority criteria. Consequently, IRStats was implemented as the basis for QUT ePrints' new statistics dashboards, which were launched in Open Access Week, October 2011. Statistics dashboards are now available at four levels: whole-of-repository level, organisational unit level, individual author level and individual item level. The data available includes cumulative total deposits, time series deposits, deposits by item type, % fulltexts, % open access, cumulative downloads, time series downloads, downloads by item type, author ranking, paper ranking (by downloads), downloader geographic location, domains, internal vs external downloads, citation data (from Scopus and Web of Science), most popular search terms, and non-search referring websites. The data is displayed in chart, map and table formats. The new statistics dashboards are a great success. Feedback received from staff and students has been very positive. Individual researchers have said that they have found the information to be very useful when compiling a track record. It is now very easy for senior administrators (including the Deputy Vice-Chancellor (Research)) to compare full-text deposit rates (i.e. mandate compliance rates) across organisational units. This has led to increased 'encouragement' from Heads of School and Deans in relation to the provision of full-text versions.
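
The weighted-criteria comparison described above can be illustrated with a small sketch; the criteria, weights and scores below are hypothetical placeholders and do not reproduce the actual 69-criterion evaluation.

```python
# Illustrative sketch only: a weighted-criteria comparison of the kind described
# above. The criteria, weights and scores are hypothetical placeholders, not the
# values used in the QUT ePrints evaluation.
criteria = {                 # criterion: weight (hypothetical)
    "extensible": 3,
    "support available": 2,
    "skills required": 1,
}
candidates = {               # candidate: score per criterion, 0-5 (hypothetical)
    "IRStats": {"extensible": 4, "support available": 4, "skills required": 3},
    "AWStats": {"extensible": 2, "support available": 3, "skills required": 4},
    "BIRT":    {"extensible": 4, "support available": 2, "skills required": 2},
}

def weighted_total(scores):
    # Sum of criterion scores, each multiplied by its weight.
    return sum(criteria[c] * scores[c] for c in criteria)

for name, scores in sorted(candidates.items(), key=lambda kv: -weighted_total(kv[1])):
    print(f"{name:8s} weighted score: {weighted_total(scores)}")
```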

Relevance:

30.00%

Publisher:

Abstract:

Good management, supported by accurate, timely and reliable health information, is vital for increasing the effectiveness of Health Information Systems (HIS). When it comes to managing the under-resourced health systems of developing countries, information-based decision making is particularly important. This paper reports findings of a self-report survey that investigated perceptions of local health managers (HMs) of their own regional HIS in Sri Lanka. Data were collected through a validated, pre-tested postal questionnaire distributed among a selected group of HMs to elicit their perceptions of the current HIS in relation to information generation, acquisition and use, required reforms to the information system and application of information and communication technology (ICT). Results based on descriptive statistics indicated that the regional HIS was poorly organised and in need of reform; that management support for the system was unsatisfactory in terms of relevance, accuracy, timeliness and accessibility; that political pressure and community and donor requests took precedence over vital health information when management decisions were made; and that use of ICT was unsatisfactory. HIS strengths included user-friendly paper formats, a centralised planning system and an efficient disease notification system; weaknesses were lack of comprehensiveness, inaccuracy, and lack of a feedback system. Responses of participants indicated that the HIS would be improved by adopting an internationally accepted framework and introducing ICT applications. Perceived barriers to such improvements were the high initial costs of educating staff to improve computer literacy, of introducing ICTs, and of restructuring the HIS. We concluded that the regional HIS of Central Province, Sri Lanka had failed to provide much-needed information support to HMs. These findings are consistent with similar research in other developing countries and reinforce the need for further research to verify causes of poor performance and to design strategic reforms to improve HIS in regional Sri Lanka.

Relevance:

30.00%

Publisher:

Abstract:

Many software applications extend their functionality by dynamically loading executable components into their allocated address space. Such components, exemplified by browser plugins and other software add-ons, not only enable reusability, but also promote programming simplicity, as they reside in the same address space as their host application, supporting easy sharing of complex data structures and pointers. However, such components are also often of unknown provenance and quality and may be riddled with accidental bugs or, in some cases, deliberately malicious code. Statistics show that such component failures account for a high percentage of software crashes and vulnerabilities. Enabling isolation of such fine-grained components is therefore necessary to increase the stability, security and resilience of computer programs. This thesis addresses this issue by showing how host applications can create isolation domains for individual components, while preserving the benefits of a single address space, via a new architecture for software isolation called LibVM. Towards this end, we define a specification which outlines the functional requirements for LibVM, identify the conditions under which these functional requirements can be met, define an abstract Application Programming Interface (API) that encompasses the general problem of isolating shared libraries, thus separating policy from mechanism, and prove its practicality with two concrete implementations based on hardware virtualization and system call interposition, respectively. The results demonstrate that hardware isolation minimises the difficulties encountered with software-based approaches, while also reducing the size of the trusted computing base, thus increasing confidence in the solution's correctness. This thesis concludes not only that it is feasible to create such isolation domains for individual components, but also that this should be a fundamental, operating-system-supported abstraction, which would lead to more stable and secure applications.
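
The failure-isolation problem that motivates the thesis can be illustrated with a deliberately coarse sketch: running an untrusted "plugin" in a separate process contains its crash, whereas loading it in-process would bring the host down. This is only a stand-in for the idea of an isolation domain; it is not LibVM's mechanism, which preserves the single-address-space benefits described above. The plugin and its exit behaviour are invented for illustration.

```python
# Illustrative sketch only: containing a crashing component by running it in a
# separate process. This is a coarse stand-in for an isolation domain, not
# LibVM's actual mechanism (which keeps components in a single address space).
import multiprocessing as mp
import os

def buggy_plugin():
    # Stand-in for a fatal fault inside the component: terminate abruptly,
    # without any cleanup, as a crashing native library would.
    os._exit(139)

if __name__ == "__main__":
    p = mp.Process(target=buggy_plugin)
    p.start()
    p.join()
    # The host observes the failure and keeps running, instead of crashing with it.
    print(f"plugin terminated with exit code {p.exitcode}; host still running")
```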

Relevance:

30.00%

Publisher:

Abstract:

According to social constructivists, learners are active participants in constructing new knowledge in a social process where they interact with others. In these social settings, teachers or more knowledgeable peers provide support. This research study investigated the contribution that an online synchronous tutorial makes to support teaching and learning of undergraduate introductory statistics offered by an Australian regional university at a distance. The introductory statistics course which served as the research setting in this study was a requirement of a variety of programs at the University, including psychology, business and science. Often students in these programs perceive this course to be difficult and irrelevant to their programs of study. Negative attitudes and associated anxiety mean that students often struggle with the content. While asynchronous discussion forums have been shown to provide a level of interaction and support, it was anticipated that online synchronous tutorials would offer immediate feedback to move students forward through 'stuck places'. At the beginning of the semester the researcher offered distance students in this course the opportunity to participate in a weekly online synchronous tutorial which was an addition to the usual support offered by the teaching team. This tutorial was restricted to 12 volunteers to allow sufficient interaction to occur for each of the participants. The researcher, as participant-observer, conducted the weekly tutorials using the University's interactive online learning platform, Wimba Classroom, whereby participants interacted using audio, text chat and a virtual whiteboard. Prior to the start of semester, participants were surveyed about their previous mathematical experiences, their perceptions of the introductory statistics course and why they wanted to participate in the online tutorial. During the semester, they were regularly asked pertinent research questions related to their personal outcomes from the tutorial sessions. These sessions were recorded using screen capture software and the participants were interviewed about their experiences at the end of the semester. Analysis of these data indicated that the perceived value of the online synchronous tutorial lies in the interaction with fellow students and a content expert, and in the immediacy of the feedback given. The collaborative learning environment offered the support required to maintain motivation, enhance confidence and develop problem-solving skills in these distance students of introductory statistics. Based on these findings, a model of online synchronous learning is proposed.

Relevance:

30.00%

Publisher:

Abstract:

Computer vision research is increasingly interested in the rapid estimation of object detectors. The canonical strategy of using Hard Negative Mining to train a Support Vector Machine is slow, since the large negative set must be traversed at least once per detector. Recent work has demonstrated that, with an assumption of signal stationarity, Linear Discriminant Analysis is able to learn comparable detectors without ever revisiting the negative set. Even with this insight, the time to learn a detector can still be on the order of minutes. Correlation filters, on the other hand, can produce a detector in under a second. However, this involves the unnatural assumption that the statistics are periodic, and requires the negative set to be re-sampled per detector size. These two methods differ chiefly in the structure which they impose on the covariance matrix of all examples. This paper is a comparative study which develops techniques (i) to assume periodic statistics without needing to revisit the negative set and (ii) to accelerate the estimation of detectors with aperiodic statistics. It is experimentally verified that periodicity is detrimental.
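
The LDA shortcut mentioned above (learning detectors without revisiting the negative set) can be sketched in a few lines of NumPy: the background mean and covariance are estimated once from a pool of negative windows, and each new detector is then obtained in closed form from its positive mean alone. The feature dimension and synthetic data below are toy choices for illustration.

```python
# Illustrative sketch: closed-form LDA detectors from fixed background statistics.
# The negative set is summarised once by its mean and covariance; every new
# detector then needs only the positive mean, so the negatives are never revisited.
# Dimensions and data are synthetic toy choices.
import numpy as np

rng = np.random.default_rng(0)
d = 64                                              # feature dimension (e.g. a small template)

# One-off pass over background windows: estimate mu0 and Sigma once.
negatives = rng.normal(0.0, 1.0, (5000, d))
mu0 = negatives.mean(axis=0)
Sigma = np.cov(negatives, rowvar=False) + 1e-3 * np.eye(d)   # regularised covariance

# Per-detector step: only the positive examples of the new category are needed.
positives = rng.normal(0.5, 1.0, (40, d))
w = np.linalg.solve(Sigma, positives.mean(axis=0) - mu0)     # w = Sigma^{-1} (mu_pos - mu0)

# Windows are scored by their dot product with w (bias omitted for brevity).
print("mean score, positives:", float((positives @ w).mean()))
print("mean score, negatives:", float((negatives @ w).mean()))
```

In this view, the choice between the SVM, LDA and correlation-filter detectors comes down to the structure assumed for the covariance of the examples, which is precisely the comparison the paper develops.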