847 results for Combined Web crippling and Flange Crushing
Abstract:
Background and purpose: Acetabular impaction grafting has been shown to have excellent results, but concerns regarding its suitability for larger defects have been highlighted. We report the use of this technique in a large cohort of patients with the aim of better understanding the limitations of the technique. Methods: We investigated a consecutive group of 339 cases of impaction grafting of the cup with morcellised impacted allograft bone for survivorship and mechanisms of early failure. Results: Kaplan-Meier survival was 89.1% (95% CI 83.2 to 95.0%) at 5.8 years for revision for any reason, and 91.6% (95% CI 85.9 to 97.3%) for revision for aseptic loosening of the cup. Of the 15 cases revised for aseptic cup loosening, nine were large rim mesh reconstructions, two were fractured Kerboull-Postel plates, two were migrating cages, one was a medial wall mesh failure, and one was a failure of impaction grafting alone. Interpretation: In our series, results were disappointing where a large rim mesh or significant reconstruction was required. In light of these results, our technique has changed: we now use predominantly larger chips of purely cancellous bone, 8–10 mm³ in size, to fill the cavity, and larger diameter cups to better fill the mouth of the reconstructed acetabulum. In addition, we now make greater use of (i) implants with a highly porous in-growth surface to constrain allograft chips and (ii) bulk allografts combined with cages and morcellised chips in cases with very large segmental and cavitary defects.
Abstract:
Re-evaluation of pedagogical practice is driving learning design at Queensland University of Technology. One objective is to support approaches to increase student engagement and attendance in physical and virtual learning spaces through opportunities for active and problem-based learning. This paper provides an overview and preliminary evaluation of the pilot of one of these initiatives, the Open Web Lecture (OWL), a new web-based student response application that seamlessly integrates a virtual learning environment within a physical learning space.
Abstract:
Purpose – Interactive information retrieval (IR) involves many human cognitive shifts at different information behaviour levels. Cognitive science defines a cognitive shift or shift in cognitive focus as triggered by the brain's response and change due to some external force. This paper aims to provide an explication of the concept of “cognitive shift” and then report results from a study replicating Spink's study of cognitive shifts during interactive IR. This work aims to generate promising insights into aspects of cognitive shifts during interactive IR and a new IR evaluation measure – information problem shift. Design/methodology/approach – The study participants (n=9) conducted an online search on an in-depth personal medical information problem. Data analysed included the pre- and post-search questionnaires completed by each study participant. Implications for web services and further research are discussed. Findings – Key findings replicated the results in Spink's study, including: all study participants reported some level of cognitive shift in their information problem, information seeking and personal knowledge due to their search interaction; and different study participants reported different levels of cognitive shift. Some study participants reported major cognitive shifts in various user-based variables such as information problem or information-seeking stage. Unlike Spink's study, no participant experienced a negative shift in their information problem stage or level of information problem understanding. Originality/value – This study builds on the previous study by Spink using a different dataset. The paper provides valuable insights for further research into cognitive shifts during interactive IR.
Abstract:
The report card for the introductory programming unit at our university has historically been unremarkable in terms of attendance rates, student success rates and student retention in both the unit and the degree course. After a recent course restructure involving a fresh approach to introducing programming, we reported high retention in the unit, with consistently high attendance and a very low failure rate. Following those encouraging results, we collected student attendance data for several semesters and compared attendance rates to student results. We have found that interesting workshop material that relates directly to course-relevant assessment items, and therefore drives the learning, delivered in an engaging collaborative learning environment, has improved attendance to an extraordinary extent, with student failure rates plummeting to the lowest in recorded history at our university.
Abstract:
In this paper, three metaheuristics are proposed for solving a class of job shop, open shop, and mixed shop scheduling problems. We evaluate the performance of the proposed algorithms by means of a set of Lawrence's benchmark instances for the job shop problem, a set of randomly generated instances for the open shop problem, and combined job shop and open shop test data for the mixed shop problem. The computational results show that the proposed algorithms perform extremely well on all three types of shop scheduling problems. The results also reveal that the mixed shop problem is easier to solve than the job shop problem, because the inclusion of more open shop jobs makes the scheduling procedure more flexible.
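For readers unfamiliar with how candidate schedules are scored inside such metaheuristics, the sketch below (an illustration under assumed job data, not the paper's algorithms) decodes an operation-based job shop encoding into a schedule and computes its makespan, the objective a metaheuristic would typically minimise.

```python
# A minimal sketch: decode an operation-based job shop encoding into a schedule
# and compute its makespan. The job data and candidate sequence are illustrative.

def makespan(jobs, op_sequence):
    """jobs[j] is a list of (machine, duration) operations in fixed order;
    op_sequence lists job indices, each appearing once per operation of that job."""
    next_op = [0] * len(jobs)          # next unscheduled operation per job
    job_ready = [0] * len(jobs)        # completion time of a job's last operation
    machine_ready = {}                 # completion time of each machine's last operation
    for j in op_sequence:
        machine, duration = jobs[j][next_op[j]]
        start = max(job_ready[j], machine_ready.get(machine, 0))
        finish = start + duration
        job_ready[j] = finish
        machine_ready[machine] = finish
        next_op[j] += 1
    return max(job_ready)

# Two jobs on two machines, given as (machine index, processing time):
jobs = [[(0, 3), (1, 2)], [(1, 2), (0, 4)]]
print(makespan(jobs, [0, 1, 0, 1]))    # evaluates one candidate schedule -> 7
```

A metaheuristic would repeatedly perturb the operation sequence and keep changes that reduce this makespan value.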
Abstract:
This thesis provides a query model suitable for context sensitive access to a wide range of distributed linked datasets which are available to scientists using the Internet. The model is designed based on scientific research standards which require scientists to provide replicable methods in their publications. Although there are query models available that provide limited replicability, they do not contextualise the process whereby different scientists select dataset locations based on their trust and physical location. In different contexts, scientists need to perform different data cleaning actions, independent of the overall query, and the model was designed to accommodate this function. The query model was implemented as a prototype web application and its features were verified through its use as the engine behind a major scientific data access site, Bio2RDF.org. The prototype showed that it was possible to have context sensitive behaviour for each of the three mirrors of Bio2RDF.org using a single set of configuration settings. The prototype provided executable query provenance that could be attached to scientific publications to fulfil replicability requirements. The model was designed to make it simple to independently interpret and execute the query provenance documents using context specific profiles, without modifying the original provenance documents. Experiments using the prototype as the data access tool in workflow management systems confirmed that the design of the model made it possible to replicate results in different contexts with minimal additions, and no deletions, to query provenance documents.
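The following sketch illustrates, in general terms, the idea of interpreting a single query provenance record through context-specific profiles without editing the record itself; the provenance record, mirror names, and endpoint URLs are hypothetical and are not taken from the Bio2RDF prototype.

```python
# A hedged sketch, assuming a provenance record that names a dataset and a query,
# and per-context profiles that map dataset names to endpoints. All values are
# hypothetical placeholders for illustration only.

provenance = {
    "query": "SELECT ?p ?o WHERE { <gene:example> ?p ?o }",
    "dataset": "ncbi-gene",
}

profiles = {
    "mirror-a": {"ncbi-gene": "http://mirror-a.example.org/sparql"},
    "mirror-b": {"ncbi-gene": "http://mirror-b.example.org/sparql"},
    "mirror-c": {"ncbi-gene": "http://mirror-c.example.org/sparql"},
}

def resolve(provenance_record, profile_name):
    """Choose an endpoint for the recorded dataset; the record is left unmodified."""
    endpoint = profiles[profile_name][provenance_record["dataset"]]
    return endpoint, provenance_record["query"]

print(resolve(provenance, "mirror-b"))
```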
Abstract:
Taxonomic uncertainty is prevalent in the field of locative media, which has been variously referred to as “the geomobile web” (Crawford and Goggin, 2009), “the geoweb” (Lake et al., 2004), “Where 2.0” (O’Reilly, 2008:1), and “DigiPlace” (Zook and Graham, 2007). However, it is not only the rapid development of the technology, or the various academic disciplinary approaches to it, that have resulted in this uncertainty but also the deeply ideological debates and concerns about what locative media should and should not be. The intention of this article is to provide an overview of existing literature and research in this field in order to develop a synthetic overview of the various types of locative media, and the geographies arising from them. Not only will such a taxonomy clarify communication about locative media, it will also identify for developers, users, policy-makers and scholars the specific contours and affordances of the different types of locative media, as well as the issues associated with them.
Abstract:
It is a big challenge to clearly identify the boundary between positive and negative streams. Several attempts have used negative feedback to address this challenge; however, there are two issues in using negative relevance feedback to improve the effectiveness of information filtering. The first is how to select constructive negative samples in order to reduce the space of negative documents. The second is how to decide which noisy extracted features should be updated based on the selected negative samples. This paper proposes a pattern-mining-based approach to select some offenders from the negative documents, where an offender can be used to reduce the side effects of noisy features. It also classifies extracted features (i.e., terms) into three categories: positive specific terms, general terms, and negative specific terms. In this way, multiple revising strategies can be used to update extracted features. An iterative learning algorithm is also proposed to implement this approach, and substantial experiments on RCV1 show that the proposed approach achieves encouraging performance.
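A minimal sketch of the term-categorisation idea follows, under the assumption that positive specific, general, and negative specific terms can be distinguished by whether they occur only in positive documents, in both sets, or only in the selected negative samples (offenders); the document sets and the bag-of-words tokenisation are illustrative, not the paper's procedure.

```python
# A hedged sketch: split extracted terms into three categories according to where
# they occur. The tiny document collections below are illustrative.

def classify_terms(positive_docs, offender_docs):
    pos_terms = {t for d in positive_docs for t in d.split()}
    neg_terms = {t for d in offender_docs for t in d.split()}
    return {
        "positive_specific": pos_terms - neg_terms,   # revise upwards
        "general": pos_terms & neg_terms,             # revise cautiously
        "negative_specific": neg_terms - pos_terms,   # revise downwards
    }

positive_docs = ["stream mining pattern discovery", "pattern based filtering"]
offender_docs = ["noisy stream advertisement", "advertisement filtering spam"]
for category, terms in classify_terms(positive_docs, offender_docs).items():
    print(category, sorted(terms))
```

Each category could then be weighted with a different revising strategy, which is the spirit of the multi-strategy update the abstract describes.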
Abstract:
Young drivers are overrepresented in motor vehicle crash rates, and their risk increases when carrying similar aged passengers. Graduated Driver Licensing strategies have demonstrated effectiveness in reducing fatalities among young drivers, however complementary approaches may further reduce crash rates. Previous studies conducted by the researchers have shown that there is considerable potential for a passenger focus in youth road safety interventions, particularly involving the encouragement of young passengers to intervene in their peers’ risky driving (Buckley, Chapman, Sheehan & Davidson, 2012). Additionally, this research has shown that technology-based applications may be a promising means of delivering passenger safety messages, particularly as young people are increasingly accessing web-based and mobile technologies. This research describes the participatory design process undertaken to develop a web-based road safety program, and involves feasibility testing of storyboards for a youth passenger safety application. Storyboards and framework web-based materials were initially developed for a passenger safety program, using the results of previous studies involving online and school-based surveys with young people. Focus groups were then conducted with 8 school staff and 30 senior school students at one public high school in the Australian Capital Territory. Young people were asked about the situations in which passengers may feel unsafe and potential strategies for intervening in their peers’ risky driving. Students were also shown the storyboards and framework web-based material and were asked to comment on design and content issues. Teachers were also shown the material and asked about their perceptions of program design and feasibility. The focus group data will be used as part of the participatory design process, in further developing the passenger safety program. This research describes an evidence-based approach to the development of a web-based application for youth passenger safety. The findings of this research and resulting technology will have important implications for the road safety education of senior high school students.
Abstract:
Background Cancer outlier profile analysis (COPA) has proven to be an effective approach to analyzing cancer expression data, leading to the discovery of the TMPRSS2–ETS family gene fusion events in prostate cancer. However, the original COPA algorithm did not identify down-regulated outliers, and the currently available R package implementing the method is similarly restricted to the analysis of over-expressed outliers. Here we present a modified outlier detection method, mCOPA, which contains refinements to the outlier-detection algorithm, identifies both over- and under-expressed outliers, is freely available, and can be applied to any expression dataset. Results We compare our method to other feature-selection approaches, and demonstrate that mCOPA frequently selects more-informative features than do differential expression or variance-based feature selection approaches, and is able to recover observed clinical subtypes more consistently. We demonstrate the application of mCOPA to prostate cancer expression data, and explore the use of outliers in clustering, pathway analysis, and the identification of tumour suppressors. We analyse the under-expressed outliers to identify known and novel prostate cancer tumour suppressor genes, validating these against data in Oncomine and the Cancer Gene Index. We also demonstrate how a combination of outlier analysis and pathway analysis can identify molecular mechanisms disrupted in individual tumours. Conclusions We demonstrate that mCOPA offers advantages, compared to differential expression or variance, in selecting outlier features, and that the features so selected are better able to assign samples to clinically annotated subtypes. Further, we show that the biology explored by outlier analysis differs from that uncovered in differential expression or variance analysis. mCOPA is an important new tool for the exploration of cancer datasets and the discovery of new cancer subtypes, and can be combined with pathway and functional analysis approaches to discover mechanisms underpinning heterogeneity in cancers.
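As an illustration of the underlying idea (this is not the authors' mCOPA code), the sketch below applies a COPA-style transformation, median-centring and MAD-scaling each gene, and reports upper- and lower-tail percentiles so that both over- and under-expressed outliers can be scored; the data and the 95th-percentile cut-off are assumptions.

```python
# A hedged sketch of a COPA-style outlier score on a genes x samples matrix.
import numpy as np

def copa_scores(expression, quantile=95):
    """Return (upper, lower) tail scores per gene of the transformed expression."""
    centred = expression - np.median(expression, axis=1, keepdims=True)
    mad = np.median(np.abs(centred), axis=1, keepdims=True)
    transformed = centred / np.where(mad == 0, 1.0, mad)       # robust scaling
    upper = np.percentile(transformed, quantile, axis=1)       # over-expressed outliers
    lower = np.percentile(transformed, 100 - quantile, axis=1) # under-expressed outliers
    return upper, lower

rng = np.random.default_rng(0)
expr = rng.normal(size=(5, 40))
expr[2, :3] += 6            # plant an over-expressed outlier subgroup in gene 2
print(copa_scores(expr))    # gene 2 stands out in the upper-tail scores
```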
Abstract:
The Chemistry Discipline Network was funded in mid-2011, with the aim of improving communication between chemistry academics in Australia. In our first year of operation, we have grown to over 100 members, established a web presence, and produced substantial mapping reports on chemistry teaching in Australia. We are now working on the definition of standards for a chemistry degree based on the Threshold Learning Outcomes published by the Learning and Teaching Academic Standards Project.
Abstract:
Traditional approaches to teaching criminal law in Australian law schools include lectures that focus on the transmission of abstracted and decontextualised knowledge, with content often prioritised at the expense of depth. This paper discusses The Sapphire Vortex, a blended learning environment that combines a suite of on-line modules using Second Life machinima to depict a narrative involving a series of criminal offences and the ensuing courtroom proceedings, expert commentary by practising lawyers and class discussions.
Abstract:
RatSLAM is a navigation system based on the neural processes underlying navigation in the rodent brain, capable of operating with low-resolution monocular image data. Seminal experiments using RatSLAM include mapping an entire suburb with a web camera and a long-term robot delivery trial. This paper describes OpenRatSLAM, an open-source version of RatSLAM with bindings to the Robot Operating System framework to leverage advantages such as robot and sensor abstraction, networking, data playback, and visualization. OpenRatSLAM comprises connected ROS nodes to represent RatSLAM’s pose cells, experience map, and local view cells, as well as a fourth node that provides visual odometry estimates. The nodes are described with reference to the RatSLAM model and salient details of the ROS implementation such as topics, messages, parameters, class diagrams, sequence diagrams, and parameter tuning strategies. The performance of the system is demonstrated on three publicly available open-source datasets.
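As a rough illustration of how a node in this style is wired up with rospy (this is not OpenRatSLAM's code, and the topic names and placeholder estimate are assumptions), the sketch below subscribes to a camera image topic and publishes odometry messages, the role played by OpenRatSLAM's visual odometry node.

```python
# A hedged sketch of a rospy node: subscribe to images, publish odometry.
# Topic names ("camera/image", "odom") are illustrative assumptions.
import rospy
from sensor_msgs.msg import Image
from nav_msgs.msg import Odometry

def on_image(msg, publisher):
    # A real visual odometry node would compare this frame with the previous one;
    # here we simply publish an odometry message stamped with the image time.
    odom = Odometry()
    odom.header.stamp = msg.header.stamp
    publisher.publish(odom)

def main():
    rospy.init_node("visual_odometry_sketch")
    pub = rospy.Publisher("odom", Odometry, queue_size=10)
    rospy.Subscriber("camera/image", Image, on_image, callback_args=pub)
    rospy.spin()

if __name__ == "__main__":
    main()
```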
Abstract:
As e-commerce becomes more and more popular, the number of customer reviews that a product receives grows rapidly. In order to enhance customer satisfaction and shopping experiences, it has become important to analyse customer reviews to extract the opinions they express about the products customers buy. Thus, Opinion Mining is becoming more important than before, especially for analysing and forecasting customers’ behaviour for business purposes. Making the right decisions about new products or services, based on data about customers’ characteristics, means profit for an organisation or company. This paper proposes a new architecture for Opinion Mining, which uses a multidimensional model to integrate customers’ characteristics and their comments about products (or services). The key step to achieve this objective is to transfer comments (opinions) into a fact table that includes several dimensions, such as customers, products, time and locations. This research presents a comprehensive way to calculate customers’ orientation towards all possible product attributes.
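A hedged sketch of the kind of star schema the abstract describes follows: an opinion fact table keyed to customer and product dimensions, with time, location, and a per-attribute sentiment score as the measure. The table and column names are illustrative, not the paper's.

```python
# A minimal star-schema sketch using the standard-library sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, name TEXT, age_group TEXT);
CREATE TABLE dim_product  (product_id  INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE fact_opinion (
    customer_id INTEGER, product_id INTEGER,
    review_date TEXT, location TEXT,
    attribute TEXT, sentiment REAL   -- measure: orientation for one product attribute
);
""")
conn.execute("INSERT INTO dim_customer VALUES (1, 'Alice', '25-34')")
conn.execute("INSERT INTO dim_product VALUES (10, 'Phone X', 'electronics')")
conn.execute("INSERT INTO fact_opinion VALUES (1, 10, '2013-05-01', 'Brisbane', 'battery', -0.6)")

# Aggregate orientation per product attribute across all customers:
for row in conn.execute("""
    SELECT p.name, f.attribute, AVG(f.sentiment)
    FROM fact_opinion f JOIN dim_product p ON p.product_id = f.product_id
    GROUP BY p.name, f.attribute
"""):
    print(row)
```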
Abstract:
In order to comprehend user information needs through concepts, this paper introduces a novel method to match relevance features with ontological concepts. The method first discovers relevance features from a user's local instances. Then, a concept matching approach is developed to match these features to accurate concepts in a global knowledge base. This approach is significant for the transition from informative descriptors to conceptual descriptors. The proposed method is thoroughly evaluated by comparison against three information gathering baseline models. The experimental results show that the matching approach is successful and achieves a series of remarkable improvements in search effectiveness.
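The sketch below shows one simple way such a feature-to-concept matching step could look (it is not the paper's method): each discovered relevance feature is assigned to the concept in a small, illustrative knowledge base whose vocabulary overlaps most with the feature's terms.

```python
# A hedged sketch of overlap-based matching of relevance features to concepts.
# The tiny "global knowledge base" and feature terms are illustrative.

ontology = {
    "machine learning": ["supervised", "learning", "model"],
    "information retrieval": ["query", "document", "ranking", "retrieval"],
    "databases": ["sql", "table", "index"],
}

def match_concept(feature_terms):
    """Return the concept whose vocabulary overlaps most with the feature's terms."""
    def overlap(concept):
        return len(set(feature_terms) & set(ontology[concept]))
    best = max(ontology, key=overlap)
    return best if overlap(best) > 0 else None

print(match_concept(["query", "ranking"]))       # -> information retrieval
print(match_concept(["supervised", "model"]))    # -> machine learning
```

A realistic system would use richer similarity measures and a full knowledge base, but the mapping from term-level features to concept labels is the transition the abstract refers to.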