Abstract:
Information Retrieval (IR) is an important albeit imperfect component of information technologies. Insufficient diversity of retrieved documents is one of the primary problems studied in this research; this study shows that it decreases precision and recall, the traditional measures of IR effectiveness. The thesis presents an adaptive IR system based on the theory of adaptive dual control. The aim of the approach is to optimize retrieval precision after all feedback has been issued, which is achieved by increasing the diversity of retrieved documents; the study shows that the value of recall reflects this diversity. The Probability Ranking Principle is viewed in the literature as the “bedrock” of current probabilistic IR theory, yet neither the proposed approach nor other diversification methods from the literature conform to it. This study shows by counterexample that the Probability Ranking Principle does not in general lead to optimal precision in a search session with feedback, a setting for which it may not have been designed but in which it is actively used. To accomplish the aim, retrieval precision over the search session should be optimized with a multistage stochastic programming model; however, such models are computationally intractable. Approximate linear multistage stochastic programming models are therefore derived in this study, in which the multistage improvement of the probability distribution is modelled using the proposed feedback correctness method. The optimization models rest on several assumptions, starting with the assumption that Information Retrieval is conducted in units of topics, and the use of clusters is the primary reason why a new method of probability estimation is proposed. The resulting adaptive dual-control topic-based IR (ADTIR) system was evaluated in a series of experiments on the Reuters, Wikipedia and TREC document collections. The Wikipedia experiment revealed that the dual-control feedback mechanism improves precision and S-recall when all the underlying assumptions are satisfied. In the TREC experiment, this feedback mechanism was compared to a state-of-the-art adaptive IR system based on BM25 term weighting and the Rocchio relevance feedback algorithm; the baseline exhibited better effectiveness than the cluster-based optimization model of ADTIR, mainly because the clusters generated from the TREC collection were of insufficient quality and violated the underlying assumption.
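A minimal sketch of how the S-recall (subtopic recall) measure mentioned above is typically computed; the document-to-subtopic assignments here are hypothetical illustrations, not data from the thesis.

```python
# S-recall: fraction of a topic's subtopics covered by the top-k results.
# Rankings that repeat the same subtopic score lower, which is the
# diversity effect the abstract describes.

def s_recall(ranked_docs, subtopics_of, k, total_subtopics):
    """Fraction of all subtopics covered by the top-k retrieved documents."""
    covered = set()
    for doc in ranked_docs[:k]:
        covered.update(subtopics_of.get(doc, set()))
    return len(covered) / total_subtopics

# Hypothetical example: 4 subtopics; d1 and d2 both cover subtopic 1.
subtopics_of = {"d1": {1}, "d2": {1}, "d3": {2, 3}, "d4": {4}}
print(s_recall(["d1", "d2", "d3"], subtopics_of, 3, 4))  # 0.75 (redundant)
print(s_recall(["d1", "d3", "d4"], subtopics_of, 3, 4))  # 1.0  (diverse)
```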
Abstract:
Much research has investigated the differences between option implied volatilities and econometric model-based forecasts. Implied volatility is a market-determined forecast, in contrast to model-based forecasts that employ some degree of smoothing of past volatility. Implied volatility therefore has the potential to reflect information that a model-based forecast could not. This paper considers two issues relating to the informational content of the S&P 500 VIX implied volatility index: first, whether it subsumes information on how historical jump activity contributed to price volatility; and second, whether the VIX reflects any incremental information about future jump activity relative to model-based forecasts. It is found that the VIX index both subsumes information relating to past jump contributions to total volatility and reflects incremental information pertaining to future jump activity. These issues have not been examined previously, and their examination expands our understanding of how option markets form their volatility forecasts.
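Informational-content tests of this kind are commonly run as encompassing regressions. The sketch below is an illustrative specification with simulated data, not the paper's exact model; the variable names (vix, model_fc, jumps) and coefficients are assumptions.

```python
# Hedged sketch of an encompassing regression: regress future jump
# activity on the VIX and a model-based forecast. If the VIX coefficient
# is significant while the model forecast adds nothing, the VIX subsumes
# the model's jump information. Data are simulated placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
vix = rng.normal(20, 5, n)                  # implied-volatility index level
model_fc = 0.8 * vix + rng.normal(0, 2, n)  # econometric model forecast
jumps = 0.1 * vix + rng.normal(0, 1, n)     # future realized jump variation

X = sm.add_constant(np.column_stack([vix, model_fc]))
res = sm.OLS(jumps, X).fit()
print(res.summary())
```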
Abstract:
Developing research data management infrastructure and services, and making research data more discoverable and accessible to the research community, is a key priority at the national, state and individual university levels. This paper discusses and reflects upon a collaborative project between Griffith University and the Queensland University of Technology to commission a Metadata Hub, or metadata aggregation service, based upon open source software components. It describes the role that metadata aggregation services play in modern research infrastructure and argues that this role is a critical one.
Abstract:
Developmental progression and differentiation of distinct cell types depend on the regulation of gene expression in space and time. Tools that allow spatial and temporal control of gene expression are crucial for the accurate elucidation of gene function. Most systems to manipulate gene expression allow control of only one factor, space or time, and currently available systems that control both temporal and spatial expression of genes have their limitations. We have developed a versatile two-component system that overcomes these limitations, providing reliable, conditional gene activation in restricted tissues or cell types. This system allows conditional tissue-specific ectopic gene expression and provides a tool for conditional cell type- or tissue-specific complementation of mutants. The chimeric transcription factor XVE, in conjunction with Gateway recombination cloning technology, was used to generate a tractable system that can efficiently and faithfully activate target genes in a variety of cell types. Six promoters/enhancers, each with different tissue specificities (including vascular tissue, trichomes, root, and reproductive cell types), were used in activation constructs to generate different expression patterns of XVE. Conditional transactivation of reporter genes was achieved in a predictable, tissue-specific pattern of expression, following the insertion of the activator or the responder T-DNA in a wide variety of positions in the genome. Expression patterns were faithfully replicated in independent transgenic plant lines. Results demonstrate that we can also induce mutant phenotypes using conditional ectopic gene expression. One of these mutant phenotypes could not have been identified using noninducible ectopic gene expression approaches.
Abstract:
We report the long-term outcome of the flangeless, cemented, all-polyethylene Exeter cup at a mean of 14.6 years (range 10 to 17) after operation. Of the 263 hips in 243 patients, 122 hips are still in situ, 112 patients (119 hips) have died, 18 hips were revised, and three patients (four hips) had moved abroad and were lost to follow-up (1.5%). Radiographs demonstrated that two sockets had migrated and six more had radiolucent lines in all three zones. Kaplan-Meier survivorship at 15 years, with revision for all causes as the endpoint, is 89.9% (95% CI 84.6 to 95.2%); for aseptic cup loosening or lysis it is 91.7% (95% CI 86.6 to 96.8%). In the 210 hips with a diagnosis of primary osteoarthritis, survivorship for all causes is 93.2% (95% CI 88.1 to 98.3%), and for aseptic cup loosening 95.0% (95% CI 90.3 to 99.7%). The cemented all-polyethylene Exeter cup has excellent long-term survivorship.
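For illustration, a Kaplan-Meier survivorship estimate of the kind reported above can be computed with the lifelines library; the durations and event flags below are made-up placeholders, not the study's data.

```python
# Minimal sketch of a Kaplan-Meier survivorship estimate with revision
# as the event and death/loss to follow-up/still in situ as censoring.
from lifelines import KaplanMeierFitter

# years to revision (event=1) or censoring (event=0); illustrative values
durations = [15.0, 12.5, 14.6, 9.8, 16.2, 11.0, 13.4, 15.5]
events    = [0,    1,    0,    0,   1,    0,    0,    0]

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=events, label="revision (all causes)")
print(kmf.survival_function_)    # survivorship over time
print(kmf.confidence_interval_)  # 95% CI, as quoted in the abstract
```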
Abstract:
Background: The “Curriculum renewal in legal education” project has been funded by the Australian Learning and Teaching Council; its core objectives are to articulate a set of final year curriculum design principles and to develop a model of a transferable final year program. Through these principles and the model, it is anticipated that the final year experience will give law students greater opportunity to understand the relevance of their learning and will enhance their capacity to make decisions regarding their career path. Discussion / Argument: This paper reports on the project’s progress to date and presents an argument for the inclusion of work integrated learning (WIL) as a component of the final year experience in undergraduate law programs. The project has identified that the two principal objectives of capstone experiences are to provide closure and to facilitate transition to post-university life. Reflective practice and Bruner’s spiral curriculum model are the central theoretical foundations by which these objectives can be achieved, and experiential learning is increasingly seen as an essential element of a capstone experience. WIL is consistent with the objectives of capstones in focusing on the transition to professional practice and in providing opportunities for reflection; however, its ability to meet all of the objectives of capstones, particularly closure and integration, may be limited. Conclusions / Implications: The paper posits that while WIL should be considered as a potential component of a capstone experience, educators should ensure that WIL is not equated with a capstone experience unless it is carefully designed to ensure that all of the objectives of capstones are met. Keywords: Work-integrated learning, capstone, final year experience, law
Abstract:
Maintenance activities in a large-scale engineering system are usually scheduled according to the lifetimes of its various components in order to ensure the overall reliability of the system. Component lifetimes can be deduced from the corresponding probability distributions, with parameters estimated from past failure data. However, failure data are not always readily available, and engineers often have to plan maintenance using only the basic information supplied by manufacturers, such as the mean and standard deviation of lifetime. In this paper, a moment-based piecewise polynomial model (MPPM) is proposed to estimate the parameters of a product's lifetime distribution when only the mean and standard deviation are known. The method employs a group of polynomial functions to estimate the two parameters of the Weibull distribution, exploiting the mathematical relationship between the shape parameter of the two-parameter Weibull distribution and the ratio of the mean to the standard deviation. Tests are carried out to evaluate the validity and accuracy of the proposed method, with a discussion of its suitable applications. The method is particularly useful for reliability-critical systems, such as railway and power systems, in which maintenance activities are scheduled according to the expected lifetimes of the system components.
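The moment relation underlying this approach is that the coefficient of variation of a two-parameter Weibull depends on the shape parameter alone. The abstract's MPPM approximates that relation with piecewise polynomials; the sketch below instead solves it numerically, purely to illustrate the relation, and is not the paper's fitted model.

```python
# Recover Weibull (shape k, scale lam) from (mean, std) by moment
# matching: mean = lam*Gamma(1+1/k) and the coefficient of variation
# std/mean = sqrt(Gamma(1+2/k)/Gamma(1+1/k)^2 - 1) depends only on k.
from math import gamma, sqrt
from scipy.optimize import brentq

def weibull_from_moments(mean, std):
    """Two-parameter Weibull (k, lam) matching the given mean and std."""
    cv = std / mean
    def cv_gap(k):
        g1, g2 = gamma(1 + 1 / k), gamma(1 + 2 / k)
        return sqrt(g2 / g1**2 - 1) - cv
    k = brentq(cv_gap, 0.1, 50.0)   # bracket covers practical shapes
    lam = mean / gamma(1 + 1 / k)   # scale from the mean equation
    return k, lam

# Example: a component with mean life 1000 h and std 200 h.
print(weibull_from_moments(1000.0, 200.0))  # approx (5.8, 1081)
```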
Abstract:
Aims: Telemonitoring (TM) and structured telephone support (STS) have the potential to deliver specialised management to more patients with chronic heart failure (CHF), but their efficacy is still to be proven. Objectives: To review randomised controlled trials (RCTs) of TM or STS, as non-invasive remote models of specialised disease-management intervention, for their effect on all-cause mortality and on all-cause and CHF-related hospitalisations in patients with CHF. Methods: Data sources: We searched 15 electronic databases and hand-searched bibliographies of relevant studies, systematic reviews, and meeting abstracts; two reviewers independently extracted all data. Study eligibility and participants: We included any RCT comparing TM or STS with usual care of patients with CHF; studies that included intensified management with additional home or clinic visits were excluded. Synthesis: Primary outcomes (mortality and hospitalisations) were analysed; secondary outcomes (cost, length of stay, quality of life) were tabulated. Results: Thirty RCTs of STS and TM were identified: 25 peer-reviewed publications (n=8,323) and five abstracts (n=1,482). Of the 25 peer-reviewed studies, 11 evaluated TM (2,710 participants), 16 evaluated STS (5,613 participants), and two tested both interventions. TM reduced all-cause mortality (risk ratio (RR) 0.66 [95% CI 0.54-0.81], p<0.0001) and STS showed a similar trend (RR 0.88 [95% CI 0.76-1.01], p=0.08). Both TM (RR 0.79 [95% CI 0.67-0.94], p=0.008) and STS (RR 0.77 [95% CI 0.68-0.87], p<0.0001) reduced CHF-related hospitalisations. Both interventions improved quality of life, reduced costs, and were acceptable to patients. Improvements in prescribing, patient knowledge and self-care, and functional class were also observed. Conclusion: TM and STS both appear to be effective interventions for improving outcomes in patients with CHF.
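Summary risk ratios like those above are conventionally obtained by inverse-variance pooling of per-trial log risk ratios. The sketch below shows the fixed-effect version of that calculation under made-up event counts; it is not the review's data or necessarily its exact pooling model.

```python
# Hedged sketch of fixed-effect (inverse-variance) pooling behind a
# summary risk ratio such as "RR 0.66 [95% CI 0.54-0.81]".
import math

# (events_intervention, n_intervention, events_control, n_control)
trials = [(20, 200, 30, 200), (15, 150, 25, 150), (10, 120, 14, 115)]

num = den = 0.0
for a, n1, c, n2 in trials:
    log_rr = math.log((a / n1) / (c / n2))
    var = 1 / a - 1 / n1 + 1 / c - 1 / n2   # variance of log RR
    w = 1 / var                              # inverse-variance weight
    num += w * log_rr
    den += w

pooled = math.exp(num / den)
se = math.sqrt(1 / den)
lo = math.exp(num / den - 1.96 * se)
hi = math.exp(num / den + 1.96 * se)
print(f"pooled RR {pooled:.2f} [95% CI {lo:.2f}-{hi:.2f}]")
```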
Abstract:
The purpose of this work is to validate and automate the use of DYNJAWS, a new component module (CM) in the BEAMnrc Monte Carlo (MC) user code. The DYNJAWS CM simulates dynamic wedges and can be used in three modes: dynamic, step-and-shoot and static. The step-and-shoot and dynamic modes require an additional input file defining the positions of the jaw that constitutes the dynamic wedge, at regular intervals during its motion. A method for automating the generation of this input file is presented, allowing more efficient use of the DYNJAWS CM. Wedged profiles were measured and simulated for 6 and 10 MV photons at three field sizes (5 cm × 5 cm, 10 cm × 10 cm and 20 cm × 20 cm) and four wedge angles (15, 30, 45 and 60 degrees), at dmax and at 10 cm depth. Results show agreement between the measured and MC profiles to within 3% of absolute dose or 3 mm distance to agreement for all wedge angles at both energies and depths. The gamma analysis suggests that the dynamic mode is more accurate than the step-and-shoot mode. The DYNJAWS CM is an important addition to the BEAMnrc code and will enable MC verification of patient treatments involving dynamic wedges.
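The 3%/3 mm comparison above is a standard gamma analysis. A minimal 1D sketch of that metric follows; the profiles are toy arrays, not DYNJAWS output, and the global-normalisation choice is an assumption.

```python
# 1D gamma index: for each calculated point, take the minimum over
# measured points of the combined dose-difference / distance-to-agreement
# metric. gamma <= 1 means the point passes the 3%/3 mm criterion.
import numpy as np

def gamma_index(x, measured, calculated, dose_tol=0.03, dist_tol=3.0):
    d_max = measured.max()  # dose tolerance as a fraction of max dose
    gam = np.empty_like(measured, dtype=float)
    for i, (xi, di) in enumerate(zip(x, calculated)):
        dist2 = ((x - xi) / dist_tol) ** 2
        dose2 = ((measured - di) / (dose_tol * d_max)) ** 2
        gam[i] = np.sqrt(np.min(dist2 + dose2))
    return gam

x = np.linspace(-50, 50, 101)                  # position in mm
measured = np.exp(-(x / 40.0) ** 2)            # toy profile
calculated = np.exp(-((x - 1.0) / 40.0) ** 2)  # same profile, shifted 1 mm
print(f"pass rate: {(gamma_index(x, measured, calculated) <= 1).mean():.1%}")
```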
Abstract:
Different international plant protection organisations advocate different schemes for conducting pest risk assessments. Most of these schemes use a structured questionnaire in which experts are asked to score several items using an ordinal scale; the scores are then combined using a range of procedures, such as the simple arithmetic mean, weighted averages, multiplication of scores, and cumulative sums. The most useful schemes will correctly identify harmful pests while correctly screening out those that are not. Because the quality of a pest risk assessment can depend on the characteristics of the scoring system used by the risk assessors (i.e., on the number of points on the scale and on the method used to combine the component scores), it is important to assess and compare the performance of different scoring systems. In this article, we propose a new method for assessing scoring systems. Its principle is to simulate virtual data using a stochastic model and then to estimate sensitivity and specificity values from these data for different scoring systems. The approach is illustrated in a case study in which several scoring systems were compared. Data for this analysis were generated using a probabilistic model describing the pest introduction process; the generated data were then used to simulate the outcome of scoring systems and to assess the accuracy of decisions about positive and negative introduction. The results showed that ordinal scales with at most 5 or 6 points are sufficient and that multiplication-based scoring systems perform better than their sum-based counterparts. The proposed method could be used in the future to assess a wide variety of scoring systems.
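The simulation idea can be sketched as follows: draw ordinal item scores for harmful and harmless pests from a stochastic model, combine them by sum or by product, and compare sensitivity and specificity. The score distributions, threshold rule, and parameters below are illustrative assumptions, not the article's model.

```python
# Compare sum-based vs multiplication-based combination of ordinal
# questionnaire scores on simulated harmful/harmless pests.
import numpy as np

rng = np.random.default_rng(1)
n, items, levels = 2000, 4, 5   # pests per class, items, 5-point scale

def scores(shift):
    """Ordinal 1..levels item scores; harmful pests score higher on average."""
    p = np.clip(rng.normal(0.5 + shift, 0.15, (n, items)), 0.01, 0.99)
    return 1 + np.floor(p * levels).astype(int).clip(0, levels - 1)

harmful, harmless = scores(+0.15), scores(-0.15)

for name, combine in [("sum", lambda s: s.sum(1)),
                      ("product", lambda s: s.prod(1))]:
    pooled = np.concatenate([combine(harmful), combine(harmless)])
    thresh = np.median(pooled)  # toy decision threshold
    sens = (combine(harmful) > thresh).mean()    # harmful flagged
    spec = (combine(harmless) <= thresh).mean()  # harmless cleared
    print(f"{name}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```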