810 results for incremental EM
Abstract:
Background: As financial constraints can be a barrier to accessing HIV antiretroviral therapy (ART), we argue for the removal of copayment requirements from HIV medications in South Australia.
Methods: Using a simple mathematical model informed by available behavioural and biological data and reflecting the HIV epidemiology in South Australia, we calculated the expected number of new HIV transmissions caused by persons who are not currently on ART compared with transmissions caused by people on ART. The extra financial investment required to cover the copayments to prevent an HIV infection was compared with the treatment costs saved by averting HIV infections.
Results: It was estimated that one HIV infection is prevented per year for every 31.4 persons (median; interquartile range (IQR) 24.0–42.7) who receive treatment. Considering the incremental change in costs and outcomes relative to the current status quo, it would cost the health sector $17 860 per infection averted (median; IQR $13 651–$24 287) if ART is provided as a three-dose, three-drug combination without requirements for user-pay copayments.
Conclusions: The costs of removing copayment fees for ART are less than the costs of treating the extra HIV infections that would result under current conditions. Removing the copayment requirement for HIV medication would be cost-effective from a governmental perspective.
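As a rough illustration of the arithmetic behind these figures, the sketch below back-calculates the implied per-person copayment subsidy from the two medians quoted above and compares the cost per infection averted against a purely hypothetical lifetime treatment cost; the $250 000 figure is an assumption for illustration, not a number from the study.

```python
# Back-of-envelope reproduction of the incremental cost-effectiveness
# arithmetic described in the abstract. The per-person copayment subsidy is
# inferred from the two published medians; the lifetime treatment cost is a
# purely hypothetical placeholder, not stated in the abstract.

PERSONS_TREATED_PER_INFECTION_AVERTED = 31.4   # from the abstract (median)
COST_PER_INFECTION_AVERTED = 17_860.0          # AUD, from the abstract (median)

# Implied annual copayment subsidy per person on ART:
copay_per_person_year = COST_PER_INFECTION_AVERTED / PERSONS_TREATED_PER_INFECTION_AVERTED
print(f"Implied copay subsidy per person-year: ${copay_per_person_year:,.0f}")

# Hypothetical discounted lifetime treatment cost of one extra HIV infection.
LIFETIME_TREATMENT_COST = 250_000.0  # AUD, illustrative assumption only

net_saving = LIFETIME_TREATMENT_COST - COST_PER_INFECTION_AVERTED
print(f"Net saving per infection averted: ${net_saving:,.0f}")
```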
Abstract:
Product Ecosystem theory is an emerging theory which holds that disruptive “game changing” innovation is only possible when the entire ecosystem is considered. When environmental variables change faster than products or services can adapt, disruptive innovation is required to keep pace. This has many parallels with natural ecosystems, where species that cannot keep up with changes to the environment will struggle or become extinct. In this case the environment is the city, the environmental pressures are pollution and congestion, the product is the car, and the product ecosystem comprises roads, bridges, traffic lights, legislation, refuelling facilities, etc. Each one of these components is the responsibility of a different organisation, and so any change that affects the whole ecosystem requires a transdisciplinary approach. As a simple example, cars that communicate wirelessly with traffic lights are only of value if wireless-enabled traffic lights exist, and vice versa. Cars that drive themselves are technically possible, but legislation in most places doesn’t allow their use. According to innovation theory, incremental innovation tends to chase ever-diminishing returns and becomes increasingly unable to tackle the “big issues.” Eventually “game changing” disruptive innovation comes along and solves the “big issues” and/or provides new opportunities. Seen through this lens, the environmental pressures of urban traffic congestion and pollution are the “big issues.” It can be argued that the design of cars and the other components of the product ecosystem follows an incremental innovation approach; that is why the “big issues” remain unresolved. This paper explores the problems of pollution and congestion in urban environments from a Product Ecosystem perspective. From this, a strategy is proposed for a transdisciplinary approach to developing and implementing solutions.
Abstract:
Active learning approaches reduce the annotation cost required by traditional supervised approaches to reach the same effectiveness, by actively selecting informative instances during the learning phase. However, the effectiveness and robustness of the learnt models are influenced by a number of factors. In this paper we investigate the factors that affect the effectiveness, more specifically the stability and robustness, of active learning models built using conditional random fields (CRFs) for information extraction applications. Stability, defined as small variation in performance when small variations in the training data or in the parameters occur, is a major issue for machine learning models, but even more so in the active learning framework, which aims to minimise the amount of training data required. The factors we investigate are (a) the choice of incremental vs. standard active learning, (b) the feature set used as a representation of the text (i.e., morphological, syntactic, or semantic features), and (c) the Gaussian prior variance as one of the important CRF parameters. Our empirical findings show that incremental learning and the Gaussian prior variance lead to more stable and robust models across iterations. Our study also demonstrates that orthographical, morphological and contextual features, as a group of basic features, play an important role in learning effective models across all iterations.
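As a loose illustration of the incremental active learning setup the paper compares, here is a minimal sketch of a least-confidence query loop. A scikit-learn LogisticRegression stands in for the CRF purely to keep the example self-contained and runnable, with warm_start approximating incremental (weight-reusing) retraining; the data, seed-set size, and batch size are arbitrary.

```python
# Minimal sketch of an incremental active learning loop with a
# least-confidence selection criterion. LogisticRegression is a stand-in
# for the CRF; warm_start=True reuses the previous iteration's weights,
# mimicking incremental rather than from-scratch retraining.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)      # synthetic labels

# Seed set guaranteed to contain both classes
labelled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(X)) if i not in labelled]
model = LogisticRegression(max_iter=200, warm_start=True)

for it in range(5):
    model.fit(X[labelled], y[labelled])            # incremental refit
    proba = model.predict_proba(X[pool])
    confidence = proba.max(axis=1)                 # P(most likely label)
    query = [pool[i] for i in np.argsort(confidence)[:20]]  # least confident
    labelled.extend(query)                         # oracle supplies y[query]
    pool = [i for i in pool if i not in query]
    print(f"iter {it}: {len(labelled)} labelled instances")
```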
Abstract:
Aortic root replacement is a complex procedure, though subsequent modifications of the original Bentall procedure have made surgery more reproducible. The study aim was to examine the outcomes of a modified Bentall procedure using the Medtronic Open Pivot™ valved conduit. Whilst short-term data on the conduit and long-term data on the valve itself are available, little is known of the long-term results with the valved conduit. Patients undergoing aortic root replacement between February 1999 and February 2010 using the Medtronic Open Pivot valved conduit were identified from the prospectively collected Cardiothoracic Register at The Prince Charles Hospital, Brisbane, Australia. All patients were followed up echocardiographically and clinically. The primary end-point was death, and a Cox proportional hazards model was used to identify factors associated with survival. Secondary end-points were valve-related morbidity (as defined by STS guidelines) and postoperative morbidity. Predictors of morbidity were identified using logistic regression. A total of 246 patients (mean age 50 years) was included in the study. The overall mortality was 12%, with actuarial 10-year survival of 79% and a 10-year estimate of valve-related death of 0.04 (95% CI: 0.004, 0.07). Preoperative myocardial infarction (p = 0.004, HR 4.74), urgency of operation (p = 0.038, HR 2.8) and 10% incremental decreases in ejection fraction (p = 0.046, HR 0.69) were predictive of mortality. Survival was also affected by valve gradients, with a unit increase in peak gradient reducing mortality (p = 0.021, HR 0.93). Valve-related morbidity occurred in 11 patients. Urgent surgery (p < 0.001, OR 4.12), aortic dissection (p = 0.015, OR 3.35), calcific aortic stenosis (p = 0.016, OR 2.35) and Marfan syndrome (p = 0.009, OR 3.75) were predictive of postoperative morbidity. The reoperation rate was 1.2%. The Medtronic Open Pivot valved conduit is a safe and durable option for aortic root replacement, and is associated with low morbidity and 10-year survival of 79%. However, further studies are required to determine the effect of valve gradient on survival.
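For readers unfamiliar with the analysis style reported here, the sketch below fits a Cox proportional hazards model of the same general shape on synthetic data, assuming the lifelines library; all column names, coefficients, and data are hypothetical and do not reproduce the study's results.

```python
# Illustrative Cox proportional-hazards fit of the kind the study reports
# for mortality. Data are synthetic and the covariate coding is invented;
# only the modelling approach mirrors the abstract.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 200
prior_mi = rng.integers(0, 2, n)              # preoperative MI (0/1)
urgent = rng.integers(0, 2, n)                # urgent operation (0/1)
ef = rng.uniform(30, 70, n)                   # ejection fraction, %

# Hypothetical hazard: higher with prior MI and urgency, lower with higher EF
hazard = 0.05 * np.exp(1.2 * prior_mi + 0.8 * urgent - 0.03 * (ef - 50))
time = rng.exponential(1 / hazard)
df = pd.DataFrame({
    "years": np.minimum(time, 10),            # administrative censoring at 10 y
    "died": (time < 10).astype(int),
    "prior_mi": prior_mi,
    "urgent_surgery": urgent,
    "ejection_fraction": ef,
})

# Hazard ratios analogous to the HRs quoted in the abstract
CoxPHFitter().fit(df, duration_col="years", event_col="died").print_summary()
```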
Abstract:
A modification to the PVA-FX hydrogel, whereby the chelating agent, xylenol orange, was partially bonded to the gelling agent, poly-vinyl alcohol, resulted in an 8% reduction in the post-irradiation Fe³⁺ diffusion, adding approximately 1 hour to the useful timespan between irradiation and readout. This xylenol orange functionalised poly-vinyl alcohol hydrogel had an OD dose sensitivity of 0.014 Gy⁻¹ and a diffusion rate of 0.133 mm² h⁻¹. As this partial bond yields only an incremental improvement, it is proposed that more efficient methods of bonding xylenol orange to poly-vinyl alcohol be investigated to further reduce diffusion in Fricke gels.
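To see why the diffusion rate bounds the readout window, a quick worked calculation helps: the RMS one-dimensional spread of Fe³⁺ after time t is √(2Dt). The 1 mm blur tolerance used below is an illustrative assumption, not a figure from the paper.

```python
# Worked example: time for the reported Fe3+ diffusion rate to produce a
# given RMS spatial blur, using the 1-D diffusion relation sigma = sqrt(2*D*t).
# The 1 mm tolerance is an assumption for illustration only.
import math

D = 0.133          # mm^2 / h, reported diffusion rate
blur_limit = 1.0   # mm, assumed tolerable spatial blurring

t_max = blur_limit**2 / (2 * D)
print(f"Time for {blur_limit} mm RMS spread: {t_max:.1f} h")  # ~3.8 h
```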
Abstract:
Australia currently has a small generic and biosimilar industry despite having a good track record in biomedical research and a sound reputation in producing high quality but small volume biological pharmaceuticals. In recent times, Australia has made incremental changes to its regulation of biosimilars – in patent registration, in the use of commercial confidential information, and in remuneration. These improvements, together with Australia’s geographical proximity and strong trade relationship with the Asian biocluster have positioned Australia to take advantage of potential public cost savings from the increased use of biosimilars.
Abstract:
In the current regulatory climate, there is increasing expectation that law schools will be able to demonstrate students’ acquisition of learning outcomes regarding collaboration skills. We argue that this is best achieved through a stepped and structured whole-of-curriculum approach to small group learning. ‘Group work’ provides deep learning and opportunities to develop professional skills, but these benefits are not always realised for law students. One issue is that what is meant by ‘group work’ is not always clear, resulting in a learning regime that may not support the attainment of desired outcomes. This paper describes different types of ‘group work’, each associated with distinct learning outcomes. It suggests that ‘group work’ as an umbrella term to describe these types is confusing, as it provides little indication to students and teachers of the type of learning that is valued and expected to take place. ‘Small group learning’ is a preferable general descriptor. Identifying different types of small group learning allows law schools to develop and demonstrate a scaffolded, sequential and incremental approach to fostering law students’ collaboration skills. To support learning and the acquisition of higher-order skills, different types of small group learning are more appropriate at certain stages of the program. This structured approach is consistent with social cognitive theory, which suggests that, with the guidance of a supportive teacher, students can develop skills and confidence in one type of activity, which then enhances motivation to participate in another.
Abstract:
In recommender systems based on multidimensional data, additional metadata provides algorithms with more information for better understanding the interaction between users and items. However, most profiling approaches in neighbourhood-based recommendation for multidimensional data merely split or project the dimensional data, and lack consideration of the latent interaction between the dimensions of the data. In this paper, we propose a novel user/item profiling approach for Collaborative Filtering (CF) item recommendation on multidimensional data. We further present an incremental profiling method for updating the profiles. For item recommendation, we seek to delve into different types of relations in the data to understand the interaction between users and items more fully, and propose three multidimensional CF recommendation approaches for top-N item recommendation based on the proposed user/item profiles. The proposed multidimensional CF approaches are capable of incorporating not only localized relations of user-user and/or item-item neighbourhoods but also the latent interaction between all dimensions of the data. Experimental results show significant improvements in terms of recommendation accuracy.
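A toy sketch of the general idea of incremental user profiling over multidimensional metadata follows; the decay-and-accumulate update rule and the tag names are illustrative assumptions, not the paper's actual formulation.

```python
# Toy incremental profiling for multidimensional CF: each profile is a
# weighted vector over metadata dimensions (e.g. tags), updated in place as
# new interactions arrive, then used for cosine-ranked top-N recommendation.
from collections import defaultdict

def update_profile(profile, interaction_features, decay=0.9):
    """Incrementally fold one new interaction into an existing profile."""
    for dim in profile:                          # decay old evidence
        profile[dim] *= decay
    for dim, weight in interaction_features.items():
        profile[dim] += weight                   # accumulate new evidence
    return profile

def cosine(p, q):
    dot = sum(p[d] * q.get(d, 0.0) for d in p)
    norm = (sum(v * v for v in p.values()) ** 0.5) * \
           (sum(v * v for v in q.values()) ** 0.5)
    return dot / norm if norm else 0.0

user = defaultdict(float)
update_profile(user, {"genre:scifi": 1.0, "tag:space": 0.5})
update_profile(user, {"genre:scifi": 1.0, "tag:ai": 0.8})

items = {"item1": {"genre:scifi": 1.0, "tag:ai": 1.0},
         "item2": {"genre:romance": 1.0}}
top_n = sorted(items, key=lambda i: cosine(user, items[i]), reverse=True)
print(top_n)   # item1 ranks first
```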
Abstract:
The present contribution deals with the numerical modelling of railway track-supporting systems, using coupled finite-infinite elements to represent the near- and distant-field stress distribution, and employing a thin-layer interface element to account for the interfacial behaviour between sleepers and ballast. To simulate the relative debonding, slipping and crushing at the contact area between sleepers and ballast, a modified Mohr-Coulomb criterion was adopted. Furthermore, an attempt was made to consider the elasto-plastic material non-linearity of the railway track supporting media by employing different constitutive models to represent steel, concrete and other supporting materials. It is seen that, during an incremental-iterative mode of load application, yielding initially started from the edge of the sleepers and then flowed vertically downwards and spread towards the centre of the railway supporting system.
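The interface treatment can be illustrated with a minimal check of a Mohr-Coulomb criterion at a sleeper-ballast contact point; the cohesion, friction angle, and tension cutoff below are placeholder values, and the particular modification adopted in the paper is not specified in the abstract.

```python
# Minimal Mohr-Coulomb style check at a sleeper-ballast interface point:
# slip occurs once shear stress exceeds the frictional strength, debonding
# once the normal stress is tensile beyond the cutoff. Parameter values are
# illustrative only.
import math

def interface_state(sigma_n, tau, c=10e3, phi_deg=35.0, tension_cutoff=0.0):
    """Classify an interface point (stresses in Pa, compression positive)."""
    if sigma_n < tension_cutoff:                 # tensile: bond broken
        return "debonded"
    strength = c + sigma_n * math.tan(math.radians(phi_deg))
    return "slipping" if abs(tau) > strength else "sticking"

# Example: moderate compression with high shear yields slip
print(interface_state(sigma_n=50e3, tau=60e3))   # -> "slipping"
```

In an incremental-iterative analysis, a check of this kind is evaluated at every interface integration point after each load increment, and stresses violating the criterion are redistributed on subsequent iterations.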
Abstract:
The Bruneau-Jarbidge eruptive center (BJEC) in the central Snake River Plain, Idaho, USA consists of the Cougar Point Tuff (CPT), a series of ten high-temperature (900-1000°C) voluminous ignimbrites produced over the explosive phase of volcanism (12.8-10.5 Ma), and more than a dozen equally high-temperature rhyolite lava flows produced during the effusive phase (10.5-8 Ma). Spot analyses by ion microprobe of oxygen isotope ratios in 210 zircons demonstrate that all of the eruptive units of the BJEC are characterized by zircon δ¹⁸O values ≤ 2.5‰, thus documenting the largest low-δ¹⁸O silicic volcanic province known on Earth (>10⁴ km³). There is no evidence for voluminous normal-δ¹⁸O magmatism at the BJEC preceding the generation of low-δ¹⁸O magmas, as there is at other volcanic centers that generate low-δ¹⁸O magmas, such as Heise and Yellowstone. At these younger volcanic centers of the hotspot track, such low-δ¹⁸O magmas represent ~45% and ~20%, respectively, of total eruptive volumes. Zircons in all BJEC tuffs and lavas studied (23 units) document strong δ¹⁸O depletion (median CPT δ¹⁸OZrc = 1.0‰, post-CPT lavas = 1.5‰), with the third member of the CPT recording an excursion to minimum δ¹⁸O values (δ¹⁸OZrc = -1.8‰) in a supereruption > 2‰ lower than other voluminous low-δ¹⁸O rhyolites known worldwide (δ¹⁸OWR ≤ 0.9 vs. 3.4‰). Subsequent units of the CPT and lavas record a progressive recovery in δ¹⁸OZrc to ~2.5‰ over a ~4 m.y. interval (12 to 8 Ma). We present detailed evidence of unit-to-unit systematic patterns in O isotopic zoning in zircons (i.e., direction and magnitude of Δcore-rim), the spectrum of δ¹⁸O in individual units, and zircon inheritance patterns established by re-analysis of spots for U-Th-Pb isotopes by LA-ICPMS and SHRIMP. In conjunction with mineral thermometry and magma compositions, these patterns are difficult to reconcile with the well-established model for "cannibalistic" low-δ¹⁸O magma genesis at Heise and Yellowstone. We present an alternative model for the central Snake River Plain using the modeling results of Leeman et al. (2008) for ¹⁸O depletion as a function of depth in a mid-upper crustal protolith that was hydrothermally altered by infiltrating meteoric waters prior to the onset of silicic magmatism. The model proposes that BJEC silicic magmas were generated in response to the propagation of a melting front, driven by the incremental growth of a vast underlying mafic sill complex, over a ~5 m.y. interval through a crustal volume in which a vertically asymmetric δ¹⁸OWR gradient had previously developed that was sharply inflected from ~-1 to 10‰ at mid-upper crustal depths. Within the context of the model, data from BJEC zircons are consistent with incremental melting and mixing events in roof zones of magma reservoirs that accompany surfaceward advance of the coupled mafic-silicic magmatic system.
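The mixing component of this model can be illustrated with a simple oxygen-isotope mass balance; the end-member δ¹⁸O values below are loosely guided by the figures quoted above but are otherwise assumptions, and equal oxygen concentrations in both melts are assumed.

```python
# Two-component oxygen-isotope mass balance: melt from hydrothermally
# altered (low-d18O) crust mixed with normal-d18O magma. End-member values
# are illustrative assumptions; equal oxygen concentrations are assumed so
# that d18O mixes linearly with mass fraction.

def d18o_mix(f_altered, d18o_altered=-1.0, d18o_normal=6.0):
    """d18O of a mixture containing mass fraction f_altered of altered melt."""
    return f_altered * d18o_altered + (1 - f_altered) * d18o_normal

for f in (0.25, 0.50, 0.75):
    print(f"f_altered = {f:.2f}: d18O = {d18o_mix(f):+.1f} per mil")
```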
Abstract:
Objective: This paper presents an automatic active learning-based system for the extraction of medical concepts from clinical free-text reports. Specifically, it determines (1) the contribution of active learning in reducing the annotation effort, and (2) the robustness of the incremental active learning framework across different selection criteria and datasets.
Materials and methods: The comparative performance of an active learning framework and a fully supervised approach was investigated to study how active learning reduces the annotation effort while achieving the same effectiveness as a supervised approach. Conditional Random Fields were used as the supervised method, with least confidence and information density as two selection criteria for the active learning framework. The effect of incremental learning vs. standard learning on the robustness of the models within the active learning framework with different selection criteria was also investigated. Two clinical datasets were used for evaluation: the i2b2/VA 2010 NLP challenge and the ShARe/CLEF 2013 eHealth Evaluation Lab.
Results: The annotation effort saved by active learning to achieve the same effectiveness as supervised learning is up to 77%, 57%, and 46% of the total number of sequences, tokens, and concepts, respectively. Compared to the random sampling baseline, the saving is at least doubled.
Discussion: Incremental active learning guarantees robustness across all selection criteria and datasets. The reduction of annotation effort is always above the random sampling and longest sequence baselines.
Conclusion: Incremental active learning is a promising approach for building effective and robust medical concept extraction models while significantly reducing the burden of manual annotation.
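The two selection criteria named above can be sketched as follows. The formulation (least confidence as one minus the probability of the most likely labelling, and information density as that uncertainty weighted by mean similarity to the rest of the pool) follows the standard definitions in the active learning literature; all inputs here are toy values.

```python
# Sketch of the two selection criteria: least confidence scores a sequence
# by 1 - P(most likely labelling); information density weights that score
# by average similarity to the rest of the pool, favouring dense regions.
import numpy as np

def least_confidence(best_sequence_prob):
    return 1.0 - best_sequence_prob

def information_density(uncertainty, similarities, beta=1.0):
    # similarities: this sequence's similarity to every pool sequence
    return uncertainty * (np.mean(similarities) ** beta)

probs = np.array([0.9, 0.4, 0.6])          # P(best labelling) per sequence
sims = np.array([[1.0, 0.2, 0.3],
                 [0.2, 1.0, 0.7],
                 [0.3, 0.7, 1.0]])         # toy pairwise pool similarities

scores = [information_density(least_confidence(p), sims[i])
          for i, p in enumerate(probs)]
print(int(np.argmax(scores)), scores)      # query the highest-scoring sequence
```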
Abstract:
The context in which objects are presented influences the speed at which they are named. We employed the blocked cyclic naming paradigm and perfusion functional magnetic resonance imaging (fMRI) to investigate the mechanisms responsible for interference effects reported for thematically and categorically related compared to unrelated contexts. Naming objects in categorically homogeneous contexts induced a significant interference effect that accumulated from the second cycle onwards. This interference effect was associated with significant perfusion signal decreases in left middle and posterior lateral temporal cortex and the hippocampus. By contrast, thematically homogeneous contexts facilitated naming latencies significantly in the first cycle and did not differ from heterogeneous contexts thereafter, nor were they associated with any perfusion signal changes compared to heterogeneous contexts. These results are interpreted as being consistent with an account in which the interference effect both originates and has its locus at the lexical level, with an incremental learning mechanism adapting the activation levels of target lexical representations following access. We discuss the implications of these findings for accounts that assume thematic relations can be active lexical competitors or assume mandatory involvement of top-down control mechanisms in interference effects during naming.
Abstract:
Word frequency (WF) and strength effects are two important phenomena associated with episodic memory. The former refers to the superior hit-rate (HR) for low (LF) compared to high frequency (HF) words in recognition memory, while the latter describes the incremental effect(s) upon HRs associated with repeating an item at study. Using the "subsequent memory" method with event-related fMRI, we tested the attention-at-encoding (AE) [M. Glanzer, J.K. Adams, The mirror effect in recognition memory: data and theory, J. Exp. Psychol.: Learn Mem. Cogn. 16 (1990) 5-16] explanation of the WF effect. In addition to investigating encoding strength, we addressed if study involves accessing prior representations of repeated items via the same mechanism as that at test [J.L. McClelland, M. Chappell, Familiarity breeds differentiation: a subjective-likelihood approach to the effects of experience in recognition memory, Psychol. Rev. 105 (1998) 724-760], entailing recollection [K.J. Malmberg, J.E. Holden, R.M. Shiffrin, Modeling the effects of repetitions, similarity, and normative word frequency on judgments of frequency and recognition memory, J. Exp. Psychol.: Learn Mem. Cogn. 30 (2004) 319-331] and whether less processing effort is entailed for encoding each repetition [M. Cary, L.M. Reder, A dual-process account of the list-length and strength-based mirror effects in recognition, J. Mem. Lang. 49 (2003) 231-248]. The increased BOLD responses observed in the left inferior prefrontal cortex (LIPC) for the WF effect provide support for an AE account. Less effort does appear to be required for encoding each repetition of an item, as reduced BOLD responses were observed in the LIPC and left lateral temporal cortex; both regions demonstrated increased responses in the conventional subsequent memory analysis. At test, a left lateral parietal BOLD response was observed for studied versus unstudied items, while only medial parietal activity was observed for repeated items at study, indicating that accessing prior representations at encoding does not necessarily occur via the same mechanism as that at test, and is unlikely to involve a conscious recall-like process such as recollection. This information may prove useful for constraining cognitive theories of episodic memory.
Abstract:
In recent years, rapid advances in information technology have led to various data collection systems which are enriching the sources of empirical data for use in transport systems. Currently, traffic data are collected through various sensors including loop detectors, probe vehicles, cell phones, Bluetooth, video cameras, remote sensing and public transport smart cards. It has been argued that combining the complementary information from multiple sources will generally result in better accuracy, increased robustness and reduced ambiguity. Despite the fact that there have been substantial advances in data assimilation techniques to reconstruct and predict the traffic state from multiple data sources, such methods are generally data-driven and do not fully utilize the power of traffic models. Furthermore, the existing methods are still limited to freeway networks and are not yet applicable in the urban context due to the greater complexity of the flow behavior. The main traffic phenomena on urban links are generally caused by the boundary conditions at intersections, un-signalized or signalized, at which the switching of the traffic lights and the turning maneuvers of road users lead to shock-wave phenomena that propagate upstream of the intersections. This paper develops a new model-based methodology to build a real-time traffic prediction model for arterial corridors using data from multiple sources, particularly loop detectors and partial observations from Bluetooth and GPS devices.
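As an example of the model-based ingredient such a predictor builds on, here is a minimal cell-transmission-model update for an arterial link with a traffic signal at its downstream end; all parameters are illustrative assumptions, and the paper's actual model is not specified in the abstract.

```python
# Minimal cell-transmission-model (CTM) step for an arterial link: cell
# densities update from interface flows, and the downstream signal acts as
# a boundary condition (red blocks the exit, producing upstream queueing
# shock waves). Parameters are illustrative, not from the paper.
import numpy as np

V_FREE, W_BACK = 50.0, 20.0        # km/h free-flow and backward-wave speeds
K_JAM, Q_MAX = 150.0, 1800.0       # veh/km jam density, veh/h capacity
DX = 0.1                           # cell length, km
DT = DX / V_FREE                   # time step satisfying the CFL condition, h

def ctm_step(k, inflow, signal_green):
    """Advance cell densities k (veh/km) one time step."""
    demand = np.minimum(V_FREE * k, Q_MAX)            # what cells can send
    supply = np.minimum(W_BACK * (K_JAM - k), Q_MAX)  # what cells can accept
    flows = np.empty(len(k) + 1)
    flows[0] = min(inflow, supply[0])                 # upstream boundary
    flows[1:-1] = np.minimum(demand[:-1], supply[1:])
    flows[-1] = demand[-1] if signal_green else 0.0   # red light blocks exit
    return k + (DT / DX) * (flows[:-1] - flows[1:])

k = np.full(5, 30.0)               # 5 cells at 30 veh/km
for t in range(10):
    k = ctm_step(k, inflow=1200.0, signal_green=(t % 4 < 2))
print(np.round(k, 1))              # queue builds upstream of the red signal
```

A data-assimilation layer of the kind the paper discusses would correct states like k using loop-detector counts and Bluetooth/GPS travel-time observations.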
Abstract:
"It's akin to the old Spanish, English and Portuguese explorers. They would take their boats until they found some edge of land, then they would go up and plant the flag of their king or queen. They didn't know what they'd discovered; how big it is, where it goes to - but they would claim it anyway."
David Korn of the Association of American Medical Colleges

This article analyses recent litigation over patent law and expressed sequence tags (ESTs). In the case of In re Fisher, the United States Court of Appeals for the Federal Circuit engaged in judicial consideration of the revised utility guidelines of the United States Patent and Trademark Office (USPTO). In this matter, the agricultural biotechnology company Monsanto sought to patent ESTs in maize plants. A patent examiner and the Board of Patent Appeals and Interferences had doubted whether the patent application was useful. Monsanto appealed against the rulings of the USPTO. A number of amicus curiae intervened in the matter in support of the USPTO, including Genentech, Affymetrix, Dow AgroSciences, Eli Lilly, the National Academy of Sciences, and the Association of American Medical Colleges. The majority of the Court of Appeals for the Federal Circuit supported the position of the USPTO, and rejected the patent application on the grounds of utility. The split decision highlighted institutional tensions over the appropriate thresholds for patent criteria such as novelty, non-obviousness, and utility. The litigation raised larger questions about the definition of research tools, the incremental nature of scientific progress, and the role of patent law in innovation policy. The decision of In re Fisher will have significant ramifications for gene patents, in the wake of the human genome project. Arguably, the USPTO utility guidelines need to be reinforced by a tougher application of the standards of novelty and non-obviousness in respect of gene patents.