14 results for Subsequential Completeness

in CentAUR: Central Archive University of Reading - UK


Relevance:

20.00%

Publisher:

Abstract:

Resetting of previously accumulated optically stimulated luminescence (OSL) signals during sediment transport is a fundamental requirement for reliable optical dating. The completeness of optical resetting of 46 modern-age quartz samples from a variety of depositional environments was examined. All equivalent dose (De) estimates were [value not recoverable], with the majority of aeolian samples [value not recoverable] and fluvial samples [value not recoverable]. The OSL signal of quartz originates from several trap types with different rates of charge loss during illumination. As such, incomplete bleaching may be identifiable as an increase in De from easy-to-bleach through to hard-to-bleach components. For all modern fluvial samples with non-zero De values, SAR De(t) analysis and component-resolved linearly modulated OSL (LM OSL) De estimates showed this to be the case, implying incomplete resetting of previously accumulated charge. LM OSL measurements were also made to investigate the extent of bleaching of the slow components in the natural environment. In the aeolian sediments examined, the natural LM OSL was effectively zero (i.e. all components were fully reset). The slow components of modern fluvial samples displayed measurable residual signals of up to 15 Gy.

Relevance:

10.00%

Publisher:

Abstract:

We give a non-commutative generalization of classical symbolic coding in the presence of a synchronizing word. This is done via a scattering-theoretical approach. Classically, the existence of a synchronizing word turns out to be equivalent to asymptotic completeness of the corresponding Markov process. A criterion for asymptotic completeness in general is provided by the regularity of an associated extended transition operator. Commutative and non-commutative examples are analysed.
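To make the classical notion concrete: a word is synchronizing if it maps every state of a deterministic automaton to one and the same state. A minimal sketch, where the 3-state automaton and the reset word "aa" are invented for illustration and are not taken from the paper:

```python
# A synchronizing word collapses the full state set to a single state.

def apply_word(delta, states, word):
    """Image of a set of states under a word, given transition table delta."""
    current = set(states)
    for letter in word:
        current = {delta[(q, letter)] for q in current}
    return current

# Hypothetical transitions: letter 'a' merges states, letter 'b' cycles them.
delta = {
    (0, 'a'): 0, (1, 'a'): 0, (2, 'a'): 1,
    (0, 'b'): 1, (1, 'b'): 2, (2, 'b'): 0,
}

# "aa" is synchronizing here: {0,1,2} -a-> {0,1} -a-> {0}.
image = apply_word(delta, {0, 1, 2}, "aa")
```

Classically, one checks asymptotic loss of memory of the initial state in exactly this sense; the paper's contribution is the non-commutative analogue of that equivalence.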

Relevance:

10.00%

Publisher:

Abstract:

This note considers variance estimation for population size estimators based on capture–recapture experiments. Whereas a diversity of estimators of the population size has been suggested, the question of estimating the associated variances is less frequently addressed. This note points out that the technique of conditioning can be applied successfully here, which also allows us to identify the sources of variation: the variance due to estimation of the model parameters, and the binomial variance due to sampling n units from a population of size N. The technique is applied to estimators typically used in capture–recapture experiments in continuous time, including the estimators of Zelterman and Chao, and improves upon previously used variance estimators. In addition, knowledge of the variances associated with the Zelterman and Chao estimators allows the suggestion of a new estimator as a weighted sum of the two. The decomposition of the variance into the two sources also allows a new understanding of how resampling techniques such as the bootstrap can be used appropriately. Finally, the sample size question for capture–recapture experiments is addressed. Since the variance of population size estimators increases with the sample size, relative measures such as the observed-to-hidden ratio or the completeness-of-identification proportion are suggested for approaching the question of sample size choice.
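For context, the two point estimators named above can be sketched from the frequency counts f1 (units captured exactly once) and f2 (units captured exactly twice). The formulas are the standard Chao and Zelterman forms; the counts below are hypothetical, and the note's variance decomposition itself is not reproduced here.

```python
import math

def chao_estimate(n, f1, f2):
    """Chao's lower-bound estimator: N_hat = n + f1^2 / (2 * f2)."""
    return n + f1 ** 2 / (2 * f2)

def zelterman_estimate(n, f1, f2):
    """Zelterman's estimator via the Poisson rate lambda_hat = 2 * f2 / f1."""
    lam = 2 * f2 / f1
    return n / (1 - math.exp(-lam))

# Hypothetical counts: n observed units, f1 seen exactly once, f2 seen twice.
n, f1, f2 = 100, 50, 25
n_chao = chao_estimate(n, f1, f2)
n_zelt = zelterman_estimate(n, f1, f2)
```

A weighted sum of the two, as the note proposes, would combine `n_chao` and `n_zelt` with weights derived from their respective variances.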

Relevance:

10.00%

Publisher:

Abstract:

The separation of mixtures of proteins by SDS-polyacrylamide gel electrophoresis (SDS-PAGE) is a widely used technique, and indeed it underlies many of the assays and analyses described in this book. While SDS-PAGE is routine in many labs, a number of issues require consideration before embarking on it for the first time. We felt, therefore, that in the interest of completeness of this volume, a brief chapter describing the basics of SDS-PAGE would be helpful. Also included in this chapter are protocols for the staining of SDS-PAGE gels to visualize separated proteins, and for the electrotransfer of proteins to a membrane support (Western blotting) to enable, for example, immunoblotting. This chapter is intended to complement the chapters in this book that require these techniques to be performed. Therefore, detailed examples of why and when these techniques could be used will not be discussed here.

Relevance:

10.00%

Publisher:

Abstract:

This paper presents ongoing research on integrating process automation and process management support in the context of media production. This is addressed through a holistic software engineering approach applied to media production modelling, to ensure design correctness, completeness and effectiveness. The focus of the research and development has been to enhance metadata management throughout the process, in a similar fashion to that achieved in Decision Support Systems (DSS), to facilitate well-grounded business decisions. The paper sets out the aims, objectives and methodology deployed, describes the solution in some detail, and presents preliminary conclusions and planned future work.

Relevance:

10.00%

Publisher:

Abstract:

We present a novel topology of the radial basis function (RBF) neural network, referred to as the boundary value constraints (BVC)-RBF, which automatically satisfies a set of boundary value constraints. Unlike most existing neural networks, in which the model is identified by learning from observational data only, the proposed BVC-RBF offers a generic framework that takes into account both deterministic prior knowledge and stochastic data in an intelligent manner. Like a conventional RBF, the proposed BVC-RBF has a linear-in-the-parameters structure, so that many existing algorithms for linear-in-the-parameters models are directly applicable. The BVC satisfaction properties of the proposed BVC-RBF are discussed. Finally, numerical examples based on the combined D-optimality-based orthogonal least squares algorithm are utilized to illustrate the performance of the proposed BVC-RBF for completeness.
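A minimal sketch of the linear-in-the-parameters property the abstract relies on: with fixed Gaussian centers and widths, the RBF weights follow from ordinary least squares. This illustrates a plain RBF only, not the BVC construction or the D-optimality-based orthogonal least squares algorithm; the target function, centers, and width are illustrative assumptions.

```python
import numpy as np

def rbf_design(x, centers, width):
    """Gaussian RBF design matrix: one column per (fixed) center."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x)              # illustrative target function
centers = np.linspace(0, 1, 8)         # centers fixed, so the model is linear in w
Phi = rbf_design(x, centers, width=0.15)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # closed-form least squares fit
y_hat = Phi @ w                        # fitted values
```

Because the fit reduces to solving one linear system, any refinement that operates on linear-in-the-parameters models (subset selection, regularised least squares, D-optimal design) plugs in directly, which is the advantage the abstract highlights.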

Relevance:

10.00%

Publisher:

Abstract:

This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bezier-Bernstein polynomial functions. The approach is general in that it copes with n-dimensional inputs, utilising an additive decomposition construction to overcome the curse of dimensionality associated with large n. This new construction algorithm also introduces univariate Bezier-Bernstein polynomial functions for the completeness of the generalized procedure. Like B-spline expansion based neurofuzzy systems, Bezier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, with the additional advantages of structural parsimony and Delaunay input space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. This new modeling network is based on an additive decomposition approach together with two separate basis function formation approaches for the univariate and bivariate Bezier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data based modeling approach.
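The Bernstein basis properties the abstract lists, nonnegativity and summing to one (which underpins the fuzzy-membership interpretation), can be checked directly. The degree and evaluation point below are arbitrary illustrative choices:

```python
from math import comb

def bernstein_basis(n, x):
    """The n+1 Bernstein polynomials of degree n evaluated at x in [0, 1]."""
    return [comb(n, i) * x ** i * (1 - x) ** (n - i) for i in range(n + 1)]

# Each value is nonnegative and the values form a partition of unity.
vals = bernstein_basis(3, 0.3)
```

In the neurofuzzy reading, each basis value acts as a fuzzy membership grade, and the partition-of-unity property guarantees the grades are normalised at every input.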

Relevance:

10.00%

Publisher:

Abstract:

Planning of autonomous vehicles in the absence of speed lanes is a less-researched problem. However, it is an important step toward extending the possibility of autonomous vehicles to countries where speed lanes are not followed. The advantages of non-lane-oriented traffic include larger traffic bandwidth and more overtaking, features that are highlighted when vehicles vary in terms of speed and size. In the most general case, the road would be filled with a complex grid of static obstacles and vehicles of varying speeds. The optimal travel plan consists of a set of maneuvers that enables a vehicle to avoid obstacles and to overtake vehicles in an optimal manner and, in turn, enables other vehicles to overtake. The desired characteristics of this planning scenario include near completeness and near optimality in real time in an unstructured environment, with vehicles essentially displaying a high degree of cooperation and enabling every possible (safe) overtaking procedure to be completed as soon as possible. Challenges addressed in this paper include a (fast) method for initial path generation using an elastic strip, (re-)defining the notion of completeness specific to the problem, and inducing the notion of cooperation in the elastic strip. Using this approach, vehicular behaviors of overtaking, cooperation, vehicle following, obstacle avoidance, etc., are demonstrated.
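A heavily simplified, elastic-band flavoured sketch of the initial-path relaxation idea: interior waypoints contract toward their neighbours' midpoint while being repelled from an obstacle's influence region. The update rule, gains, and circular obstacle are assumptions for illustration, not the paper's planner.

```python
def relax(path, obstacle, radius, alpha=0.5, beta=0.4, iters=200):
    """Iteratively smooth interior waypoints and push them off an obstacle."""
    path = [list(p) for p in path]
    ox, oy = obstacle
    for _ in range(iters):
        for i in range(1, len(path) - 1):
            x, y = path[i]
            # contraction toward the midpoint of the two neighbours
            mx = (path[i - 1][0] + path[i + 1][0]) / 2
            my = (path[i - 1][1] + path[i + 1][1]) / 2
            x += alpha * (mx - x)
            y += alpha * (my - y)
            # repulsion while inside the obstacle's influence radius
            dx, dy = x - ox, y - oy
            d = (dx * dx + dy * dy) ** 0.5
            if 1e-9 < d < radius:
                push = beta * (radius - d) / d
                x += push * dx
                y += push * dy
            path[i] = [x, y]
    return path

# Straight line from (0, 0) to (1, 0) passing near a circular obstacle.
init = [[i / 10, 0.0] for i in range(11)]
smoothed = relax(init, obstacle=(0.5, -0.05), radius=0.2)
```

The endpoints stay fixed while the interior of the strip deforms away from the obstacle; the paper's contribution is to extend this kind of deformation with cooperation between vehicles rather than a single static obstacle.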

Relevance:

10.00%

Publisher:

Abstract:

Background: Massive Open Online Courses (MOOCs) have become immensely popular in a short span of time. However, there is very little research exploring MOOCs in the discipline of Health and Medicine. This paper aims to fill this void by providing a review of Health and Medicine related MOOCs. Objective: Provide a review of Health and Medicine related MOOCs offered by various MOOC platforms within the year 2013. Analyze and compare the various offerings, their target audience, typical length of a course and credentials offered. Discuss opportunities and challenges presented by MOOCs in the discipline of Health and Medicine. Methods: Health and Medicine related MOOCs were gathered using several methods to ensure the richness and completeness of data. Identified MOOC platform websites were used to gather the lists of offerings. In parallel, these MOOC platforms were contacted to access official data on their offerings. Two MOOC aggregator sites (Class Central and MOOC List) were also consulted to gather data on MOOC offerings. Eligibility criteria were defined to concentrate on the courses that were offered in 2013 and primarily on the subject 'Health and Medicine'. All language translations in this paper were achieved using Google Translate. Results: The search identified 225 courses, of which 98 were eligible for the review (n = 98). 58% (57) of the MOOCs considered were offered on the Coursera platform, and 94% (92) of all the MOOCs were offered in English. 90 MOOCs were offered by universities, and Johns Hopkins University offered the largest number of MOOCs (12). Only three MOOCs were offered by developing countries (China, West Indies, and Saudi Arabia). The duration of MOOCs varied from three weeks to 20 weeks, with an average length of 6.7 weeks. On average, MOOCs expected a participant to work on the material for 4.2 hours a week. Verified Certificates were offered by 14 MOOCs, while three others offered other professional recognition.
Conclusions: The review presents evidence to suggest that MOOCs can be used as a way to provide continuous medical education. It also shows the potential of MOOCs as a means of increasing health literacy among the public.

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: The English Improving Access to Psychological Therapies (IAPT) initiative aims to make evidence-based psychological therapies for depression and anxiety disorder more widely available in the National Health Service (NHS). 32 IAPT services based on a stepped care model were established in the first year of the programme. We report on the reliable recovery rates achieved by patients treated in the services and identify predictors of recovery at patient level, service level, and as a function of compliance with National Institute for Health and Care Excellence (NICE) treatment guidelines. METHOD: Data from 19,395 patients who were clinical cases at intake, attended at least two sessions, had at least two outcome scores and had completed their treatment during the period were analysed. Outcome was assessed with the Patient Health Questionnaire depression scale (PHQ-9) and the Generalised Anxiety Disorder scale (GAD-7). RESULTS: Data completeness was high for a routine cohort study. Over 91% of treated patients had paired (pre-post) outcome scores. Overall, 40.3% of patients were reliably recovered at post-treatment, 63.7% showed reliable improvement and 6.6% showed reliable deterioration. Most patients received treatments that were recommended by NICE. When a treatment not recommended by NICE was provided, recovery rates were reduced. Service characteristics that predicted higher reliable recovery rates were: a high average number of therapy sessions; higher step-up rates among individuals who started with low intensity treatment; larger services; and a larger proportion of experienced staff. CONCLUSIONS: Compliance with the IAPT clinical model is associated with enhanced rates of reliable recovery.

Relevance:

10.00%

Publisher:

Abstract:

Incorporating an emerging therapy as a new randomisation arm in a clinical trial that is open to recruitment would be desirable to researchers, regulators and patients to ensure that the trial remains current, new treatments are evaluated as quickly as possible, and the time and cost for determining optimal therapies is minimised. It may take many years to run a clinical trial from concept to reporting within a rapidly changing drug development environment; hence, in order for trials to be most useful to inform policy and practice, it is advantageous for them to be able to adapt to emerging therapeutic developments. This paper reports a comprehensive literature review on methodologies for, and practical examples of, amending an ongoing clinical trial by adding a new treatment arm. Relevant methodological literature describing statistical considerations required when making this specific type of amendment is identified, and the key statistical concepts when planning the addition of a new treatment arm are extracted, assessed and summarised. For completeness, this includes an assessment of statistical recommendations within general adaptive design guidance documents. Examples of confirmatory ongoing trials designed within the frequentist framework that have added an arm in practice are reported; and the details of the amendment are reviewed. An assessment is made as to how well the relevant statistical considerations were addressed in practice, and the related implications. The literature review confirmed that there is currently no clear methodological guidance on this topic, but that guidance would be advantageous to help this efficient design amendment to be used more frequently and appropriately in practice. 
Eight confirmatory trials were identified to have added a treatment arm, suggesting that trials can benefit from this amendment and that it can be practically feasible; however, the trials were not always able to address the key statistical considerations, often leading to uninterpretable or invalid outcomes. If the statistical concepts identified within this review are considered and addressed during the design of a trial amendment, it is possible to effectively assess a new treatment arm within an ongoing trial without compromising the original trial outcomes.

Relevance:

10.00%

Publisher:

Abstract:

Purpose – The purpose of this paper is to shed light on the practice of incomplete corporate disclosure of quantitative greenhouse gas (GHG) emissions, and to investigate whether external stakeholder pressure influences the existence, and separately the completeness, of voluntary GHG emissions disclosures by 431 European companies. Design/methodology/approach – A classification of reporting completeness is developed with respect to the scope, type and reporting boundary of GHG emissions, based on the guidelines of the GHG Protocol, the Global Reporting Initiative and the Carbon Disclosure Project. Logistic regression analysis is applied to examine whether proxies for exposure to climate change concerns from different stakeholder groups influence the existence and/or completeness of quantitative GHG emissions disclosure. Findings – From 2005 to 2009, on average only 15 percent of companies that disclose GHG emissions report them in a manner that the authors consider complete. Results of regression analyses suggest that external stakeholder pressure is a determinant of the existence but not the completeness of emissions disclosure. Findings are consistent with stakeholder theory arguments that companies respond to external stakeholder pressure to report GHG emissions, but also with legitimacy theory claims that firms can use carbon disclosure, in this case the incomplete reporting of emissions, as a symbolic act to address legitimacy exposures. Practical implications – Bringing corporate GHG emissions disclosure in line with recommended guidelines will require either more direct stakeholder pressure or, perhaps, a mandated disclosure regime. In the meantime, users of the data will need to carefully consider the relevance of the reported data and develop the necessary competencies to detect and control for its incompleteness. A more troubling concern is that stakeholders may instead grow to accept less than complete disclosure.
Originality/value – The paper represents the first large-scale empirical study into the completeness of companies’ disclosure of quantitative GHG emissions and is the first to analyze these disclosures in the context of stakeholder pressure and its relation to legitimation.