Abstract:
Investigation of the epigenome of sporadic pituitary tumours is providing a more detailed understanding of the aberrations that characterise this tumour type. Early studies, in this and other tumour types, adopted candidate-gene approaches to characterise CpG island methylation as a mechanism responsible for, or associated with, gene silencing. More recently, however, investigators have adopted approaches that do not require a priori knowledge of the gene and transcript, for example differential display techniques and genome-wide, array-based approaches, to 'uncover' or 'unmask' silenced genes. Furthermore, through the use of chromatin immunoprecipitation as a selective enrichment technique, we are now beginning to identify modifications that target the underlying histones themselves and that have roles in gene-silencing events. Collectively, these studies provide convincing evidence that changes to the tumour epigenome are not simply epiphenomena but have functional consequences in the context of pituitary tumour evolution. Our ability to perform these types of studies has been, and is increasingly, reliant upon technological advances in the genomics and epigenomics arena. In this context, more recent advances and developing technologies, in particular next-generation or flow-cell re-sequencing techniques, offer exciting opportunities for future studies of this tumour type.
Abstract:
Background: While the burden of chronic cough in children has been documented, etiologic factors across multiple settings and ages have not been described. In children with chronic cough, we aimed (1) to evaluate the burden and etiologies using a standard management pathway in various settings, and (2) to determine the influence of age and setting on disease burden and etiologies, and of etiology on disease burden. We hypothesized that the etiology, but not the burden, of chronic cough in children depends on the clinical setting and age. Methods: From five major hospitals and three rural-remote clinics, 346 children (mean age 4.5 years) newly referred with chronic cough (> 4 weeks) were prospectively managed in accordance with an evidence-based cough algorithm. We used a priori definitions, timeframes, and validated outcome measures: the parent-proxy cough-specific quality of life (PC-QOL), a generic QOL instrument (pediatric quality of life inventory, PedsQL), and a cough diary. Results: The burden of chronic cough (PC-QOL, cough duration) differed significantly between settings (P = .014 and P = .021, respectively) but was not influenced by age or etiology. PC-QOL and PedsQL did not correlate with age. The frequency of etiologies differed significantly between settings (P = .0001); 17.6% of children had a serious underlying diagnosis (bronchiectasis, aspiration, cystic fibrosis). Except for protracted bacterial bronchitis, the frequency of other common diagnoses (asthma, bronchiectasis, resolved without a specific diagnosis) was similar across age categories. Conclusions: The high burden of cough is independent of children's age and etiology but dependent on clinical setting. Irrespective of setting and age, children with chronic cough should be carefully evaluated and child-specific evidence-based algorithms used.
Abstract:
This paper investigates how Enterprise Architecture (EA) evolves in response to emerging trends. It specifically explores how EA integrates Service-oriented Architecture (SOA). Archer's Morphogenetic theory is used as an analytical approach to distinguish the architectural conditions under which SOA is introduced, to study the relationships between these conditions and SOA introduction, and to reflect on the EA evolution (elaboration) that then takes place. The paper focuses on the reasons why EA evolution does or does not take place, and on the architectural changes that can result from SOA integration. The research builds on sound theoretical foundations to discuss EA evolution in a field that often lacks a solid theoretical groundwork. Specifically, it proposes that critical realism, through morphogenetic theory, can provide a useful theoretical foundation for studying EA evolution. The initial results of a literature review (an a priori model) were extended through explorative interviews. The findings of this study are threefold. First, there are five distinct levels of EA-SOA integration outcomes. Second, a mature EA, a flexible and well-defined EA framework, and comprehensive EA objectives improve integration outcomes. Third, the analytical separation afforded by Archer's theory helps explain how these different integration outcomes are generated.
Abstract:
As Business Process Management (BPM) evolves and organisations become more process oriented, the need for Expertise in BPM amongst practitioners has increased. Proactively managing Expertise in BPM is essential to unlocking the potential of BPM as a management paradigm and a source of competitive advantage. Whilst the BPM community pays great attention to the technological aspects of BPM, relatively little research has addressed the expertise aspect of BPM. There is a substantial body of knowledge on expertise itself; however, at the time of writing no common framework exists that describes the fundamental attributes characterising Expertise in the illustrative context of BPM. Understanding and characterising Expertise in the context of BPM has direct implications for BPM itself, as a key strategic component and success factor, as well as for those involved in BPM. Expertise in the context of BPM needs to be characterised in order to understand it and to manage it proactively. Given the relative infancy of research into Expertise in the context of BPM, an exploration of its relevance and importance was considered essential to ensure that the study itself was of value to the BPM field. The aims of this research are, firstly, to address the two research questions 'why is expertise important and relevant in the context of BPM?' and 'how can Expertise in the context of BPM be characterised?', and secondly, to develop a comprehensive and validated a priori model characterising Expertise in the illustrative context of BPM. The study is theory-guided. It was undertaken via an extensive literature review across relevant literature domains and a revelatory case study utilising several methods: informal discussions, an open-ended survey, and participant observation. An a priori model was then developed, comprising several Constructs and Sub-constructs and several overall aspects of Expertise in BPM. This was followed by interviews conducted in the validation phase of the revelatory case study. The primary contributions of this study are to the fields of expertise, BPM, and research. Contributions to the field of expertise include a comprehensive review of the expertise literature in general, a synthesised critique of expertise research, a characterisation of expertise in an illustrative context as a system, and a comprehensive narrative of the dynamics and interrelationships of the core attributes characterising expertise. Contributions to the field of BPM include, firstly, the establishment of the importance of understanding Expertise in the context of BPM, including a comprehensive overview of its role, relevance, and importance, through an explanation of the effect of Expertise in BPM; and secondly, a model characterising Expertise in the context of BPM, which BPM practitioners can use to clearly articulate and illuminate the state of Expertise in BPM in organisations. Contributions to the field of research include an extended view of Systems Theory, reflecting the importance of the system context in systems thinking, and a narrative on ontological innovation through the positioning of ontology as a meta-model of Expertise in the context of BPM.
Abstract:
Transit passenger market segmentation enables transit operators to target different classes of transit users for targeted surveys and for various operational and strategic planning improvements. However, existing market segmentation studies have generally relied on passenger surveys, which have various limitations. Smart card (SC) data from an automated fare collection system capture the multiday travel patterns of transit passengers and can be used to segment them into identifiable types with similar behaviours and needs. This paper proposes a comprehensive methodology for passenger segmentation using SC data alone. After reconstructing travel itineraries from SC transactions, the paper adopts the density-based spatial clustering of applications with noise (DBSCAN) algorithm to mine the travel pattern of each SC user. An a priori market segmentation approach then segments transit passengers into four identifiable types. The proposed methodology helps transit operators understand their passengers and provide them with targeted information and services.
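A minimal sketch of the clustering step described above, assuming scikit-learn and a toy trip table (the features, scaling, and DBSCAN parameters here are illustrative assumptions, not the paper's settings):

```python
# Illustrative sketch (not the authors' code): clustering one smart-card
# user's trip records with DBSCAN to expose habitual travel patterns.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# Hypothetical trip records for one card: [boarding hour, stop x (km), stop y (km)]
trips = np.array([
    [8.0, 1.2, 3.4], [8.1, 1.2, 3.4], [7.9, 1.3, 3.4],    # weekday morning habit
    [17.5, 6.8, 0.9], [17.6, 6.8, 1.0], [17.4, 6.7, 0.9],  # weekday evening habit
    [13.0, 9.5, 7.2],                                       # one-off trip -> noise
])

X = StandardScaler().fit_transform(trips)          # put hour and location on one scale
labels = DBSCAN(eps=0.6, min_samples=3).fit_predict(X)

n_patterns = len(set(labels) - {-1})               # clusters = recurring travel patterns
share_regular = np.mean(labels != -1)              # fraction of trips in some pattern
print(n_patterns, share_regular)                   # here: 2 patterns, ~86% regular trips
```

Cards whose trips fall mostly into one or two dense clusters would map to regular-commuter segments, while cards dominated by noise points suggest occasional riders; an a priori segmentation can then assign each card to a type based on such pattern statistics.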
Abstract:
Circular shortest paths are a powerful methodology for image segmentation. The circularity condition ensures that the contour found by the algorithm is closed, a natural requirement for regular objects. Several implementations have been proposed that either achieve closure with high probability or guarantee it strictly, at a mild cost in computational efficiency. Circularity can be viewed as a priori information that helps recover the correct object contour. Our "observation" is that circularity is only one among many possible constraints that can be imposed on shortest paths to guide them toward a desirable solution. In this contribution, we illustrate this opportunity with a volume constraint, but the concept is generally applicable. We also describe several adornments to the circular shortest path algorithm that have proved useful in applications. © 2011 IEEE.
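To make the idea concrete, here is a brute-force sketch of a circular shortest path on an unwrapped polar cost image (rows = radius, columns = angle); it is the simple O(N²M) baseline, not any of the optimized implementations discussed in the paper:

```python
# Closure is enforced by requiring the path to end on the row it started
# from; smoothness is enforced by limiting steps to |delta row| <= 1.
import numpy as np

def circular_shortest_path(cost):
    n_rows, n_cols = cost.shape
    best_total, best_path = np.inf, None
    for start in range(n_rows):                      # try every closure row
        dp = np.full((n_rows, n_cols), np.inf)
        parent = np.zeros((n_rows, n_cols), dtype=int)
        dp[start, 0] = cost[start, 0]
        for c in range(1, n_cols):
            for r in range(n_rows):
                for pr in (r - 1, r, r + 1):         # move at most one row per column
                    if 0 <= pr < n_rows and dp[pr, c - 1] + cost[r, c] < dp[r, c]:
                        dp[r, c] = dp[pr, c - 1] + cost[r, c]
                        parent[r, c] = pr
        if dp[start, -1] < best_total:               # closed: same first and last row
            best_total = dp[start, -1]
            path, r = [], start
            for c in range(n_cols - 1, -1, -1):      # backtrack through parents
                path.append(r)
                r = parent[r, c]
            best_path = path[::-1]
    return best_total, best_path

cost = np.random.rand(8, 16)                         # toy polar cost image
total, path = circular_shortest_path(cost)
print(total, path)                                   # path[0] == path[-1]: closed contour
```

A volume constraint of the kind explored in the paper could, in principle, be imposed by augmenting the dynamic-programming state with the area enclosed so far, at a corresponding cost in time and memory.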
Abstract:
People with schizophrenia perform poorly when recognising facial expressions of emotion, particularly negative emotions such as fear. This finding has been taken as evidence of a “negative emotion specific deficit”, putatively associated with dysfunction in the limbic system, particularly the amygdala. An alternative explanation is that the greater difficulty in recognising negative emotions may reflect a priori differences in task difficulty. The present study uses a differential deficit design to test this argument. Facial emotion recognition accuracy for seven emotion categories was compared across three groups: 18 schizophrenia patients and a group of healthy age- and gender-matched controls, who viewed identical sets of stimuli, and a second group of 18 age- and gender-matched controls, who viewed a degraded version of the same stimuli. The level of stimulus degradation was chosen to equate overall accuracy with that of the schizophrenia patients. Both the schizophrenia group and the degraded-image control group showed reduced overall recognition accuracy, and reduced accuracy for fearful and sad facial stimuli, compared with the intact-image control group. There were no differences in recognition accuracy for any emotion category between the schizophrenia group and the degraded-image control group. These findings argue against a negative emotion specific deficit in schizophrenia.
Abstract:
We study the rates of growth of the regret in online convex optimization. First, we show that a simple extension of the algorithm of Hazan et al. eliminates the need for a priori knowledge of a lower bound on the second derivatives of the observed functions. We then provide an algorithm, Adaptive Online Gradient Descent, which interpolates between the results of Zinkevich for linear functions and of Hazan et al. for strongly convex functions, achieving intermediate regret rates between $\sqrt{T}$ and $\log T$. Furthermore, we show strong optimality of the algorithm. Finally, we extend our results to general norms.
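A simplified sketch of the adaptive idea, assuming a Euclidean ball as the feasible set (this illustrates the interpolation behaviour; it is not a line-by-line rendering of the paper's algorithm):

```python
# Each round reports a gradient g_t and a curvature (strong-convexity)
# estimate h_t >= 0; no a priori lower bound on h_t is needed.
import numpy as np

class AdaptiveOGD:
    def __init__(self, dim, radius=1.0):
        self.x = np.zeros(dim)       # current iterate
        self.h_sum = 0.0             # accumulated curvature H_{1:t}
        self.t = 0
        self.radius = radius         # radius of the feasible ball

    def step(self, grad, curvature):
        self.t += 1
        self.h_sum += curvature
        # With accumulated curvature, eta_t ~ 1/H_{1:t} (log T regret regime);
        # with none, the sqrt(t) term gives eta_t ~ 1/sqrt(t) (sqrt(T) regime).
        eta = 1.0 / (self.h_sum + np.sqrt(self.t))
        self.x -= eta * grad
        norm = np.linalg.norm(self.x)            # project back onto the ball
        if norm > self.radius:
            self.x *= self.radius / norm
        return self.x

# Usage on f_t(x) = (h/2)*||x - c_t||^2: grad = h*(x - c_t), curvature = h.
opt = AdaptiveOGD(dim=2)
for c in np.random.randn(100, 2):
    x = opt.x
    opt.step(grad=1.0 * (x - c), curvature=1.0)
```

When the reported curvatures are bounded away from zero, the accumulated sum dominates and the step sizes decay like 1/t, recovering the logarithmic regime; when they are all zero, the square-root fallback reproduces Zinkevich's rate.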
Abstract:
The changes to the R&D tax concession in 2011 were touted as the biggest reform to business innovation policy in over a decade. Three years later, as part of the 2014 Federal Budget, a reduction in the concession rates was announced. While the most recent of the proposed changes are designed to align with the reduction in the company tax rate, the Australian Federal Government also indicated that the revenue gained from reducing the incentive scheme would be redirected to repairing the Budget and funding policy priorities. The consequence is that the R&D concessions, while designed to encourage innovation, are clearly tied to the tax system. The first part of this article therefore considers whether the R&D concession is a changing tax for changing times. Building on part one, the article also addresses a second question: what's tax got to do with it? To answer this question, the article argues that, rather than ever amounting to substantive tax reform, the constantly changing measures simply alter the criteria and means by which companies become eligible for a Federal Government subsidy for qualifying R&D activity, whatever that amount may be. It further argues that, when considered as part of the broader innovation agenda, all R&D tax concessions should be evaluated as a government spending program in the same way as any direct spending on innovation. Viewed this way, the tax regime is arguably merely the administrative policy instrument by which the subsidy is delivered. However, this may not be the best way to distribute those funds fairly, efficiently, and without distortion, while at the same time maintaining adequate government control and accountability. Finally, in answering the question of 'what's tax got to do with it?', the article concludes that the answer is: very little.
Abstract:
Most real-life data analysis problems are difficult to solve with exact methods, owing to the size of the datasets and the nature of the underlying mechanisms of the system under investigation. As datasets grow ever larger, finding the balance between the quality of the approximation and the computing time of the heuristic becomes non-trivial. One solution is to use parallel methods, exploiting the increased computational power to explore the solution space more deeply in a similar time. It is, however, difficult to estimate a priori whether parallelisation will deliver the expected improvement. In this paper we consider a well-known method, genetic algorithms, and evaluate the behaviour of the classic and parallel implementations on two distinct problem types.
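As a concrete illustration, the sketch below shows the simplest parallelisation of a classic generational GA: only the fitness evaluation, usually the dominant cost on large datasets, is farmed out to worker processes (the objective, encoding, and parameters are toy assumptions):

```python
import random
from multiprocessing import Pool

def fitness(bits):                        # toy objective: maximise the number of 1s
    return sum(bits)

def evolve(pop_size=64, length=40, gens=50, workers=4):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    with Pool(workers) as pool:
        for _ in range(gens):
            scores = pool.map(fitness, pop)        # parallel evaluation step
            ranked = [ind for _, ind in
                      sorted(zip(scores, pop), key=lambda p: -p[0])]
            parents = ranked[: pop_size // 2]      # truncation selection
            children = []
            while len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, length)  # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < 0.1:          # bit-flip mutation
                    i = random.randrange(length)
                    child[i] ^= 1
                children.append(child)
            pop = children
    return max(pop, key=fitness)

if __name__ == "__main__":                 # guard required by multiprocessing
    best = evolve()
    print(sum(best))
```

The other common parallel design, the island model, evolves several subpopulations independently with occasional migration, trading communication for a broader exploration of the solution space in similar wall-clock time.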
Abstract:
In How to Do Things with Words, Austin (1975) described marriages, sentencings and ship launchings as prototypes of performative utterance. What’s the appropriate speech act for launching an academic journal? First editions of journals tend to take a field as formed a priori, as having “come of age”, and state good intentions to capture its best or most innovative work.
Abstract:
Driving while distracted on the approach to a signalized intersection is relatively risky, as potential vehicular conflicts and the resulting angle collisions tend to be more severe than at other locations. Given the prevalence and importance of this scenario, the objective of this study was to examine the decisions and actions of distracted drivers at the onset of yellow lights. Driving simulator data were obtained from a sample of 69 drivers under baseline and handheld cell phone conditions at the University of Iowa – National Advanced Driving Simulator. Explanatory variables included age, gender, cell phone use, distance to stop-line, and speed. Although there is extensive research on drivers’ responses to yellow traffic signals, these examinations have taken a traditional regression-based approach, which does not necessarily reveal the underlying relations and patterns in the sampled data. In this paper, we exploit the benefits of both classical statistical inference and data mining techniques to identify a priori relationships among main effects, non-linearities, and interaction effects. Results suggest that the probability of yellow light running increases with driving speed at the onset of yellow. Both young (18–25 years) and middle-aged (30–45 years) drivers showed a reduced propensity for yellow light running while distracted across the entire speed range, suggesting possible risk compensation during this critical driving situation. In contrast, the propensity for yellow light running among distracted older (50–60 years) drivers, both male and female, was significantly higher. Driving experience, captured by age, interacts with distraction: the combination of slower physiological responses and distraction makes this situation particularly risky for older drivers.
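A hypothetical sketch of that two-pronged analysis on synthetic data (the variable names, effect sizes, and model choices are assumptions for illustration, not the study's actual model):

```python
# Data-mining pass surfaces candidate non-linearities and interactions;
# a classical logistic regression then tests them formally.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "speed": rng.uniform(30, 70, n),          # km/h at yellow onset
    "age": rng.choice([21, 38, 55], n),       # young / middle-aged / older
    "phone": rng.integers(0, 2, n),           # 1 = handheld phone condition
})
# Synthetic outcome: running propensity rises with speed, and the phone
# penalty is worst for older drivers (an age-by-distraction interaction).
logit = -6 + 0.1 * df.speed + 0.04 * df.phone * (df.age - 21)
df["ran_yellow"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Mining pass: a shallow tree's splits hint at where interactions live.
tree = DecisionTreeClassifier(max_depth=3).fit(
    df[["speed", "age", "phone"]], df["ran_yellow"])
print(dict(zip(["speed", "age", "phone"], tree.feature_importances_)))

# Inference pass: test the suggested age-by-phone interaction formally.
X = sm.add_constant(pd.DataFrame({
    "speed": df.speed, "phone": df.phone, "age_x_phone": df.age * df.phone}))
print(sm.Logit(df["ran_yellow"], X).fit(disp=0).summary())
```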
Abstract:
Modularity has been suggested to be connected to evolvability because a higher degree of independence among parts allows them to evolve as separate units. Recently, Escoufier's RV coefficient has been proposed as a measure of the degree of integration between modules in multivariate morphometric datasets. However, it has been shown, using randomly simulated datasets, that the value of the RV coefficient depends on sample size. Moreover, there is as yet no statistical test for the difference in the RV coefficient between a priori defined groups of observations. Here, we (1) use a rarefaction analysis to show that the value of the RV coefficient depends on sample size in real geometric morphometric datasets as well; (2) propose a permutation procedure to test for differences in the RV coefficient between a priori defined groups of observations; (3) show, through simulations, that this permutation procedure has an appropriate Type I error rate; (4) suggest that a rarefaction procedure can be used to obtain sample-size-corrected values of the RV coefficient; and (5) propose a nearest-neighbour procedure for studying the variation of modularity across geographic space. The approaches outlined here, readily extendable to non-morphometric datasets, allow study of the variation in the degree of integration between a priori defined modules. A Java application that performs the proposed test through a graphical user interface has also been developed and is available on the Morphometrics at Stony Brook web page (http://life.bio.sunysb.edu/morph/).
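A minimal sketch of the proposed permutation test, assuming NumPy and two equal-sized groups (equal sizes sidestep the sample-size dependence of the RV coefficient noted above; with unequal groups, a rarefaction correction would be needed first):

```python
# Illustrative sketch (not the authors' Java application): compare the RV
# coefficient between two blocks of variables across two a priori groups
# of specimens by shuffling group labels.
import numpy as np

def rv_coefficient(X, Y):
    """Escoufier's RV between two centred data blocks (rows = specimens)."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Sxy = Xc.T @ Yc
    Sxx, Syy = Xc.T @ Xc, Yc.T @ Yc
    return np.trace(Sxy @ Sxy.T) / np.sqrt(np.trace(Sxx @ Sxx) * np.trace(Syy @ Syy))

def rv_group_difference_test(X, Y, groups, n_perm=999, seed=0):
    """P-value for the observed |RV(group 0) - RV(group 1)| under label shuffling."""
    rng = np.random.default_rng(seed)
    def diff(g):
        a, b = g == 0, g == 1
        return abs(rv_coefficient(X[a], Y[a]) - rv_coefficient(X[b], Y[b]))
    observed = diff(groups)
    perms = np.array([diff(rng.permutation(groups)) for _ in range(n_perm)])
    return observed, (1 + np.sum(perms >= observed)) / (n_perm + 1)

# Two hypothetical modules (blocks of coordinates) for 60 specimens, two groups.
rng = np.random.default_rng(1)
X, Y = rng.normal(size=(60, 6)), rng.normal(size=(60, 8))
groups = np.repeat([0, 1], 30)
obs, p = rv_group_difference_test(X, Y, groups)
print(obs, p)
```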
Abstract:
Background: Bloodstream infections resulting from intravascular catheters (catheter-BSI) in critical care increase patients' length of stay, morbidity, and mortality, and the management of these infections and their complications has been estimated to cost the NHS £19.1M–36.2M annually. Catheter-BSI are thought to be largely preventable using educational interventions, but guidance as to which types of intervention might be most clinically effective is lacking. Objective: To assess the effectiveness and cost-effectiveness of educational interventions for preventing catheter-BSI in critical care units in England. Data sources: Sixteen electronic bibliographic databases – including MEDLINE, MEDLINE In-Process & Other Non-Indexed Citations, Cumulative Index to Nursing and Allied Health Literature (CINAHL), NHS Economic Evaluation Database (NHS EED), EMBASE and The Cochrane Library databases – were searched from database inception to February 2011, with searches updated in March 2012. Bibliographies of systematic reviews and related papers were screened, and experts were contacted to identify any additional references. Review methods: References were screened independently by two reviewers using a priori selection criteria. A descriptive map was created to summarise the characteristics of relevant studies. Further selection criteria, developed in consultation with the project Advisory Group, were used to prioritise a subset of studies relevant to NHS practice and policy for systematic review. A decision-analytic economic model was developed to investigate the cost-effectiveness of educational interventions for preventing catheter-BSI. Results: Seventy-four studies were included in the descriptive map, of which 24 were prioritised for systematic review. Studies have predominantly been conducted in the USA, using single-cohort before-and-after study designs. Diverse types of educational intervention appear effective at reducing the incidence density of catheter-BSI (risk ratios statistically significantly < 1.0), but single lectures were not effective. The economic model showed that implementing an educational intervention in critical care units in England would be cost-effective and potentially cost-saving, with incremental cost-effectiveness ratios under worst-case sensitivity analyses of < £5000/quality-adjusted life-year. Limitations: Low-quality primary studies cannot definitively prove that the planned interventions were responsible for observed changes in catheter-BSI incidence. Poor reporting gave unclear estimates of the risk of bias. Some model parameters had to be sourced from settings outside the UK owing to a lack of UK data. Conclusions: Our results suggest that it would be cost-effective, and may be cost-saving, for the NHS to implement educational interventions in critical care units. However, more robust primary studies are needed to exclude the possible influence of secular trends on the observed reductions in catheter-BSI.