903 results for Data-driven knowledge acquisition


Relevance:

100.00%

Publisher:

Abstract:

Continuous advancements in technology have led to increasingly comprehensive and distributed product development processes in pursuit of improved products at reduced costs. Information associated with these products is ever changing, and structured frameworks have become integral to managing such fluid information. Ontologies and the Semantic Web have emerged as key alternatives for capturing product knowledge in both a human-readable and computable manner. The primary focus of this research is to characterize the relationships formed within methodically developed distributed design knowledge frameworks, ultimately to provide pervasive real-time awareness in distributed design processes. Utilizing formal logics in the form of the Semantic Web’s OWL and SWRL, causal relationships are expressed to guide and facilitate knowledge acquisition and to identify contradictions within a knowledge base. To improve efficiency during both the development and operational phases of these “intelligent” frameworks, a semantic relatedness algorithm is designed specifically to identify and rank underlying relationships within product development processes. After reviewing several semantic relatedness measures, three techniques, including a novel meronomic technique, are combined to create AIERO, the Algorithm for Identifying Engineering Relationships in Ontologies. To determine its applicability and accuracy, AIERO was applied to three separate, independently developed ontologies. The results indicate that AIERO consistently returns the relatedness values one would intuitively expect. To assess the effectiveness of AIERO in exposing underlying causal relationships across product development platforms, a case study involving the development of an industry-inspired printed circuit board (PCB) is presented. After instantiating the PCB knowledge base and developing an initial set of rules, FIDOE, the Framework for Intelligent Distributed Ontologies in Engineering, was employed to identify additional causal relationships through extensional relatedness measurements. In a concluding PCB redesign, the resulting “intelligent” framework demonstrated its ability to pass values between instances, identify inconsistencies among instantiated knowledge, and flag conflicting values within product development frameworks. The results highlight how the introduced semantic methods can enhance the knowledge acquisition, knowledge management, and knowledge validation capabilities of traditional knowledge bases.
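
The abstract does not spell out AIERO's three measures or how they are fused; purely as an illustration of the idea, the toy Python sketch below combines a taxonomic path measure with a meronomy-based (part-whole) overlap measure into a weighted relatedness score. All concept names, relation labels, and weights are hypothetical, not taken from the thesis.

```python
from collections import deque

# Toy design ontology: concept -> outgoing "is_a" and "part_of" relations.
ONTOLOGY = {
    "PCB":       {"is_a": ["Assembly"],  "part_of": []},
    "Trace":     {"is_a": ["Conductor"], "part_of": ["PCB"]},
    "Via":       {"is_a": ["Conductor"], "part_of": ["PCB"]},
    "Assembly":  {"is_a": [],            "part_of": []},
    "Conductor": {"is_a": [],            "part_of": []},
}

def _distance(a, b, relation):
    """BFS hop count between concepts over one relation type (undirected)."""
    adj = {c: set() for c in ONTOLOGY}
    for c, rels in ONTOLOGY.items():
        for n in rels[relation]:
            adj[c].add(n)
            adj[n].add(c)
    seen, frontier = {a}, deque([(a, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == b:
            return d
        for nxt in adj[node] - seen:
            seen.add(nxt)
            frontier.append((nxt, d + 1))
    return None

def taxonomic_relatedness(a, b):
    """Shorter is_a paths -> higher relatedness."""
    d = _distance(a, b, "is_a")
    return 0.0 if d is None else 1.0 / (1.0 + d)

def meronomic_relatedness(a, b):
    """Overlap of the wholes two concepts are parts of (Jaccard index)."""
    wa, wb = set(ONTOLOGY[a]["part_of"]), set(ONTOLOGY[b]["part_of"])
    return len(wa & wb) / len(wa | wb) if (wa | wb) else 0.0

def relatedness(a, b, weights=(0.5, 0.5)):
    """Weighted fusion of the individual measures (weights assumed)."""
    return (weights[0] * taxonomic_relatedness(a, b)
            + weights[1] * meronomic_relatedness(a, b))

print(relatedness("Trace", "Via"))  # ~0.67: sibling conductors sharing the PCB whole
```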

Relevance:

100.00%

Publisher:

Abstract:

The field of library assessment continues to grow. The annual Library Assessment Trends Report provides a brief synopsis of the most important trends in library assessment. These brief reports are intended to support the Dean of the Library’s understanding of assessment trends and to provide information that supports data-driven decisions. Additionally, the reports serve as an outreach method that fosters a broader institutional understanding of library assessment. Library assessment supports strategic planning, improved processes, and a greater understanding of our users’ needs.

Relevance:

100.00%

Publisher:

Abstract:

Like other health care processes, referrals are susceptible to breakdowns. Breakdowns in the referral process can lead to poor continuity of care, slow diagnostic processes, delayed and repeated tests, patient and provider dissatisfaction, and loss of confidence in providers. These facts, together with the need for a deeper understanding of referrals in health care, motivated a comprehensive study of referrals. The research began with the real problem and need to understand referral communication as a means to improve patient care. Despite previous efforts to explain referrals and the dynamics and interrelations of the variables that influence them, there is no common, contemporary, and accepted definition of what a referral is in the health care context. The research agenda was guided by the need to explore referrals as an abstract concept by: 1) developing a conceptual definition of referrals, 2) developing a model of referrals, and finally 3) proposing a comprehensive research framework. This dissertation has resulted in a standard conceptual definition of referrals and a model of referrals. In addition, a mixed-methods framework to evaluate referrals was proposed, and finally a data-driven model was developed to predict whether a referral would be approved or denied by a specialty service. The three manuscripts included in this dissertation present the basis for studying and assessing referrals using a common framework, which should enable comparative research to improve referrals while taking into account the context in which they occur.
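
The abstract does not describe the predictive model's features or family. Purely as an illustration of such a data-driven approval/denial predictor, the sketch below trains a logistic regression on synthetic referral data; all feature names and the data-generating process are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical features: urgency score, completeness of clinical information,
# prior visits to the specialty, and referring-provider volume.
X = rng.random((n, 4))
# Synthetic ground truth: complete, urgent referrals are approved more often.
logits = 2.5 * X[:, 1] + 1.5 * X[:, 0] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")
```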

Relevance:

100.00%

Publisher:

Abstract:

Methane is an important greenhouse gas, responsible for about 20% of the warming induced by long-lived greenhouse gases since pre-industrial times. By reacting with hydroxyl radicals, methane reduces the oxidizing capacity of the atmosphere and generates ozone in the troposphere. Although most sources and sinks of methane have been identified, their relative contributions to atmospheric methane levels are highly uncertain. As such, the factors responsible for the observed stabilization of atmospheric methane levels in the early 2000s, and for the renewed rise after 2006, remain unclear. Here, we construct decadal budgets for methane sources and sinks between 1980 and 2010, using a combination of atmospheric measurements and results from chemical transport models, ecosystem models, climate chemistry models and inventories of anthropogenic emissions. The resultant budgets suggest that data-driven approaches and ecosystem models overestimate total natural emissions. We build three contrasting emission scenarios, which differ in fossil fuel and microbial emissions, to explain the decadal variability in atmospheric methane levels detected, here and in previous studies, since 1985. Although uncertainties in emission trends do not allow definitive conclusions to be drawn, we show that the observed stabilization of methane levels between 1999 and 2006 can potentially be explained by decreasing-to-stable fossil fuel emissions, combined with stable-to-increasing microbial emissions. We show that a rise in natural wetland emissions and fossil fuel emissions probably accounts for the renewed increase in global methane levels after 2006, although the relative contribution of these two sources remains uncertain.
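
The budget construction itself is not detailed in the abstract. For orientation, the standard one-box mass balance underlying such source-sink budgets is sketched below; this is a textbook relation, not a formula quoted from this paper, and the parameter values are round numbers.

```latex
% One-box atmospheric mass balance used in methane budget studies.
% C: global mean CH4 mixing ratio (ppb); E: total emissions (Tg CH4/yr);
% m ~ 2.75 Tg CH4 per ppb (mass-to-mixing-ratio conversion);
% tau ~ 9 yr (atmospheric lifetime, dominated by the OH sink).
\frac{\mathrm{d}C}{\mathrm{d}t} \;=\; \frac{E(t)}{m} \;-\; \frac{C}{\tau}
```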

Relevance:

100.00%

Publisher:

Abstract:

Background: Despite the existence of effective solutions to reduce teen birth rates, Texas teen birth rates are among the highest in the nation. School districts can impact youth sexual behavior through the implementation of evidence-based programs (EBPs); however, teen pregnancy prevention is a complex and controversial issue for school districts. Consequently, very few districts in Texas implement EBPs for pregnancy prevention. Additionally, school districts receive little guidance on the process of finding, adopting, and implementing EBPs. Purpose: The purpose of this report is to present the CHoosing And Maintaining Programs for Sex education in Schools (CHAMPSS) Model, a practical and realistic framework to help districts find, adopt, and implement EBPs. Methods: Model development occurred in four phases, using the core processes of Intervention Mapping: 1) knowledge acquisition, 2) knowledge engineering, 3) model representation, and 4) knowledge development. Results: The CHAMPSS Model provides seven steps, tailored for school-based settings, which encompass phases of assessment, preparation, implementation, and maintenance: Prioritize, Assess, Select, Approve, Prepare, Implement, and Maintain. Advocacy and eliciting support for adolescent sexual health are also core elements of the model. Conclusion: This systematic framework may help schools increase the adoption, implementation, and maintenance of EBPs.

Relevance:

100.00%

Publisher:

Abstract:

Introduction: Early warning of future hypoglycemic and hyperglycemic events can improve the safety of type 1 diabetes mellitus (T1DM) patients. The aim of this study is to design and evaluate a hypoglycemia/hyperglycemia early warning system (EWS) for T1DM patients under sensor-augmented pump (SAP) therapy. Methods: The EWS is based on the combination of data-driven online adaptive prediction models and a warning algorithm. Three modeling approaches were investigated: (i) autoregressive (ARX) models, (ii) autoregressive models with an output correction module (cARX), and (iii) recurrent neural network (RNN) models. The warning algorithm post-processes the models' outputs and issues alerts if upcoming hypoglycemic/hyperglycemic events are detected. Fusing the cARX and RNN models, which have complementary prediction performances, resulted in the hybrid autoregressive-with-output-correction/recurrent neural network (cARN)-based EWS. Results: The EWS was evaluated on 23 T1DM patients under SAP therapy. The ARX-based system achieved hypoglycemic (hyperglycemic) event prediction with median accuracy of 100.0% (100.0%), detection time of 10.0 (8.0) min, and 0.7 (0.5) daily false alarms. The respective values for the cARX-based system were 100.0% (100.0%), 17.5 (14.8) min, and 1.5 (1.3), and for the RNN-based system 100.0% (92.0%), 8.4 (7.0) min, and 0.1 (0.2). The hybrid cARN-based EWS achieved the best overall results, with 100.0% (100.0%) prediction accuracy, detection 16.7 (14.7) min in advance, and 0.8 (0.8) daily false alarms. Conclusion: Combined use of the cARX and RNN models outperformed the use of either model alone, achieving accurate and prompt event prediction with few false alarms, thus providing increased safety and comfort.
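
As a rough illustration of the ARX-plus-warning idea (not the published system, whose models adapt online and whose orders, horizons, and thresholds are not given here), the following sketch fits a fixed autoregressive model to a toy CGM trace and raises an alert when the multi-step prediction crosses a glycemic threshold:

```python
import numpy as np

HYPO_MG_DL, HYPER_MG_DL = 70.0, 180.0  # commonly used glycemic thresholds

def fit_arx(glucose, order=6):
    """Ordinary least-squares fit of an autoregressive model on CGM history."""
    X = np.column_stack([glucose[i:len(glucose) - order + i] for i in range(order)])
    y = glucose[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict_ahead(glucose, coeffs, steps=6):
    """Iterate the one-step model (6 steps = 30 min at 5-min CGM sampling)."""
    window = list(glucose[-len(coeffs):])
    for _ in range(steps):
        window.append(float(np.dot(coeffs, window[-len(coeffs):])))
    return window[-1]

def warning(pred):
    if pred < HYPO_MG_DL:
        return "hypoglycemia alert"
    if pred > HYPER_MG_DL:
        return "hyperglycemia alert"
    return None

# Toy CGM trace falling 6 mg/dL per 5-min sample.
cgm = np.array([140, 134, 128, 122, 116, 110, 104, 98, 92, 86, 80, 74], float)
coeffs = fit_arx(cgm)
# Prints "hypoglycemia alert": the downward trend extrapolates below 70 mg/dL.
print(warning(predict_ahead(cgm, coeffs)))
```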

Relevance:

100.00%

Publisher:

Abstract:

Dynamic systems, especially in real-life applications, are often characterized by inter- and intra-individual variability, uncertainties and time-varying components. Physiological systems are probably the most representative example, in which population variability, vital-signal measurement noise and uncertain dynamics render their explicit representation and optimization a rather difficult task. Systems characterized by such challenges often require adaptive algorithmic solutions able to perform an iterative structural and/or parametrical update process towards optimized behavior. Adaptive optimization offers the advantages of (i) individualization through learning of basic system characteristics, (ii) the ability to follow time-varying dynamics, and (iii) low computational cost. In this chapter, the use of online adaptive algorithms is investigated in two basic research areas related to diabetes management: (i) real-time glucose regulation and (ii) real-time prediction of hypo-/hyperglycemia. The applicability of these methods is illustrated through the design and development of an adaptive glucose control algorithm based on reinforcement learning and optimal control, and an adaptive, personalized early-warning system for the recognition of, and alarm generation for, hypo- and hyperglycemic events.
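
As one concrete example of the kind of iterative parametrical update the chapter refers to, the sketch below implements a generic recursive least-squares (RLS) estimator with a forgetting factor, a standard textbook scheme for tracking time-varying dynamics; it is not the chapter's specific reinforcement-learning controller.

```python
import numpy as np

class RLS:
    """Recursive least squares with exponential forgetting."""
    def __init__(self, n_params, forgetting=0.95):
        self.theta = np.zeros(n_params)      # current parameter estimate
        self.P = np.eye(n_params) * 1000.0   # large P = weak prior knowledge
        self.lam = forgetting                # < 1 discounts old samples

    def update(self, x, y):
        x = np.asarray(x, float)
        err = y - self.theta @ x                      # innovation (prediction error)
        k = self.P @ x / (self.lam + x @ self.P @ x)  # update gain
        self.theta = self.theta + k * err
        self.P = (self.P - np.outer(k, x @ self.P)) / self.lam
        return err

# Track a slowly drifting one-parameter system y = a(t) * x + noise.
rls, rng = RLS(1), np.random.default_rng(1)
for t in range(200):
    a = 2.0 + 0.01 * t                        # slowly time-varying true gain
    x = rng.standard_normal(1)
    y = a * x[0] + 0.05 * rng.standard_normal()
    rls.update(x, y)
print(rls.theta)  # approaches the current gain a(199) ~ 4.0, with a small lag
```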

Relevance:

100.00%

Publisher:

Abstract:

The paper argues for a distinction between sensory- and conceptual-information storage in the human information-processing system. Conceptual information is characterized as meaningful and symbolic, while sensory information may exist in modality-bound form. Furthermore, it is assumed that sensory information does not contribute to conscious remembering and can be used only in data-driven process repetitions, which can be accompanied by a kind of vague or intuitive feeling. Accordingly, purely top-down, willingly controlled processing, such as free recall, should not have any access to sensory data. Empirical results from different research areas and from two experiments conducted by the authors are presented in this article to support these theoretical distinctions. The experiments were designed to separate a sensory-motor and a conceptual component in memory for two-digit numbers and two-letter items, when parts of the numbers or items were imagined or drawn on a tablet. The results of free recall and recognition are discussed within a theoretical framework that distinguishes sensory from conceptual information in memory.

Relevance:

100.00%

Publisher:

Abstract:

The ATLAS experiment at the LHC has measured the production cross section of events with two isolated photons in the final state, in proton-proton collisions at √s = 7 TeV. The full data set collected in 2011, corresponding to an integrated luminosity of 4.9 fb⁻¹, is used. The background from hadronic jets and isolated electrons is estimated with data-driven techniques and subtracted. The total cross section, for two isolated photons with transverse energies above 25 GeV and 22 GeV respectively, within the acceptance of the electromagnetic calorimeter (|η| < 1.37 and 1.52 < |η| < 2.37) and with an angular separation ΔR > 0.4, is 44.0 +3.2/−4.2 pb. The differential cross sections as a function of the di-photon invariant mass, transverse momentum, azimuthal separation, and cosine of the polar angle of the highest-transverse-energy photon in the Collins-Soper di-photon rest frame are also measured. The results are compared to the predictions of leading-order parton-shower and next-to-leading-order and next-to-next-to-leading-order parton-level generators.
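
For context, measurements of this kind rest on the generic counting-experiment relation below (standard practice, not a formula quoted from the ATLAS paper): the background-subtracted candidate yield is corrected for selection efficiency and normalized to the integrated luminosity.

```latex
% N_obs: selected diphoton candidates; N_bkg: estimated background
% (jets and electrons, subtracted via data-driven techniques);
% epsilon: selection efficiency; \int L dt: integrated luminosity.
\sigma \;=\; \frac{N_{\mathrm{obs}} - N_{\mathrm{bkg}}}{\varepsilon \int \! L \,\mathrm{d}t}
```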

Relevance:

100.00%

Publisher:

Abstract:

While most healthy elderly people are able to manage their everyday activities, studies have shown that healthy aging involves both stable and declining abilities. For example, there is evidence that semantic memory processes involving controlled retrieval mechanisms decline, whereas the automatic functioning of the semantic network remains intact. In contrast, patients with Alzheimer’s disease (AD) suffer from severe episodic and semantic memory impairments that aggravate their daily functioning. While the hallmark symptom of episodic memory decline in AD is well investigated, the underlying mechanisms of semantic memory deterioration remain unclear. By disentangling the semantic memory impairments in AD, the present thesis aimed to improve early diagnosis and to find a biomarker for dementia. To this end, a study on healthy aging and a study with dementia patients were conducted, investigating automatic and controlled semantic word retrieval. Besides AD patients, a group of participants diagnosed with semantic dementia (SD), who show isolated semantic memory loss, was assessed. Automatic and controlled semantic word retrieval was measured with standard neuropsychological tests and by means of event-related potentials (ERPs) recorded during a semantic priming (SP) paradigm. Special focus was directed to the N400, or N400-LPC (late positive component) complex, an ERP component that is sensitive to semantic word retrieval. In both studies, data-driven topographical analyses were applied. Furthermore, in the patient study, each participant’s baseline cerebral blood flow (CBF) was combined with their N400 topography in order to relate altered functional electrophysiology to the pathophysiology of dementia. Results of the aging study revealed that automatic semantic word retrieval remains stable during healthy aging: the N400-LPC complex showed a topography comparable to that of young participants. Both patient groups showed automatic SP to some extent, but strikingly, their ERP topographies were altered compared with healthy controls. Most importantly, the N400 was identified as a putative marker for dementia. In particular, the degree of topographical N400 similarity was shown to separate healthy elderly people from dementia patients, and the marker was significantly related to baseline CBF reduction in brain areas relevant for semantic word retrieval. Summing up, the first major finding of the present thesis was that all groups showed semantic priming, but that the N400 topography differed significantly between healthy and demented elderly participants. The second major contribution was the identification of N400 similarity as a putative marker for dementia. The present thesis thus adds evidence of preserved automatic processing during healthy aging, and presents a possible marker which might contribute to improved diagnosis, and consequently more effective treatment, of dementia, and which remains to be developed further.
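
The abstract does not specify how topographical N400 similarity was quantified. A common choice in ERP research, assumed in the sketch below, is the spatial correlation between scalp maps after average-referencing and normalization by global field power; the electrode count and example maps are hypothetical.

```python
import numpy as np

def normalize_map(v):
    """Average-reference a scalp map and scale it to unit global field power."""
    v = np.asarray(v, float)
    v = v - v.mean()
    return v / np.sqrt((v ** 2).mean())

def topographic_similarity(map_a, map_b):
    """Spatial correlation between two maps (1.0 = identical topography)."""
    return float((normalize_map(map_a) * normalize_map(map_b)).mean())

rng = np.random.default_rng(2)
template = rng.standard_normal(64)                   # hypothetical healthy-elderly N400 map (64 channels)
patient = template + 1.5 * rng.standard_normal(64)   # hypothetical altered patient map
print(topographic_similarity(template, template))    # 1.0
print(topographic_similarity(template, patient))     # noticeably below 1.0
```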

Relevance:

100.00%

Publisher:

Abstract:

The largest uncertainties in the Standard Model calculation of the anomalous magnetic moment of the muon (g − 2)μ come from hadronic contributions. In particular, it can be expected that in a few years the subleading hadronic light-by-light (HLbL) contribution will dominate the theory uncertainty. We present a dispersive description of the HLbL tensor. This new, model-independent approach opens up an avenue towards a data-driven determination of the HLbL contribution to (g − 2)μ.

Relevance:

100.00%

Publisher:

Abstract:

In this paper we make a further step towards a dispersive description of the hadronic light-by-light (HLbL) tensor, which should ultimately lead to a data-driven evaluation of its contribution to (g − 2)μ. We first provide a Lorentz decomposition of the HLbL tensor performed according to the general recipe by Bardeen, Tung, and Tarrach, generalizing and extending our previous approach, which was constructed in terms of a basis of helicity amplitudes. Such a tensor decomposition has several advantages: the role of gauge invariance and crossing symmetry becomes fully transparent; the scalar coefficient functions are free of kinematic singularities and zeros, and thus fulfill a Mandelstam double-dispersive representation; and the explicit relation for the HLbL contribution to (g − 2)μ in terms of the coefficient functions simplifies substantially. We demonstrate explicitly that the dispersive approach defines both the pion-pole and the pion-loop contribution unambiguously and in a model-independent way. The pion loop, dispersively defined as the pion-box topology, is proven to coincide exactly with the one-loop scalar QED amplitude, multiplied by the appropriate pion vector form factors.
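
Schematically, a Bardeen-Tung-Tarrach decomposition of the kind described takes the form below: the HLbL tensor is expanded over gauge-invariant Lorentz structures multiplying scalar functions free of kinematic singularities and zeros. The structure count of 54 follows the published dispersive analyses; the compact notation is ours.

```latex
% T_i: gauge-invariant, crossing-symmetric Lorentz structures;
% Pi_i: scalar coefficient functions admitting a Mandelstam
% double-dispersive representation in the variables s, t, u.
\Pi^{\mu\nu\lambda\sigma}(q_1,q_2,q_3)
  \;=\; \sum_{i=1}^{54} T_i^{\mu\nu\lambda\sigma}(q_1,q_2,q_3)\,\Pi_i(s,t,u)
```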

Relevance:

100.00%

Publisher:

Abstract:

The largest uncertainties in the Standard Model calculation of the anomalous magnetic moment of the muon (g − 2)μ come from hadronic contributions. In particular, it can be expected that in a few years the subleading hadronic light-by-light (HLbL) contribution will dominate the theory uncertainty. We present a dispersive description of the HLbL tensor, which is based on unitarity, analyticity, crossing symmetry, and gauge invariance. Such a model-independent approach opens up an avenue towards a data-driven determination of the HLbL contribution to (g − 2)μ.

Relevance:

100.00%

Publisher:

Abstract:

It has recently been reported in this journal that local fat depots produce a sizable frequency-dependent signal attenuation in magnetic resonance spectroscopy (MRS) of the brain. If general, this effect would call into question the use of internal reference signals for the quantification of MRS, and the quantitative use of MRS as a whole. Here, an attempt was made to verify this effect and to pinpoint its potential causes by acquiring data with various acquisition settings, including two field strengths, two MR scanners from different vendors, and different water suppression sequences, RF coils, localization sequences, echo times, and lipid/metabolite phantoms. With all settings tested, the reported effect could not be reproduced, and it is concluded that water referencing and quantitative MRS per se remain valid tools under common acquisition conditions.

Relevance:

100.00%

Publisher:

Abstract:

The European Eye Epidemiology (E3) consortium is a recently formed consortium of 29 groups from 12 European countries. It already comprises 21 population-based studies and 20 other studies (case-control, case-only, randomized trials), providing ophthalmological data on approximately 170,000 European participants. The aim of the consortium is to promote and sustain collaboration and the sharing of data and knowledge in the field of ophthalmic epidemiology in Europe, with a particular focus on harmonizing methods for future research; estimating and projecting the frequency and impact of visual outcomes in European populations (including temporal trends and European subregions); identifying risk factors and pathways for eye diseases (lifestyle, vascular and metabolic factors, genetics, epigenetics and biomarkers); and developing and validating prediction models for eye diseases. Coordinating these existing data will allow a detailed study of the risk factors and consequences of eye diseases and visual impairment, including the study of international geographical variation, which is not possible in individual studies. Collaborative work on these existing data is expected to provide additional knowledge, despite the fact that the risk factors, and the methods used to collect them, differ somewhat among the participating studies. Most studies also include biobanks of various biological samples, which will enable the identification of biomarkers to detect and predict the occurrence and progression of eye diseases. This article outlines the rationale and design of the consortium and presents a summary of its methodology.