824 results for decentralised data fusion framework
Abstract:
We present the data assimilation approach, which provides a framework for combining observations and model simulations of the climate system, and has led to a new field of applications for paleoclimatology. The three subsequent articles explore specific applications in more detail.
Abstract:
INTRODUCTION The aims of this study were to compare lateral cephalograms with other radiologic methods for diagnosing suspected fusions of the cervical spine and to validate the assessment of congenital fusions and osteoarthritic changes against the anatomic truth. METHODS Four cadaver heads were selected with fusion of vertebrae C2 and C3 seen on a lateral cephalogram. Multidetector computed tomography (MDCT) and cone-beam computed tomography (CBCT) were performed and assessed by 5 general radiologists and 5 oral radiologists, respectively. Vertebrae C2 and C3 were examined for osseous fusions, and the left and right facet joints were diagnosed for osteoarthritis. Subsequently, the C2 and C3 were macerated and appraised by a pathologist. Descriptive analysis was performed, and interrater agreements between and within the groups were computed. RESULTS All macerated specimens showed osteoarthritic findings of varying degrees, but no congenital bony fusion. All observers agreed that no fusion was found on MDCT or CBCT. They disagreed on the prevalence of osteoarthritic deformities (general radiologists/MDCT, 100%; oral radiologists/CBCT, 93.3%) and joint space assessment in the facet joints (kappa = 0.452). The agreement within the rater groups differed considerably (general radiologists/MDCT, kappa = 0.612; oral radiologists/CBCT, kappa = 0.240). CONCLUSIONS Lateral cephalograms do not provide dependable data to assess the cervical spine for fusions and cause false-positive detections. Both MDCT interpreted by general radiologists and CBCT interpreted by oral radiologists are reliable methods to exclude potential fusions. Degenerative osteoarthritic changes are diagnosed more accurately and consistently by general radiologists evaluating MDCT.
Abstract:
In this paper, we investigate content-centric data transmission in the context of short opportunistic contacts and base our work on an existing content-centric networking architecture. In the case of short interconnection times, file transfers may not be completed and the received information is discarded. Caches in content-centric networks are used for short-term storage and do not guarantee persistence. We implemented a mechanism that extends caching onto persistent storage, enabling the completion of disrupted content transfers. The mechanism has been implemented in the CCNx framework and evaluated on wireless mesh nodes. Our evaluations using multicast and unicast communication show that the implementation can support content transfers in opportunistic environments without significant processing and storage overhead.
Abstract:
It is unknown how receptor binding by the paramyxovirus attachment proteins (HN, H, or G) triggers the fusion (F) protein to fuse with the plasma membrane for cell entry. H-proteins of the morbillivirus genus consist of a stalk ectodomain supporting a cuboidal head; physiological oligomers consist of non-covalent dimer-of-dimers. We report here the successful engineering of intermolecular disulfide bonds within the central region (residues 91-115) of the morbillivirus H-stalk; a sub-domain that also encompasses the putative F-contacting section (residues 111-118). Remarkably, several intersubunit crosslinks abrogated membrane fusion, but bioactivity was restored under reducing conditions. This phenotype extended equally to H proteins derived from virulent and attenuated morbillivirus strains and was independent of the nature of the contacted receptor. Our data reveal that the morbillivirus H-stalk domain is composed of four tightly-packed subunits. Upon receptor binding, these subunits structurally rearrange, possibly inducing conformational changes within the central region of the stalk, which, in turn, promote fusion. Given that the fundamental architecture appears conserved among paramyxovirus attachment protein stalk domains, we predict that these motions may act as a universal paramyxovirus F-triggering mechanism.
Abstract:
Historical, i.e. pre-1957, upper-air data are a valuable source of information on the state of the atmosphere, in some parts of the world dating back to the early 20th century. However, to date, reanalyses have only partially made use of these data, and only of observations made after 1948. Even for the period between 1948 (the starting year of the NCEP/NCAR (National Centers for Environmental Prediction/National Center for Atmospheric Research) reanalysis) and the International Geophysical Year in 1957 (the starting year of the ERA-40 reanalysis), when the global upper-air coverage reached more or less its current status, many observations have not yet been digitised. The Comprehensive Historical Upper-Air Network (CHUAN) has already compiled a large collection of pre-1957 upper-air data. In the framework of the European project ERA-CLIM (European Reanalysis of Global Climate Observations), significant amounts of additional upper-air data have been catalogued (> 1.3 million station days), imaged (> 200 000 images) and digitised (> 700 000 station days) in order to prepare a new input data set for upcoming reanalyses. The records cover large parts of the globe, focussing on regions that have so far been less well covered, such as the tropics, the polar regions and the oceans, as well as on very early upper-air data from Europe and the US. The total number of digitised/inventoried records is 61/101 for moving upper-air platforms (i.e. data from ships, etc.) and 735/1783 for fixed upper-air stations. Here, we give a detailed description of the resulting data set, including the metadata and the quality checking procedures applied. The data will be included in the next version of CHUAN. The data are available at doi:10.1594/PANGAEA.821222.
Abstract:
Fusion toxins used for cancer-related therapy have demonstrated short circulation half-lives, which impairs tumor localization and, hence, efficacy. Here, we demonstrate that the pharmacokinetics of a fusion toxin composed of a designed ankyrin repeat protein (DARPin) and domain I–truncated Pseudomonas Exotoxin A (PE40/ETA″) can be significantly improved by facile bioorthogonal conjugation with a polyethylene glycol (PEG) polymer at a unique position. Fusion of the anti-EpCAM DARPin Ec1 to ETA″ and expression in methionine-auxotrophic E. coli enabled introduction of the nonnatural amino acid azidohomoalanine (Aha) at position 1 for strain-promoted click PEGylation. PEGylated Ec1-ETA″ was characterized by detailed biochemical analysis, and its potential for tumor targeting was assessed using carcinoma cell lines of various histotypes in vitro, and subcutaneous and orthotopic tumor xenografts in vivo. The mild click reaction resulted in a well-defined mono-PEGylated product, which could be readily purified to homogeneity. Despite an increased hydrodynamic radius resulting from the polymer, the fusion toxin demonstrated high EpCAM-binding activity and retained cytotoxicity in the femtomolar range. Pharmacologic analysis in mice unveiled an almost 6-fold increase in the elimination half-life (14 vs. 82 minutes) and a more than 7-fold increase in the area under the curve (AUC) compared with non-PEGylated Ec1-ETA″, which directly translated into increased and longer-lasting effects on established tumor xenografts. Our data underline the great potential of combining the inherent advantages of the DARPin format with bioorthogonal click chemistry to overcome the limitations of engineering fusion toxins with enhanced efficacy for cancer-related therapy.
Abstract:
This paper considers a framework where data from correlated sources are transmitted with the help of network coding in ad hoc network topologies. The correlated data are encoded independently at the sensors, and network coding is employed in the intermediate nodes in order to improve the data delivery performance. In such settings, we focus on the problem of reconstructing the sources at the decoder when perfect decoding is not possible due to losses or bandwidth variations. We show that the source data similarity can be exploited at the decoder to permit decoding based on a novel and simple approximate decoding scheme. We analyze the influence of the network coding parameters, and in particular the size of the finite coding fields, on the decoding performance. We further determine the optimal field size that maximizes the expected decoding performance as a trade-off between the information loss incurred by limiting the resolution of the source data and the error probability in the reconstructed data. Moreover, we show that the performance of the approximate decoding improves when the accuracy of the source model increases, even with simple approximate decoding techniques. We provide illustrative examples showing how the proposed algorithm can be deployed in sensor networks and distributed imaging applications.
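The core idea of approximate decoding can be illustrated with a toy sketch. This is a deliberately simplified illustration under assumed parameters, not the paper's actual algorithm: symbols live in GF(2^m) (where addition is XOR), and a decoder that has lost one source symbol exploits source similarity by choosing, among all pairs consistent with the coded symbol, the pair whose values are closest to each other.

```python
def approximate_decode(coded, q):
    """Recover a plausible pair (a, b) from a single network-coded symbol
    coded = a XOR b over GF(2^m), with q = 2^m.  The similarity prior of
    correlated sources is encoded as: prefer the pair with minimal |a - b|."""
    candidates = ((a, a ^ coded) for a in range(q))
    return min(candidates, key=lambda pair: abs(pair[0] - pair[1]))

# Toy example with q = 16 (GF(2^4)): the true symbols a = 3, b = 5 are lost
# and only their coded combination 3 XOR 5 = 6 reaches the decoder.
a_hat, b_hat = approximate_decode(3 ^ 5, 16)
```

The estimate is always consistent with the coded symbol (a_hat XOR b_hat equals it) and lies near the true pair; a larger field gives finer source resolution but makes a wrong guess costlier, which is exactly the trade-off governing the optimal field size discussed above.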
Abstract:
Traditionally, ontologies describe knowledge representation in a denotational, formalized, and deductive way. In this paper, we propose, in addition, a semiotic, inductive, and approximate approach to ontology creation. We define a conceptual framework, a semantics extraction algorithm, and a first proof of concept applying the algorithm to a small set of Wikipedia documents. Intended as an extension to the prevailing top-down ontologies, we introduce an inductive fuzzy grassroots ontology, which organizes itself organically from existing natural language Web content. Using inductive and approximate reasoning to reflect the natural way in which knowledge is processed, the ontology's bottom-up build process creates emergent semantics learned from the Web. By this means, the ontology acts as a hub for computing with words described in natural language. For Web users, the structural semantics are visualized as inductive fuzzy cognitive maps, allowing an initial form of intelligence amplification. Finally, we present an implementation of our inductive fuzzy grassroots ontology. Thus, this paper contributes an algorithm for the extraction of fuzzy grassroots ontologies from Web data by inductive fuzzy classification.
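A minimal sketch of the inductive, likelihood-based flavour of fuzzy classification described above, assuming a toy tokenized corpus and a single seed term. The function name `fuzzy_memberships` and the conditional-frequency membership degree are illustrative assumptions, not the paper's algorithm, which operates on Wikipedia documents at far larger scale.

```python
from collections import Counter

def fuzzy_memberships(docs, concept_term):
    """Induce a graded (fuzzy) membership in [0, 1] of each term t in the
    concept seeded by concept_term, as the fraction of documents containing
    t that also contain concept_term (a simple likelihood-based degree)."""
    with_concept, total = Counter(), Counter()
    for doc in docs:
        tokens = set(doc)
        for t in tokens:
            total[t] += 1
            if concept_term in tokens:
                with_concept[t] += 1
    return {t: with_concept[t] / total[t] for t in total}

# Toy corpus: membership degrees emerge bottom-up from co-occurrence alone.
docs = [["fusion", "data"], ["fusion", "sensor"], ["data", "web"]]
degrees = fuzzy_memberships(docs, "fusion")
```

The graded output (rather than a crisp in/out decision) is what makes the resulting grassroots ontology fuzzy: "sensor" is fully associated with the seed here, "data" only partially, "web" not at all.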
Abstract:
Online reputation management deals with monitoring and influencing the online record of a person, an organization or a product. The Social Web offers increasingly simple ways to publish and disseminate personal or opinionated information, which can rapidly have a disastrous influence on the online reputation of some of these entities. This dissertation can be split into three parts: In the first part, possible fuzzy clustering applications for the Social Semantic Web are investigated. The second part explores promising Social Semantic Web elements for organizational applications, while in the third part the former two parts are brought together and a fuzzy online reputation analysis framework is introduced and evaluated. The entire PhD thesis is based on literature reviews as well as on argumentative-deductive analyses. The possible applications of Social Semantic Web elements within organizations have been researched using a scenario and an additional case study together with two ancillary case studies, based on qualitative interviews. For the conception and implementation of the online reputation analysis application, a conceptual framework was developed. Employing test installations and prototyping, the essential parts of the framework have been implemented. By following a design science research approach, this PhD has created two artifacts: a framework and a prototype as proof of concept. Both artifacts hinge on two core elements: a (cluster analysis-based) translation of tags used in the Social Web to a computer-understandable fuzzy grassroots ontology for the Semantic Web, and a (Topic Maps-based) knowledge representation system, which facilitates a natural interaction with the fuzzy grassroots ontology. This is beneficial to the identification of unknown but essential Web data that could not be realized through conventional online reputation analysis.
The inherent structure of natural language supports humans not only in communication but also in the perception of the world. Fuzziness is a promising tool for transforming those human perceptions into computer artifacts. Through fuzzy grassroots ontologies, the Social Semantic Web becomes more natural and can thus streamline online reputation management.
Abstract:
This paper introduces a novel vision for further enhanced Internet of Things services. Based on a variety of data (such as location data, ontology-backed search queries, and indoor and outdoor conditions), the Prometheus framework is intended to support users with helpful recommendations and information preceding a search for context-aware data. Drawing on artificial intelligence concepts, Prometheus proposes answers readjusted to the user on the basis of a wide range of conditions. A number of potential Prometheus framework applications are illustrated. Added value and possible future studies are discussed in the conclusion.
Abstract:
People often use tools to search for information. In order to improve the quality of an information search, it is important to understand how internal information, which is stored in the user's mind, and external information, which is represented by the interface of tools, interact with each other. How information is distributed between internal and external representations significantly affects information search performance. However, few studies have examined the relationship between types of interface and types of search task in the context of information search. For a distributed information search task, how data are distributed, represented, and formatted significantly affects user search performance in terms of response time and accuracy. Guided by UFuRT (User, Function, Representation, Task), a human-centered process, I propose a search model and a task taxonomy. The model defines its relationship with other existing information models. The taxonomy clarifies the legitimate operations for each type of search task over relational data. Based on the model and taxonomy, I have also developed interface prototypes for the search tasks of relational data. These prototypes were used for experiments. The experiments described in this study are of a within-subject design with a sample of 24 participants recruited from the graduate schools located in the Texas Medical Center. Participants performed one-dimensional nominal search tasks over nominal, ordinal, and ratio displays, and searched one-dimensional nominal, ordinal, interval, and ratio tasks over table and graph displays. Participants also performed the same task and display combinations for two-dimensional searches. Distributed cognition theory has been adopted as a theoretical framework for analyzing and predicting the search performance over relational data. It has been shown that the representation dimensions and data scales, as well as the search task types, are the main factors in determining search efficiency and effectiveness.
In particular, the more external representations are used, the better the search task performance, and the results suggest that ideal search performance occurs when the question type and the corresponding data scale representation match. The implications of the study lie in contributing to the effective design of search interfaces for relational data, especially laboratory results, which are often used in healthcare activities.
Abstract:
Dramatic advances in developmental sciences are beginning to reveal the biological mechanisms underlying well-established associations between early childhood adversity and lifelong measures of limited productivity and poor health. The case studies by Chilton and Rabinowich provide poignant and compelling qualitative data that support an ecobiodevelopmental approach towards understanding and addressing both the complex causes and intergenerational consequences of food insecurity.
Abstract:
Most statistical analysis, in theory and practice, is concerned with static models: models with a proposed set of parameters whose values are fixed across observational units. Static models implicitly assume that the quantified relationships remain the same across the design space of the data. While this is reasonable under many circumstances, it can be a dangerous assumption when dealing with sequentially ordered data. The mere passage of time always brings fresh considerations, and the interrelationships among parameters, or subsets of parameters, may need to be continually revised. When data are gathered sequentially, dynamic interim monitoring may be useful as new subject-specific parameters are introduced with each new observational unit. Sequential imputation via dynamic hierarchical models is an efficient strategy for handling missing data and analyzing longitudinal studies. Dynamic conditional independence models offer a flexible framework that exploits the Bayesian updating scheme for capturing the evolution of both population and individual effects over time. While static models often describe aggregate information well, they often do not reflect conflicts in the information at the individual level. Dynamic models prove advantageous over static models in capturing both individual and aggregate trends. Computations for such models can be carried out via the Gibbs sampler. An application using a small sample of repeated-measures, normally distributed growth curve data is presented.
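The Bayesian updating scheme behind such dynamic models can be illustrated in its simplest closed-form case, a scalar local-level model in which the parameter is allowed to drift over time. This is a sketch under assumed variances only; the hierarchical models of the thesis have no closed form and require the Gibbs sampler.

```python
def local_level_filter(ys, obs_var=1.0, state_var=0.1, m0=0.0, v0=10.0):
    """Sequential Bayesian updating for theta_t = theta_{t-1} + w_t,
    y_t = theta_t + e_t.  Returns the posterior mean of theta_t after
    each observation (the Kalman/DLM recursions for the scalar case)."""
    m, v = m0, v0
    means = []
    for y in ys:
        v_pred = v + state_var             # evolution step: uncertainty grows
        gain = v_pred / (v_pred + obs_var)  # weight placed on the new datum
        m = m + gain * (y - m)             # revise the estimate
        v = (1 - gain) * v_pred
        means.append(m)
    return means
```

Because `state_var` keeps the posterior variance from collapsing, the gain never shrinks to zero, so the model continually revises its estimate as fresh data arrive, which is precisely the contrast with a static model drawn above.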
An Early-Warning System for Hypo-/Hyperglycemic Events Based on Fusion of Adaptive Prediction Models
Abstract:
Introduction: Early warning of future hypoglycemic and hyperglycemic events can improve the safety of type 1 diabetes mellitus (T1DM) patients. The aim of this study is to design and evaluate a hypoglycemia/hyperglycemia early warning system (EWS) for T1DM patients under sensor-augmented pump (SAP) therapy. Methods: The EWS is based on the combination of data-driven online adaptive prediction models and a warning algorithm. Three modeling approaches have been investigated: (i) autoregressive (ARX) models, (ii) autoregressive models with an output correction module (cARX), and (iii) recurrent neural network (RNN) models. The warning algorithm performs postprocessing of the models' outputs and issues alerts if upcoming hypoglycemic/hyperglycemic events are detected. Fusion of the cARX and RNN models, due to their complementary prediction performances, resulted in the hybrid autoregressive with an output correction module/recurrent neural network (cARN)-based EWS. Results: The EWS was evaluated on 23 T1DM patients under SAP therapy. The ARX-based system achieved hypoglycemic (hyperglycemic) event prediction with median values of accuracy of 100.0% (100.0%), detection time of 10.0 (8.0) min, and daily false alarms of 0.7 (0.5). The respective values for the cARX-based system were 100.0% (100.0%), 17.5 (14.8) min, and 1.5 (1.3), and, for the RNN-based system, 100.0% (92.0%), 8.4 (7.0) min, and 0.1 (0.2). The hybrid cARN-based EWS outperformed both, with 100.0% (100.0%) prediction accuracy, detection 16.7 (14.7) min in advance, and 0.8 (0.8) daily false alarms. Conclusion: Combined use of the cARX and RNN models for the development of an EWS outperformed the single use of each model, achieving accurate and prompt event prediction with few false alarms, thus providing increased safety and comfort.
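A minimal sketch of the two ingredients named above, an online adaptive autoregressive predictor plus a threshold-based warning rule. It assumes LMS-style weight adaptation and the conventional 70/180 mg/dL glycemic thresholds; the published EWS uses richer ARX/cARX/RNN models and more elaborate postprocessing, and the class and function names here are illustrative.

```python
class AdaptiveARPredictor:
    """Online adaptive AR predictor: weights are nudged toward the latest
    prediction error (LMS rule), so the model tracks the patient over time."""

    def __init__(self, order=2, mu=1e-5):
        self.w = [0.0] * order   # AR coefficients, adapted online
        self.mu = mu             # adaptation step size
        self.hist = []           # most recent glucose readings, newest first

    def update(self, y):
        if len(self.hist) == len(self.w):
            pred = sum(w * x for w, x in zip(self.w, self.hist))
            err = y - pred
            self.w = [w + self.mu * err * x for w, x in zip(self.w, self.hist)]
        self.hist = ([y] + self.hist)[:len(self.w)]

    def predict(self):
        return sum(w * x for w, x in zip(self.w, self.hist))

def alarm(pred_mg_dl, hypo=70.0, hyper=180.0):
    """Warning rule: flag a predicted excursion past either threshold."""
    if pred_mg_dl <= hypo:
        return "hypo"
    if pred_mg_dl >= hyper:
        return "hyper"
    return None
```

Running the predictor over a continuous glucose-monitor stream and passing each multi-step-ahead prediction through `alarm` yields the alert sequence; fusing the alerts of complementary models, as the cARN system does, is then a matter of combining their outputs.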
Abstract:
Studies of the spin and parity quantum numbers of the Higgs boson are presented, based on proton-proton collision data collected by the ATLAS experiment at the LHC. The Standard Model spin-parity J^P = 0^+ hypothesis is compared with alternative hypotheses using the Higgs boson decays H → γγ, H → ZZ* → 4ℓ and H → WW* → ℓνℓν, as well as the combination of these channels. The analysed dataset corresponds to an integrated luminosity of 20.7 fb^−1 collected at a centre-of-mass energy of √s = 8 TeV. For the H → ZZ* → 4ℓ decay mode the dataset corresponding to an integrated luminosity of 4.6 fb^−1 collected at √s = 7 TeV is included. The data are compatible with the Standard Model J^P = 0^+ quantum numbers for the Higgs boson, whereas all alternative hypotheses studied in this Letter, namely some specific J^P = 0^−, 1^+, 1^−, 2^+ models, are excluded at confidence levels above 97.8%. This exclusion holds independently of the assumptions on the coupling strengths to the Standard Model particles and, in the case of the J^P = 2^+ model, of the relative fractions of gluon-fusion and quark-antiquark production of the spin-2 particle. The data thus provide evidence for the spin-0 nature of the Higgs boson, with positive parity being strongly preferred.