976 results for Unified Model Reference


Relevance:

80.00%

Publisher:

Abstract:

With technological progress, embedded systems employing adaptive techniques are used ever more frequently. One such technique is Variable Structure Model-Reference Adaptive Control (VS-MRAC). Implementing this technique in an embedded system requires accounting for the sampling period, which, if neglected, can adversely affect system performance and even drive the system to instability. This work proposes a stability analysis of a discrete-time VS-MRAC for SISO linear time-invariant plants with relative degree one. The aim is to analyse the influence of the sampling period on system performance and the relation of this period to chattering and system instability.
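
The relation between sampling period and chattering can be pictured with a toy example. The sketch below is a minimal illustration, not the thesis's analysis: a first-order plant is driven by a relay-type control term held constant over each sampling period, and the plant and gain values are arbitrary assumptions chosen only to make the effect visible.

    import numpy as np

    def chattering_amplitude(T, t_end=2.0, a=1.0, b=1.0, k=3.0):
        """Euler simulation of dx/dt = a*x + b*u with a relay term u = -k*sign(x)
        held constant over each sampling period T (zero-order hold).
        Returns the residual oscillation amplitude once the state has settled."""
        n = int(t_end / T)
        x, xs = 0.5, []                      # arbitrary initial condition
        for _ in range(n):
            u = -k * np.sign(x)              # relay control, evaluated once per sample
            x = x + T * (a * x + b * u)      # state update over one sampling period
            xs.append(x)
        return max(abs(v) for v in xs[n // 2:])

    for T in (0.1, 0.01, 0.001):
        print(f"sampling period {T:6.3f} s -> chattering amplitude ~ {chattering_amplitude(T):.4f}")

With these (arbitrary) values the amplitude shrinks roughly in proportion to T, the kind of sampling-period dependence the abstract refers to: too coarse a period lets the relay term overshoot the sliding set at every step.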

Relevance:

80.00%

Publisher:

Abstract:

Survival models deal with the modeling of time-to-event data. However, in some situations part of the population may no longer be subject to the event; models that take this fact into account are called cure rate models. There are few studies about hypothesis tests in cure rate models. Recently a new test statistic, the gradient statistic, has been proposed. It shares the same asymptotic properties as the classic large-sample tests: the likelihood ratio, score and Wald tests. Some simulation studies have been carried out to explore the behavior of the gradient statistic in finite samples and compare it with the classic statistics in different models. The main objective of this work is to study and compare the performance of the gradient test and the likelihood ratio test in cure rate models. We first describe the models and present the main asymptotic properties of the tests. We perform a simulation study based on the promotion time model with Weibull distribution to assess the performance of the tests in finite samples. An application is presented to illustrate the studied concepts.
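
For reference, the gradient statistic and the promotion time cure model with Weibull latent times can be written in their standard forms; the notation below is generic textbook notation, not symbols taken from the thesis.

    S_T = U(\tilde{\theta})^{\top} (\hat{\theta} - \tilde{\theta}),

where U(\theta) is the score function, \hat{\theta} the unrestricted maximum likelihood estimate and \tilde{\theta} the estimate restricted by the null hypothesis; like the likelihood ratio, score and Wald statistics, S_T is asymptotically chi-squared with degrees of freedom equal to the number of restrictions. In the promotion time model the population survival function is

    S_{\mathrm{pop}}(t) = \exp\{-\theta F(t)\}, \qquad F(t) = 1 - \exp\{-(t/\lambda)^{\alpha}\},

so the cure fraction is \exp(-\theta), and hypotheses about \theta, \lambda or \alpha can be tested with either statistic.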

Relevance:

80.00%

Publisher:

Abstract:

Seyfert galaxies are the closest active galactic nuclei. As such, we can use them to test the physical properties of the entire class of objects. To investigate their general properties, I took advantage of different methods of data analysis. In particular I used three samples of objects that, despite frequent overlaps, were chosen to best tackle different topics: the heterogeneous BeppoSAX sample was optimized to test the average hard X-ray (E above 10 keV) properties of nearby Seyfert galaxies; the X-CfA sample was optimized to compare the properties of low-luminosity sources to those of higher luminosity and, thus, it was also used to test emission-mechanism models; finally, the XMM–Newton sample was extracted from the X-CfA sample so as to ensure a truly unbiased and well-defined sample of objects with which to define the average properties of Seyfert galaxies. Taking advantage of the broad-band coverage of the BeppoSAX MECS and PDS instruments (between ~2-100 keV), I infer the average X-ray spectral properties of nearby Seyfert galaxies, in particular the photon index (~1.8), the high-energy cut-off (~290 keV), and the relative amount of cold reflection (~1.0). Moreover, the unified scheme for active galactic nuclei was positively tested. The distributions of the isotropic indicators used here (photon index, relative amount of reflection, high-energy cut-off and narrow FeK energy centroid) are similar in type I and type II objects, while the absorbing column and the iron line equivalent width differ significantly between the two classes of sources, with type II objects displaying larger absorbing columns. Taking advantage of the XMM–Newton and X-CfA samples I also deduced from measurements that 30 to 50% of type II Seyfert galaxies are Compton thick. Confirming previous results, the narrow FeK line in Seyfert 2 galaxies is consistent with being produced in the same matter responsible for the observed obscuration. These results support the basic picture of the unified model. Moreover, the presence of an X-ray Baldwin effect in type I sources has been measured using for the first time the 20-100 keV luminosity (EW proportional to L(20-100)^(−0.22±0.05)). This finding suggests that the torus covering factor may be a function of source luminosity, thereby suggesting a refinement of the baseline version of the unified model itself. Using the BeppoSAX sample, a possible correlation between the photon index and the amount of cold reflection has also been recorded in both type I and type II sources. At first glance this confirms thermal Comptonization as the most likely origin of the high-energy emission of active galactic nuclei. This relation, in fact, naturally emerges if one supposes that the accretion disk penetrates the central corona at a depth that depends on the accretion rate (Merloni et al. 2006): the higher-accreting systems host disks down to the last stable orbit, while the lower-accreting systems host truncated disks. On the contrary, the study of the well-defined X-CfA sample of Seyfert galaxies has shown that the intrinsic X-ray luminosity of nearby Seyfert galaxies can span values between 10^(38-43) erg s^−1, i.e. covering a huge range of accretion rates. The less efficient systems have been supposed to host ADAFs without an accretion disk.
However, the study of the X-CfA sample has also shown the existence of correlations between optical emission lines and X-ray luminosity over the entire range of L_X covered by the sample. These relations are similar to the ones obtained when only high-luminosity objects are considered. Thus the emission mechanism must be similar in luminous and weak systems. A possible scenario to reconcile these somewhat opposite indications is to assume that the ADAF and the two-phase mechanism co-exist, with different relative importance moving from low- to high-accretion systems (as suggested by the Gamma vs. R relation). The present data require that no abrupt transition between the two regimes is present. As mentioned above, the possible presence of an accretion disk has been tested using samples of nearby Seyfert galaxies. Here, to investigate in depth the flow patterns close to super-massive black holes, three case-study objects for which sufficient count statistics are available have been analysed using deep X-ray observations taken with XMM–Newton. The results show that the accretion flow can differ significantly between objects when it is analysed in the appropriate detail. For instance, the accretion disk is well established down to the last stable orbit in a Kerr system for IRAS 13197-1627, where strong light-bending effects have been measured. The accretion disk seems to form spiralling in the inner ~10-30 gravitational radii in NGC 3783, where time-dependent and recursive modulation has been measured both in the continuum emission and in the broad emission-line component. Finally, the accretion disk seems to be only weakly detectable in Mrk 509, with its weak broad emission-line component. In addition, blueshifted resonant absorption lines have been detected in all three objects. This seems to demonstrate that, around super-massive black holes, there is matter which is not confined in the accretion disk and moves along the line of sight with velocities as large as v~0.01-0.4c (where c is the speed of light). Whether this matter forms winds or blobs is still a matter of debate, together with the assessment of the real statistical significance of the measured absorption lines. Nonetheless, if confirmed, these phenomena are of outstanding interest because they offer new potential probes of the dynamics of the innermost regions of accretion flows, a way to tackle the formation of ejecta/jets, and constraints on the rate of kinetic energy injected by AGNs into the ISM and IGM. Future high-energy missions (such as the planned Simbol-X and IXO) will likely allow an exciting step forward in our understanding of the flow dynamics around black holes and the formation of the highest-velocity outflows.

Relevance:

80.00%

Publisher:

Abstract:

Electrical Power Assisted Steering (EPAS) systems will likely be used in future automotive power steering. The sinusoidal brushless DC (BLDC) motor has been identified as one of the most suitable actuators for the EPAS application. Motor characteristic variations, indicated by variations of motor parameters such as the coil resistance and the torque constant, directly introduce inaccuracies into a control scheme based on nominal parameter values, and the performance of the whole system suffers. The motor controller must therefore address the time-varying motor characteristics and maintain performance over its long service life. In this dissertation, four adaptive control algorithms for BLDC motors are explored. The first algorithm employs a simplified inverse dq-coordinate dynamics controller and solves for the parameter errors using q-axis current (iq) feedback from several past sampling steps; the controller parameter values are updated by slow integration of the parameter errors. Improvements such as dynamic approximation, speed approximation and Gram-Schmidt orthonormalization are discussed for better estimation performance. The second algorithm uses both the d-axis current (id) and the q-axis current (iq) feedback for parameter estimation, since id always accompanies iq. Stochastic conditions for unbiased estimation are shown through Monte Carlo simulations. Study of the first two adaptive algorithms indicates that better parameter estimation performance can be achieved by using more historical data. The Extended Kalman Filter (EKF), a representative recursive estimation algorithm, is then investigated for the BLDC motor application. Simulation results validate the superior estimation performance of the EKF; however, its computational complexity and stability may be barriers to practical implementation. The fourth algorithm is a model reference adaptive controller (MRAC) that uses the desired motor characteristics as a reference model; its stability is guaranteed by Lyapunov's direct method. Simulation shows superior performance in terms of convergence speed and current tracking. These algorithms are compared in closed-loop simulation with an EPAS model and a motor speed control application. The MRAC is identified as the most promising candidate controller because of its combination of superior performance and low computational complexity. A BLDC motor controller developed with the dq-coordinate model cannot be implemented without several supplemental functions, such as the coordinate transformation and a DC-to-AC current encoding scheme. A quasi-physical BLDC motor model is developed to study the practical implementation issues of the dq-coordinate control strategy, such as initialization and rotor angle transducer resolution. This model can also be beneficial during first-stage development of automotive BLDC motor applications.
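
As a rough illustration of the MRAC idea singled out above (a minimal sketch under invented assumptions, not the dissertation's controller: a generic first-order plant stands in for one motor current loop, and all numerical values are arbitrary), a Lyapunov-based direct MRAC can be simulated as follows.

    import numpy as np

    # Direct MRAC for a scalar plant  dx/dt = a*x + b*u  (a, b unknown, sign(b) known),
    # tracking the reference model    dxm/dt = -am*xm + am*r.
    a, b = -2.0, 3.0            # "true" plant parameters, hidden from the controller
    am, gamma = 10.0, 50.0      # reference-model pole and adaptation gain
    dt, t_end = 1e-4, 1.0

    x = xm = 0.0
    kx = kr = 0.0               # adaptive feedback and feedforward gains
    for step in range(int(t_end / dt)):
        t = step * dt
        r = 1.0 if t < 0.5 else -1.0        # square-wave reference
        u = kx * x + kr * r                 # control law
        e = x - xm                          # tracking error
        kx -= gamma * e * x * dt            # Lyapunov-based adaptation laws
        kr -= gamma * e * r * dt            # (sign(b) assumed positive)
        x  += (a * x + b * u) * dt          # Euler step of the plant
        xm += (-am * xm + am * r) * dt      # Euler step of the reference model

    print(f"final tracking error: {x - xm:+.5f}")

The adaptation laws drive the adjustable gains toward the matching values kx* = (-am - a)/b and kr* = am/b without requiring knowledge of a and b, which is the property that makes MRAC attractive when the coil resistance and torque constant drift over the motor's service life.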

Relevance:

80.00%

Publisher:

Abstract:

Currently more than half of Electronic Health Record (EHR) projects fail. Most of these failures are due not to flawed technology but to the lack of systematic consideration of human issues. Among the barriers to EHR adoption, function mismatching among users, activities, and systems is a major area that has not been systematically addressed from a human-centered perspective. A theoretical framework called the Functional Framework was developed for identifying and reducing functional discrepancies among users, activities, and systems. The Functional Framework is composed of three models: the User Model, the Designer Model, and the Activity Model. The User Model was developed by conducting a survey (N = 32) that identified the functions needed and desired from the user's perspective. The Designer Model was developed by conducting a systematic review of an Electronic Dental Record (EDR) and its functions. The Activity Model was developed using an ethnographic method called shadowing, in which EDR users (5 dentists, 5 dental assistants, 5 administrative personnel) were followed quietly and observed during their activities. These three models were combined into a unified model, from which the work domain ontology was developed by asking users, in a survey, to rate the 190 functions of the unified model along the dimensions of frequency and criticality. The functional discrepancies, as indicated by the regions of the Venn diagram formed by the three models, were consistent with the survey results, especially with user satisfaction. The survey for the Functional Framework indicated a preference for one system over the other (R = 0.895). The results of this project show that the Functional Framework provides a systematic method for identifying, evaluating, and reducing functional discrepancies among users, systems, and activities. Limitations and the generalizability of the Functional Framework are discussed.
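
The functional discrepancies correspond to regions of a three-set Venn diagram over the function inventories of the three models. A minimal sketch of how such regions can be computed follows; the function names are invented placeholders, not items from the study's 190-function list.

    # Hypothetical function inventories for the three models (placeholder names).
    user_model     = {"view chart", "enter treatment note", "e-prescribe", "schedule recall"}
    designer_model = {"view chart", "enter treatment note", "billing codes", "schedule recall"}
    activity_model = {"view chart", "enter treatment note", "schedule recall", "phone reminders"}

    # Venn-diagram regions that flag functional discrepancies.
    matched            = user_model & designer_model & activity_model
    wanted_but_missing = user_model - designer_model                  # needed by users, absent from the system
    unused_features    = designer_model - (user_model | activity_model)
    workarounds        = activity_model - designer_model              # done in practice outside the system

    print("matched functions:         ", sorted(matched))
    print("wanted but missing:        ", sorted(wanted_but_missing))
    print("implemented but unused:    ", sorted(unused_features))
    print("activities outside system: ", sorted(workarounds))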

Relevance:

80.00%

Publisher:

Abstract:

The β2 adrenergic receptor (β2AR) regulates smooth muscle relaxation in the vasculature and airways. Long- and short-acting β-agonists (LABAs/SABAs) are widely used in the treatment of chronic obstructive pulmonary disease (COPD) and asthma. Despite their widespread clinical use, the dominant β2AR regulatory pathways that are stimulated during therapy and bring about tachyphylaxis, the loss of drug effect, are not well understood. An understanding of how the β2AR responds to various β-agonists is thus crucial to their rational use. Towards that end we have developed deterministic models that explore the mechanism of drug-induced β2AR regulation. These mathematical models fall into three classes: (i) six quantitative models of SABA-induced, G protein coupled receptor kinase (GRK)-mediated β2AR regulation; (ii) three phenomenological models of salmeterol (a LABA)-induced, GRK-mediated β2AR regulation; and (iii) one semi-quantitative, unified model of SABA-induced GRK-, protein kinase A (PKA)-, and phosphodiesterase (PDE)-mediated regulation of β2AR signalling. The various models were constrained with all or some of the following experimental data: (i) GRK-mediated β2AR phosphorylation in response to various LABAs/SABAs; (ii) dephosphorylation of the GRK site on the β2AR; (iii) β2AR internalisation; (iv) β2AR recycling; (v) β2AR desensitisation; (vi) β2AR resensitisation; (vii) PKA-mediated β2AR phosphorylation in response to a SABA; and (viii) LABA/SABA-induced cAMP profiles with and without PDE inhibitors. The models of GRK-mediated β2AR regulation show that plasma-membrane dephosphorylation and recycling of the phosphorylated β2AR are required to reconcile the models with the measured dephosphorylation kinetics. We further used a consensus model to predict the consequences of rapid pulsatile agonist stimulation and found that, although resensitisation was rapid, the β2AR system retained the memory of prior stimuli and desensitised much more rapidly and strongly in response to subsequent stimuli. This could explain tachyphylaxis of SABAs over repeated use in rescue therapy of asthma patients. The LABA models show that the long action of salmeterol can be explained by the decreased stability of the arrestin/β2AR/salmeterol complex, which could explain the long action of β-agonists used in maintenance therapy of asthma patients. Our consensus model of PKA/PDE/GRK-mediated β2AR regulation is being used to identify the dominant β2AR desensitisation pathways under different therapeutic regimens in human airway cells. In summary, our models represent a significant advance towards understanding agonist-specific β2AR regulation and will aid a more rational use of β2AR agonists in the treatment of asthma.
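
As a hedged illustration of the kind of deterministic model described above (a toy three-state sketch, not any of the authors' published models; all rate constants are arbitrary), receptor desensitisation, internalisation and recycling can be written as ordinary differential equations and integrated numerically:

    from scipy.integrate import solve_ivp

    # Toy three-state model: R (surface receptor), Rp (GRK-phosphorylated), Ri (internalised).
    # Agonist drives phosphorylation; Rp is dephosphorylated at the membrane or internalised,
    # and internalised receptor recycles back to the surface. Rates are arbitrary (per minute).
    k_phos, k_dephos, k_int, k_rec = 0.05, 0.01, 0.02, 0.005

    def rhs(t, y, agonist):
        R, Rp, Ri = y
        dR  = -k_phos * agonist * R + k_dephos * Rp + k_rec * Ri
        dRp =  k_phos * agonist * R - k_dephos * Rp - k_int * Rp
        dRi =  k_int * Rp - k_rec * Ri
        return [dR, dRp, dRi]

    # 60 minutes of continuous agonist exposure, starting from 100% surface receptor.
    sol = solve_ivp(rhs, (0.0, 60.0), [1.0, 0.0, 0.0], args=(1.0,))
    R, Rp, Ri = sol.y[:, -1]
    print(f"after 60 min: surface {R:.2f}, phosphorylated {Rp:.2f}, internalised {Ri:.2f}")

The published models constrain many more states and rate constants against the experimental data listed above; this sketch only shows the structure of such a system of ODEs.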

Relevance:

80.00%

Publisher:

Abstract:

Eukaryotic mRNAs with premature translation termination codons (PTCs) are recognized and degraded through a process termed nonsense-mediated mRNA decay (NMD). To get more insight into the recruitment of the central NMD factor UPF1 to target mRNAs, we mapped transcriptome-wide UPF1-binding sites by individual-nucleotide-resolution UV cross-linking and immunoprecipitation (iCLIP) in human cells and found that UPF1 preferentially associated with 3′ UTRs in translationally active cells but underwent significant redistribution toward coding regions (CDS) upon translation inhibition. This indicates that UPF1 binds RNA before translation and gets displaced from the CDS by translating ribosomes. Corroborated by RNA immunoprecipitation and by UPF1 cross-linking to long noncoding RNAs, our evidence for translation-independent UPF1-RNA interaction suggests that the triggering of NMD occurs after UPF1 binding to mRNA, presumably through activation of RNA-bound UPF1 by aberrant translation termination. Unlike in yeast, in mammalian cells NMD has been reported to be restricted to cap-binding complex (CBC)–bound mRNAs during the pioneer round of translation. However, we compared decay kinetics of two NMD reporter genes in mRNA fractions bound to either CBC or the eukaryotic initiation factor 4E (eIF4E) in human cells and show that NMD destabilizes eIF4E-bound transcripts as efficiently as those associated with CBC. These results corroborate an emerging unified model for NMD substrate recognition, according to which NMD can ensue at every aberrant translation termination event.

Relevance:

80.00%

Publisher:

Abstract:

The nonsense-mediated mRNA decay (NMD) pathway is best known as a translation-coupled quality control system that recognizes and degrades aberrant mRNAs with ORF-truncating premature termination codons (PTCs), but a more general role of NMD in posttranscriptional regulation of gene expression is indicated by transcriptome-wide mRNA profilings that identified a plethora of physiological mRNAs as NMD substrates. We try to decipher the mechanism of mRNA targeting to the NMD pathway in human cells. Recruitment of the conserved RNA-binding helicase UPF1 to target mRNAs has been reported to occur through interaction with release factors at terminating ribosomes, but evidence for translation-independent interaction of UPF1 with the 3' untranslated region (UTR) of mRNAs has also been reported. We determined UPF1 binding sites transcriptome-wide by individual-nucleotide resolution UV crosslinking and immunoprecipitation (iCLIP) in human cells, either untreated or after inhibiting translation. We detected a strongly enriched association of UPF1 with 3' UTRs in undisturbed, translationally active cells. After translation inhibition, a significant increase in UPF1 binding to coding sequences (CDS) was observed, indicating that UPF1 binds RNA before translation and is displaced from the CDS by translating ribosomes. This suggests that the decision to trigger NMD occurs after association of UPF1 with mRNA, presumably through activation of RNA-bound UPF1 by aberrant translation termination. In a second recent study, we revisited the reported restriction of NMD in mammals to the 'pioneer round of translation', i.e. to cap-binding complex (CBC)-bound mRNAs. A limitation of mammalian NMD to early rounds of translation would indicate a mechanistic difference, unexpected from an evolutionary perspective, from NMD in yeast and plants, where PTC-containing mRNAs seem to be available to NMD at each round of translation. In contrast to previous reports, our comparison of the decay kinetics of two NMD reporter genes in mRNA fractions bound to either CBC or the eukaryotic initiation factor 4E (eIF4E) in human cells revealed that NMD destabilizes eIF4E-bound transcripts as efficiently as those associated with CBC. These results corroborate an emerging unified model for NMD substrate recognition, according to which NMD can ensue at every aberrant translation termination event.

Relevance:

80.00%

Publisher:

Abstract:

Eukaryotic mRNAs with premature translation-termination codons (PTCs) are recognized and degraded by a process referred to as nonsense-mediated mRNA decay (NMD). The evolutionary conservation of the core NMD factors UPF1, UPF2 and UPF3 would imply a similar basic mechanism of PTC recognition in all eukaryotes. However, unlike NMD in yeast, which targets PTC-containing mRNAs irrespective of whether their 5' cap is bound by the cap-binding complex (CBC) or by the eukaryotic initiation factor 4E (eIF4E), mammalian NMD has been claimed to be restricted to CBC-bound mRNAs during the pioneer round of translation. In our recent study we compared the decay kinetics of two NMD reporter systems in mRNA fractions bound to either CBC or eIF4E in human cells. Our findings reveal that NMD destabilizes eIF4E-bound transcripts as efficiently as those associated with CBC. These results corroborate an emerging unified model for NMD substrate recognition, according to which NMD can ensue at every aberrant translation termination event. Additionally, our results indicate that the closed-loop structure of mRNA forms only after the replacement of CBC with eIF4E at the 5' cap.
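
Comparing decay kinetics in practice amounts to fitting first-order decay curves to time-course measurements and comparing the resulting half-lives. The sketch below is a generic illustration with invented data; it does not reproduce the study's measurements or analysis pipeline.

    import numpy as np
    from scipy.optimize import curve_fit

    def decay(t, a, k):
        """First-order mRNA decay: level = a * exp(-k * t)."""
        return a * np.exp(-k * t)

    # Invented time courses (minutes after transcription shut-off) for a PTC reporter
    # in the two cap-binding fractions; values are illustrative only.
    t = np.array([0, 15, 30, 60, 120], dtype=float)
    cbc_fraction   = np.array([1.00, 0.55, 0.31, 0.10, 0.01])
    eif4e_fraction = np.array([1.00, 0.58, 0.33, 0.11, 0.02])

    for name, y in (("CBC-bound", cbc_fraction), ("eIF4E-bound", eif4e_fraction)):
        (a, k), _ = curve_fit(decay, t, y, p0=(1.0, 0.05))
        print(f"{name:12s} half-life ~ {np.log(2) / k:5.1f} min")

Similar fitted half-lives in the two fractions would correspond to the conclusion above that NMD acts on eIF4E-bound transcripts as efficiently as on CBC-bound ones.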

Relevance:

80.00%

Publisher:

Abstract:

Immunoassays are essential in the workup of patients with suspected heparin-induced thrombocytopenia. However, the diagnostic accuracy is uncertain with regard to different classes of assays, antibody specificities, thresholds, test variations, and manufacturers. We aimed to assess diagnostic accuracy measures of available immunoassays and to explore sources of heterogeneity. We performed comprehensive literature searches and applied strict inclusion criteria. Finally, 49 publications comprising 128 test evaluations in 15 199 patients were included in the analysis. Methodological quality according to the revised tool for quality assessment of diagnostic accuracy studies was moderate. Diagnostic accuracy measures were calculated with the unified model (comprising a bivariate random-effects model and a hierarchical summary receiver operating characteristic model). Important differences were observed between classes of immunoassays, types of antibody specificity, thresholds, application of a confirmation step, and manufacturers. A combination of high sensitivity (>95%) and high specificity (>90%) was found in only 5 tests: the polyspecific enzyme-linked immunosorbent assay (ELISA) with intermediate threshold (Genetic Testing Institute, Asserachrom), the particle gel immunoassay, the lateral flow immunoassay, the polyspecific chemiluminescent immunoassay (CLIA) with a high threshold, and the immunoglobulin G (IgG)-specific CLIA with a low threshold. Borderline results (sensitivity, 99.6%; specificity, 89.9%) were observed for the IgG-specific Genetic Testing Institute ELISA with low threshold. Diagnostic accuracy appears to be inadequate in tests with high thresholds (ELISA; IgG-specific CLIA), a combination of IgG specificity and intermediate thresholds (ELISA, CLIA), a high-dose heparin confirmation step (ELISA), and the particle immunofiltration assay. When making treatment decisions, clinicians should be aware of the diagnostic characteristics of the tests used, and it is recommended that they estimate posttest probabilities from likelihood ratios together with pretest probabilities obtained from clinical scoring tools.
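
For reference, the bivariate random-effects part of such a unified model, and the post-test probability calculation recommended above, take the following standard forms (standard meta-analysis notation, not symbols taken from the paper):

    \begin{pmatrix} \mathrm{logit}(Se_i) \\ \mathrm{logit}(Sp_i) \end{pmatrix}
    \sim N\!\left( \begin{pmatrix} \mu_{Se} \\ \mu_{Sp} \end{pmatrix},
    \begin{pmatrix} \sigma_{Se}^{2} & \rho\,\sigma_{Se}\sigma_{Sp} \\ \rho\,\sigma_{Se}\sigma_{Sp} & \sigma_{Sp}^{2} \end{pmatrix} \right),

where Se_i and Sp_i are the sensitivity and specificity of study i, and the summary estimates are obtained by back-transforming \mu_{Se} and \mu_{Sp}. At the bedside,

    LR^{+} = \frac{Se}{1 - Sp}, \qquad LR^{-} = \frac{1 - Se}{Sp}, \qquad
    \text{post-test odds} = \text{pretest odds} \times LR,

with the pretest odds taken from a clinical scoring tool and converted back to a post-test probability.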

Relevance:

80.00%

Publisher:

Abstract:

Complex diseases such as cancer result from multiple genetic changes and environmental exposures. Owing to the rapid development of genotyping and sequencing technologies, we are now able to assess the causal effects of many genetic and environmental factors more accurately. Genome-wide association studies have been able to localize many causal genetic variants predisposing to certain diseases. However, these studies explain only a small portion of disease heritability. More advanced statistical models are urgently needed to identify and characterize additional genetic and environmental factors and their interactions, which will enable us to better understand the causes of complex diseases. In the past decade, thanks to increasing computational capabilities and novel statistical developments, Bayesian methods have been widely applied in genetics and genomics research, demonstrating superiority over some standard approaches in certain research areas. Gene-environment and gene-gene interaction studies are among the areas where Bayesian methods can fully exert their advantages. This dissertation focuses on developing new Bayesian statistical methods for data analysis with complex gene-environment and gene-gene interactions, as well as extending some existing methods for gene-environment interactions to other related areas. It includes three parts: (1) deriving a Bayesian variable selection framework for hierarchical gene-environment and gene-gene interactions; (2) developing Bayesian Natural and Orthogonal Interaction (NOIA) models for gene-environment interactions; and (3) extending the applications of two Bayesian statistical methods, originally developed for gene-environment interaction studies, to related problems such as adaptive borrowing of historical data. We propose a Bayesian hierarchical mixture model framework that allows us to investigate genetic and environmental effects, gene-gene interactions (epistasis) and gene-environment interactions in the same model. It is well known that, in many practical situations, there exists a natural hierarchical structure between the main effects and interactions in a linear model. Here we propose a model that incorporates this hierarchical structure into the Bayesian mixture model, so that irrelevant interaction effects can be removed more efficiently, resulting in more robust, parsimonious and powerful models. We evaluate both the 'strong hierarchical' and 'weak hierarchical' models, which require that both, or at least one, of the main effects of interacting factors be present for the interaction to be included in the model. Extensive simulation results show that the proposed strong and weak hierarchical mixture models control the proportion of false positive discoveries and yield a powerful approach to identifying the predisposing main effects and interactions in studies with complex gene-environment and gene-gene interactions. We also compare these two models with an 'independent' model that does not impose the hierarchical constraint and observe their superior performance in most of the considered situations. The proposed models are applied in real data analyses of gene-environment interactions in lung cancer and cutaneous melanoma case-control studies. Bayesian statistical models have the advantage of being able to incorporate useful prior information into the modeling process.
Moreover, the Bayesian mixture model outperforms the multivariate logistic model in terms of parameter estimation and variable selection in most cases. Our proposed models impose hierarchical constraints that further improve the Bayesian mixture model by reducing the proportion of false positive findings among the identified interactions and by successfully identifying the reported associations. This is practically appealing for studies investigating causal factors among a moderate number of candidate genetic and environmental factors together with a relatively large number of interactions. The natural and orthogonal interaction (NOIA) models of genetic effects were previously developed to provide an analysis framework in which the estimates of effects for a quantitative trait are statistically orthogonal regardless of whether Hardy-Weinberg Equilibrium (HWE) holds within loci. Ma et al. (2012) recently developed a NOIA model for gene-environment interaction studies and showed the advantages of using this model for detecting the true main effects and interactions, compared with the usual functional model. In this project, we propose a novel Bayesian statistical model that combines the Bayesian hierarchical mixture model with the NOIA statistical model and the usual functional model. The proposed Bayesian NOIA model demonstrates more power at detecting the non-null effects, with higher marginal posterior probabilities. Also, we review two Bayesian statistical models (a Bayesian empirical shrinkage-type estimator and Bayesian model averaging) that were developed for gene-environment interaction studies. Inspired by these Bayesian models, we develop two novel statistical methods that can handle related problems such as borrowing data from historical studies. The proposed methods are analogous to the methods for gene-environment interactions in that they balance statistical efficiency and bias within a unified model. Through extensive simulation studies, we compare the operating characteristics of the proposed models with existing models, including the hierarchical meta-analysis model. The results show that the proposed approaches borrow the historical data adaptively, in a data-driven way. These novel models may have a broad range of statistical applications in both genetic/genomic and clinical studies.
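
As a hedged sketch of how such a hierarchical constraint can be encoded (standard spike-and-slab notation chosen here; the dissertation's exact prior specification may differ), let \gamma_j and \gamma_k be inclusion indicators for the main effects of factors j and k, and \gamma_{jk} the indicator for their interaction:

    \Pr(\gamma_{jk} = 1 \mid \gamma_j, \gamma_k) =
    \begin{cases}
      \pi_{\mathrm{int}}, & \text{if } \gamma_j = \gamma_k = 1 \\
      0, & \text{otherwise},
    \end{cases}

for the strong hierarchy, while the weak hierarchy replaces the condition by \gamma_j + \gamma_k \ge 1. Each included coefficient then follows a slab prior, e.g. \beta \mid \gamma = 1 \sim N(0, \tau^{2}), and is set to zero otherwise, so that interactions whose main effects are not supported by the data are pruned from the model.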

Relevance:

80.00%

Publisher:

Abstract:

This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers to reuse services and contents to build new web applications. However, developers face a problem of information flood when searching for appropriate services or resources for their combinations. To contribute to overcoming this problem, a framework is defined for the discovery of services and resources. In this framework, discovery is performed at three levels: content, service, and agent. The content level involves the information available in web resources. The web follows the Representational State Transfer (REST) architectural style, in which resources are returned as representations from servers to clients. These representations usually employ the HyperText Markup Language (HTML), which, along with Cascading Style Sheets (CSS), describes the markup employed to render representations in a web browser. Although the use of Semantic Web standards such as the Resource Description Framework (RDF) makes this architecture suitable for automatic processes to use the information present in web resources, these standards are too often not employed, so automation must rely on processing HTML. This process, often referred to as Screen Scraping in the literature, constitutes content discovery in the proposed framework. At this level, discovery rules indicate how the different pieces of data in resources' representations are mapped onto semantic entities. By processing discovery rules on web resources, semantically described contents can be obtained from them. The service level involves the operations that can be performed on the web. The current web allows users to perform different tasks such as search, blogging, e-commerce, or social networking. To describe the possible services in RESTful architectures, a high-level, feature-oriented service methodology is proposed at this level. This lightweight description framework allows defining service discovery rules to identify operations in interactions with REST resources. Discovery is thus performed by applying discovery rules to contents discovered in REST interactions, in a novel process called service probing. Service discovery can also be performed by modelling services as contents, i.e., by retrieving Application Programming Interface (API) documentation and API listings in service registries such as ProgrammableWeb. For this, a unified model for composable components in Mashup-Driven Development (MDD) has been defined after the analysis of service repositories from the web. The agent level involves the orchestration of the discovery of services and contents. At this level, agent rules allow specifying behaviours for crawling and executing services, which results in the fulfilment of a high-level goal. Agent rules are plans that allow introspecting the discovered data and services from the web, as well as the knowledge present in service and content discovery rules, in order to anticipate the contents and services to be found on specific web resources. By defining plans, an agent can be configured to target specific resources. The discovery framework has been evaluated on different scenarios, each one covering different levels of the framework. The Contenidos a la Carta project deals with the mashing-up of news from electronic newspapers, and the framework was used for the discovery and extraction of pieces of news from the web.
Similarly, the Resulta and VulneraNET projects cover the discovery of ideas and of security knowledge on the web, respectively. The service level is covered in the OMELETTE project, where mashup components such as services and widgets are discovered in component repositories on the web. The agent level is applied to the crawling of services and news in these scenarios, highlighting how the semantic description of rules and extracted data can provide complex behaviours and orchestrations of tasks on the web. The main contributions of the thesis are: the unified framework for discovery, which allows configuring agents to perform automated tasks; a scraping ontology for the construction of mappings for scraping web resources; a novel first-order logic rule induction algorithm for the automated construction and maintenance of these mappings from the visual information in web resources; and a common unified model for the discovery of services, which allows sharing service descriptions. Future work comprises the further extension of service probing, resource ranking, the extension of the Scraping Ontology, extensions of the agent model, and the construction of a base of discovery rules.
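
A minimal sketch of what a content-level discovery rule could look like in practice follows; the selectors, property names, and sample markup are invented for illustration, and the thesis defines its own rule language and scraping ontology rather than the ad hoc dictionary used here.

    from bs4 import BeautifulSoup

    # A content discovery rule maps a CSS selector in a resource's HTML representation
    # onto a semantic property (illustrative property names, not the thesis ontology).
    NEWS_RULES = {
        "headline": "article h1.title",
        "author":   "article span.byline",
        "body":     "article div.content p",
    }

    def discover_contents(html, rules):
        """Apply discovery rules to an HTML representation and return
        semantically labelled contents."""
        soup = BeautifulSoup(html, "html.parser")
        return {prop: " ".join(node.get_text(strip=True) for node in soup.select(selector))
                for prop, selector in rules.items()}

    sample = """
    <article>
      <h1 class="title">Example headline</h1>
      <span class="byline">Jane Doe</span>
      <div class="content"><p>First paragraph.</p><p>Second paragraph.</p></div>
    </article>
    """
    print(discover_contents(sample, NEWS_RULES))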