894 results for Unified Model Reference


Relevance:

80.00%

Publisher:

Abstract:

Eukaryotic mRNAs with premature translation-termination codons (PTCs) are recognized and degraded by a process referred to as nonsense-mediated mRNA decay (NMD). The evolutionary conservation of the core NMD factors UPF1, UPF2 and UPF3 would imply a similar basic mechanism of PTC recognition in all eukaryotes. However, unlike NMD in yeast, which targets PTC-containing mRNAs irrespective of whether their 5' cap is bound by the cap-binding complex (CBC) or by the eukaryotic initiation factor 4E (eIF4E), mammalian NMD has been claimed to be restricted to CBC-bound mRNAs during the pioneer round of translation. In our recent study we compared the decay kinetics of two NMD reporter systems in mRNA fractions bound to either CBC or eIF4E in human cells. Our findings reveal that NMD destabilizes eIF4E-bound transcripts as efficiently as those associated with CBC. These results corroborate an emerging unified model for NMD substrate recognition, according to which NMD can ensue at every aberrant translation termination event. Additionally, our results indicate that the closed-loop structure of mRNA forms only after the replacement of CBC with eIF4E at the 5' cap.

Relevance:

80.00%

Publisher:

Abstract:

Immunoassays are essential in the workup of patients with suspected heparin-induced thrombocytopenia. However, the diagnostic accuracy is uncertain with regard to different classes of assays, antibody specificities, thresholds, test variations, and manufacturers. We aimed to assess diagnostic accuracy measures of available immunoassays and to explore sources of heterogeneity. We performed comprehensive literature searches and applied strict inclusion criteria. Finally, 49 publications comprising 128 test evaluations in 15,199 patients were included in the analysis. Methodological quality according to the revised tool for quality assessment of diagnostic accuracy studies was moderate. Diagnostic accuracy measures were calculated with the unified model (comprising a bivariate random-effects model and a hierarchical summary receiver operating characteristic model). Important differences were observed between classes of immunoassays, type of antibody specificity, thresholds, application of a confirmation step, and manufacturers. A combination of high sensitivity (>95%) and high specificity (>90%) was found in only 5 tests: polyspecific enzyme-linked immunosorbent assay (ELISA) with intermediate threshold (Genetic Testing Institute, Asserachrom), particle gel immunoassay, lateral flow immunoassay, polyspecific chemiluminescent immunoassay (CLIA) with a high threshold, and immunoglobulin G (IgG)-specific CLIA with a low threshold. Borderline results (sensitivity, 99.6%; specificity, 89.9%) were observed for the IgG-specific Genetic Testing Institute-ELISA with low threshold. Diagnostic accuracy appears to be inadequate in tests with high thresholds (ELISA; IgG-specific CLIA), a combination of IgG specificity and intermediate thresholds (ELISA, CLIA), a high-dose heparin confirmation step (ELISA), and the particle immunofiltration assay. When making treatment decisions, clinicians should be aware of the diagnostic characteristics of the tests used; it is recommended that they estimate posttest probabilities from likelihood ratios together with pretest probabilities obtained from clinical scoring tools.
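A minimal worked example of the recommended posttest-probability calculation is sketched below, using the IgG-specific GTI-ELISA figures quoted above; the 10% pretest probability is an assumed illustrative value (e.g., an intermediate clinical score), not a result from the study.

```python
# Illustrative only: Bayes' rule in odds form, applied to a positive test result.
sensitivity, specificity = 0.996, 0.899   # IgG-specific GTI-ELISA, low threshold (from the abstract)
pretest_p = 0.10                          # assumed pretest probability (hypothetical example)

lr_pos = sensitivity / (1.0 - specificity)        # positive likelihood ratio, ~9.9
pretest_odds = pretest_p / (1.0 - pretest_p)
posttest_odds = pretest_odds * lr_pos
posttest_p = posttest_odds / (1.0 + posttest_odds)

print(f"LR+ = {lr_pos:.1f}, posttest probability after a positive result = {posttest_p:.0%}")
# -> LR+ = 9.9, posttest probability after a positive result = 52%
```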

Relevance:

80.00%

Publisher:

Abstract:

Complex diseases such as cancer result from multiple genetic changes and environmental exposures. Owing to the rapid development of genotyping and sequencing technologies, we are now able to assess the causal effects of many genetic and environmental factors more accurately. Genome-wide association studies have localized many causal genetic variants predisposing to certain diseases. However, these studies explain only a small portion of the heritability of diseases. More advanced statistical models are needed to identify and characterize additional genetic and environmental factors and their interactions, which will enable us to better understand the causes of complex diseases. In the past decade, thanks to increasing computational capabilities and novel statistical developments, Bayesian methods have been widely applied in genetics/genomics research and have demonstrated superiority over some standard approaches in certain research areas. Gene-environment and gene-gene interaction studies are among the areas where Bayesian methods can fully exert their advantages. This dissertation focuses on developing new Bayesian statistical methods for data analysis with complex gene-environment and gene-gene interactions, as well as extending some existing methods for gene-environment interactions to other related areas. It includes three parts: (1) deriving a Bayesian variable selection framework for hierarchical gene-environment and gene-gene interactions; (2) developing Bayesian Natural and Orthogonal Interaction (NOIA) models for gene-environment interactions; and (3) extending the applications of two Bayesian statistical methods developed for gene-environment interaction studies to other related settings, such as adaptive borrowing of historical data. We propose a Bayesian hierarchical mixture model framework that allows us to investigate genetic and environmental effects, gene-gene interactions (epistasis) and gene-environment interactions in the same model. It is well known that, in many practical situations, there exists a natural hierarchical structure between the main effects and interactions in a linear model. Here we propose a model that incorporates this hierarchical structure into the Bayesian mixture model, such that irrelevant interaction effects can be removed more efficiently, resulting in more robust, parsimonious and powerful models. We evaluate both the 'strong hierarchical' and 'weak hierarchical' models, which specify that both or at least one of the main effects of the interacting factors, respectively, must be present for the interaction to be included in the model. Extensive simulation results show that the proposed strong and weak hierarchical mixture models control the proportion of false positive discoveries and provide a powerful approach for identifying the predisposing main effects and interactions in studies with complex gene-environment and gene-gene interactions. We also compare these two models with an 'independent' model that does not impose the hierarchical constraint and observe their superior performance in most of the situations considered. The proposed models are applied to real data analyses of gene-environment interactions in lung cancer and cutaneous melanoma case-control studies. Bayesian statistical models have the advantage of being able to incorporate useful prior information into the modeling process.
Moreover, the Bayesian mixture model outperforms the multivariate logistic model in terms of parameter estimation and variable selection in most cases. Our proposed models impose hierarchical constraints that further improve the Bayesian mixture model by reducing the proportion of false positive findings among the identified interactions while successfully identifying the reported associations. This is practically appealing when investigating causal factors among a moderate number of candidate genetic and environmental factors along with a relatively large number of interactions. The Natural and Orthogonal Interaction (NOIA) models of genetic effects were previously developed to provide an analysis framework in which the estimated effects for a quantitative trait are statistically orthogonal regardless of whether Hardy-Weinberg Equilibrium (HWE) holds within loci. Ma et al. (2012) recently developed a NOIA model for gene-environment interaction studies and showed its advantages for detecting the true main effects and interactions, compared with the usual functional model. In this project, we propose a novel Bayesian statistical model that combines the Bayesian hierarchical mixture model with both the NOIA statistical model and the usual functional model. The proposed Bayesian NOIA model shows more power for detecting non-null effects, with higher marginal posterior probabilities. We also review two Bayesian statistical models (a Bayesian empirical shrinkage-type estimator and Bayesian model averaging) that were developed for gene-environment interaction studies. Inspired by these models, we develop two novel statistical methods that handle related problems such as borrowing data from historical studies. The proposed methods are analogous to the gene-environment interaction methods in that they balance statistical efficiency and bias within a unified model. Through extensive simulation studies, we compare the operating characteristics of the proposed models with existing models, including the hierarchical meta-analysis model. The results show that the proposed approaches adaptively borrow the historical data in a data-driven way. These novel models may have a broad range of statistical applications in both genetic/genomic and clinical studies.
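As a small illustration of the 'strong' versus 'weak' hierarchy constraints discussed above, the sketch below encodes the two inclusion rules; how they would be wired into a mixture-prior sampler is our own simplified assumption, not the dissertation's implementation.

```python
# Minimal sketch of the heredity constraints used in hierarchical interaction selection.
# gamma_main marks which main effects (genetic or environmental) are currently in the model;
# the functions say whether an interaction between factors j and k may be included.
def strong_hierarchy_allows(gamma_main, j, k):
    # strong heredity: both parent main effects must be in the model
    return gamma_main[j] and gamma_main[k]

def weak_hierarchy_allows(gamma_main, j, k):
    # weak heredity: at least one parent main effect must be in the model
    return gamma_main[j] or gamma_main[k]

# In a spike-and-slab / mixture-prior sampler, the prior inclusion probability of an
# interaction coefficient would be forced to ~0 whenever the chosen rule returns False,
# so irrelevant interactions are pruned more aggressively than under an "independent" prior.
gamma_main = {"G1": True, "G2": False, "E1": True}
print(strong_hierarchy_allows(gamma_main, "G1", "E1"))  # True
print(strong_hierarchy_allows(gamma_main, "G1", "G2"))  # False
print(weak_hierarchy_allows(gamma_main, "G1", "G2"))    # True
```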

Relevance:

80.00%

Publisher:

Abstract:

This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers to reuse services and contents to build new web applications. However, developers face a problem of information flood when searching for appropriate services or resources for their combination. To contribute to overcoming this problem, a framework is defined for the discovery of services and resources. In this framework, discovery is performed at three levels: content, service and agent. The content level involves the information available in web resources. The web follows the Representational State Transfer (REST) architectural style, in which resources are returned as representations from servers to clients. These representations usually employ the HyperText Markup Language (HTML), which, along with Cascading Style Sheets (CSS), describes the markup employed to render representations in a web browser. Although the use of Semantic Web standards such as the Resource Description Framework (RDF) makes this architecture suitable for automatic processes to use the information present in web resources, these standards are too often not employed, so automation must rely on processing HTML. This process, often referred to as Screen Scraping in the literature, constitutes content discovery in the proposed framework. At this level, discovery rules indicate how the different pieces of data in resources' representations are mapped onto semantic entities. By processing discovery rules on web resources, semantically described contents can be obtained from them. The service level involves the operations that can be performed on the web. The current web allows users to perform different tasks such as search, blogging, e-commerce, or social networking. To describe the possible services in RESTful architectures, a high-level, feature-oriented service methodology is proposed at this level. This lightweight description framework allows defining service discovery rules to identify operations in interactions with REST resources. Discovery is thus performed by applying discovery rules to contents discovered in REST interactions, in a novel process called service probing. Service discovery can also be performed by modelling services as contents, i.e., by retrieving Application Programming Interface (API) documentation and API listings in service registries such as ProgrammableWeb. For this, a unified model for composable components in Mashup-Driven Development (MDD) has been defined after the analysis of service repositories from the web. The agent level involves the orchestration of the discovery of services and contents. At this level, agent rules allow specifying behaviours for crawling and executing services, which results in the fulfilment of a high-level goal. Agent rules are plans that allow introspection over the data and services discovered from the web, and over the knowledge present in service and content discovery rules, to anticipate the contents and services to be found on specific resources from the web. By defining plans, an agent can be configured to target specific resources. The discovery framework has been evaluated on different scenarios, each one covering different levels of the framework. The Contenidos a la Carta project deals with the mashing-up of news from electronic newspapers, and the framework was used for the discovery and extraction of pieces of news from the web.
Similarly, the Resulta and VulneraNET projects cover the discovery of ideas and of security knowledge on the web, respectively. The service level is covered in the OMELETTE project, where mashup components such as services and widgets are discovered from component repositories on the web. The agent level is applied to the crawling of services and news in these scenarios, highlighting how the semantic description of rules and extracted data can provide complex behaviours and orchestrations of tasks on the web. The main contribution of the thesis is the unified framework for discovery, which allows configuring agents to perform automated tasks. Also, a Scraping Ontology has been defined for the construction of mappings for scraping web resources. A novel first-order-logic rule induction algorithm is defined for the automated construction and maintenance of these mappings from the visual information in web resources. Additionally, a common unified model for the discovery of services is defined, which allows sharing service descriptions. Future work comprises the further extension of service probing, resource ranking, the extension of the Scraping Ontology, extensions of the agent model, and constructing a base of discovery rules.
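To make the content-discovery level more concrete, here is a small illustrative sketch of what a discovery rule mapping HTML data onto semantic entities could look like; the rule format, class/property IRIs and selectors are hypothetical and are not taken from the thesis's Scraping Ontology.

```python
# Illustrative sketch only: a content-discovery rule in the spirit described above,
# mapping CSS selectors in an HTML representation onto semantic entities.
from bs4 import BeautifulSoup

news_rule = {
    "entity": "http://example.org/ns#NewsItem",   # hypothetical class IRI
    "properties": {
        "http://purl.org/dc/terms/title": "article h1",
        "http://purl.org/dc/terms/creator": "article .byline",
        "http://purl.org/dc/terms/abstract": "article p.lead",
    },
}

def discover(html: str, rule: dict) -> dict:
    """Apply a discovery rule to one REST representation and return an entity description."""
    soup = BeautifulSoup(html, "html.parser")
    entity = {"@type": rule["entity"]}
    for prop, selector in rule["properties"].items():
        node = soup.select_one(selector)
        if node is not None:
            entity[prop] = node.get_text(strip=True)
    return entity
```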

Relevance:

80.00%

Publisher:

Abstract:

In this review, we attempt to summarize, in a critical manner, what is currently known about the processes of condensation and decondensation of chromatin fibers. We begin with a critical analysis of the possible mechanisms for condensation, considering both old and new evidence as to whether the linker DNA between nucleosomes bends or remains straight in the condensed structure. Concluding that the preponderance of evidence favours straight linkers, we ask what other fundamental process might allow condensation, and argue that there is evidence for linker histone-induced contraction of the internucleosome angle as salt concentration is raised toward physiological levels. We also ask how certain specific regions of chromatin can become decondensed, even at physiological salt concentration, to allow transcription. We consider linker histone depletion and acetylation of the core histone tails as possible mechanisms. On the basis of recent evidence, we suggest a unified model linking targeted acetylation of specific genomic regions to linker histone depletion, with unfolding of the condensed fiber as a consequence.

Relevance:

80.00%

Publisher:

Abstract:

This research presents a case study whose objective was to analyze the acceptance of the Portal Inovação, identifying the factors that predict behavioural intention to use and usage behaviour driving adoption of the technology by its users, through an extension of the Unified Theory of Acceptance and Use of Technology (UTAUT) of Venkatesh et al. (2003). The object of the research, the Portal Inovação, was developed by the Ministry of Science, Technology and Innovation (MCTI) in partnership with the Centro de Gestão e Estudos Estratégicos (CGEE), the Associação Brasileira de Desenvolvimento Industrial (ABDI) and the Instituto Stela, to meet the demands of the country's National System of Science, Technology and Innovation (SNCTI). To achieve the proposed objectives, we combined a qualitative approach, supported by the case study method (YIN, 2005), with a quantitative approach based on the UTAUT methodology, applied to users of the portal and yielding 264 validated respondents. The analysis drew on bibliographic research on electronic government (e-Gov), the Internet, the National Innovation System and technology acceptance models, as well as official public data and legislation pertaining to the technological innovation sector. The quantitative analysis technique consisted of structural equation modeling based on the PLS (Partial Least Squares) algorithm with 1,000 bootstrap resamples. The main results showed that Performance Expectancy and Social Influence had high magnitude and predictive significance for Behavioural Intention to Use the portal, and that Facilitating Conditions had a significant impact on users' Use Behaviour. The main conclusion of this study is that, for the acceptance of a governmental portal whose adoption is voluntary, the social factor is highly influential in the intention to use the technology, as are aspects related to the user's resulting productivity and sense of usefulness, together with ease of interaction and mastery of the tool. These findings open new perspectives for research and studies within e-Gov initiatives, as well as for the appropriate direction of planning, monitoring and evaluation of government projects.

Relevance:

80.00%

Publisher:

Abstract:

It is important to help researchers find valuable papers in a large literature collection. To this end, many graph-based ranking algorithms have been proposed. However, most of these algorithms suffer from ranking bias, which hurts the usefulness of a ranking algorithm because it returns a ranking list with an undesirable time distribution. This paper is a focused study on how to alleviate ranking bias by leveraging the heterogeneous network structure of the literature collection. We propose a new graph-based ranking algorithm, MutualRank, that integrates mutual reinforcement relationships among networks of papers, researchers, and venues to achieve a more synthetic, accurate, and less biased ranking than previous methods. MutualRank provides a unified model that involves both intra- and inter-network information for ranking papers, researchers, and venues simultaneously. We use the ACL Anthology Network as the benchmark data set and construct the gold standard from the computational linguistics course websites of well-known universities and two well-known textbooks. The experimental results show that MutualRank greatly outperforms state-of-the-art competitors, including PageRank, HITS, CoRank, Future Rank, and P-Rank, in ranking papers, both in improving ranking effectiveness and in alleviating ranking bias. Rankings of researchers and venues by MutualRank are also quite reasonable.
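The coupled power-iteration sketch below illustrates the general idea of mutual reinforcement across paper, author and venue networks; the specific update rules and normalization are our simplified assumptions, not MutualRank's published formulation.

```python
# Simplified illustration of mutual reinforcement across heterogeneous networks.
import numpy as np

def mutual_rank(W_pp, W_pa, W_pv, iters=200, tol=1e-10):
    """W_pp: paper-paper citation weights; W_pa: paper-author links; W_pv: paper-venue links.
    All matrices are non-negative; rows index papers, columns index papers/authors/venues."""
    n_p, n_a = W_pa.shape
    n_v = W_pv.shape[1]
    p = np.full(n_p, 1.0 / n_p)
    a = np.full(n_a, 1.0 / n_a)
    v = np.full(n_v, 1.0 / n_v)
    for _ in range(iters):
        p_new = W_pp @ p + W_pa @ a + W_pv @ v   # papers reinforced by citing papers, authors, venues
        a_new = W_pa.T @ p                       # authors reinforced by the papers they write
        v_new = W_pv.T @ p                       # venues reinforced by the papers they publish
        p_new /= p_new.sum()                     # keep each score vector on the unit simplex
        a_new /= a_new.sum()
        v_new /= v_new.sum()
        converged = np.abs(p_new - p).sum() < tol
        p, a, v = p_new, a_new, v_new
        if converged:
            break
    return p, a, v
```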

Relevance:

80.00%

Publisher:

Abstract:

Memory (cache, DRAM, and disk) is in charge of providing data and instructions to a computer's processor. To maximize performance, the speeds of the memory and the processor should be equal. However, using memory that always matches the speed of the processor is prohibitively expensive. Computer hardware designers have managed to drastically lower the cost of the system through the use of memory caches, by sacrificing some performance. A cache is a small piece of fast memory that stores popular data so it can be accessed faster. Modern computers have evolved into a hierarchy of caches, where each memory level is the cache for a larger and slower memory level immediately below it. Thus, by using caches, manufacturers are able to store terabytes of data at the cost of the cheapest memory while achieving speeds close to that of the fastest one. The most important decision in managing a cache is what data to store in it. Failing to make good decisions can lead to performance overheads and over-provisioning. Surprisingly, caches choose data to store based on policies that have not changed in principle for decades. However, computing paradigms have changed radically, leading to two noticeably different trends. First, caches are now consolidated across hundreds to even thousands of processes. Second, caching is being employed at new levels of the storage hierarchy due to the availability of high-performance flash-based persistent media. This brings four problems. First, as the number of workloads sharing a cache increases, it becomes more likely that they contain duplicated data. Second, consolidation creates contention for caches and, if not managed carefully, this translates into wasted space and sub-optimal performance. Third, as contended caches are shared by more workloads, administrators need to carefully estimate specific per-workload requirements across the entire memory hierarchy in order to meet per-workload performance goals. Finally, current cache write policies are unable to simultaneously provide performance and consistency guarantees for the new levels of the storage hierarchy. We addressed these problems by modeling their impact and by proposing solutions for each of them. First, we measured and modeled the amount of duplication at the buffer cache level and contention in real production systems. Second, we created a unified model of workload cache usage under contention to be used by administrators for provisioning, or by process schedulers to decide which processes to run together. Third, we proposed methods for removing cache duplication and for eliminating the space wasted because of contention. Finally, we proposed a technique to improve the consistency guarantees of write-back caches while preserving their performance benefits.
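As a toy illustration of why consolidation creates contention (and why a per-workload model of cache usage helps with provisioning), the sketch below simulates two workloads on one shared LRU cache versus two private partitions of the same total size; the traces, sizes and LRU policy are illustrative assumptions, not the dissertation's model.

```python
# Tiny LRU simulator: a workload whose hot set fits in a private partition can lose
# essentially all of its hits when consolidated with a fast streaming workload.
from collections import OrderedDict, defaultdict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()
        self.hits = defaultdict(int)
        self.refs = defaultdict(int)

    def access(self, workload, key):
        self.refs[workload] += 1
        if key in self.data:
            self.hits[workload] += 1
            self.data.move_to_end(key)
        else:
            if len(self.data) >= self.capacity:
                self.data.popitem(last=False)      # evict the least-recently-used block
            self.data[key] = True

    def hit_rate(self, workload):
        return self.hits[workload] / max(self.refs[workload], 1)

# Workload A re-references a 50-block hot set; workload B streams new blocks 3x faster.
trace, b = [], 0
for i in range(3000):
    trace.append(("A", f"a{i % 50}"))
    for _ in range(3):
        trace.append(("B", f"b{b}"))
        b += 1

shared = LRUCache(100)                              # one consolidated cache
for w, k in trace:
    shared.access(w, k)

private = {"A": LRUCache(50), "B": LRUCache(50)}    # same total budget, partitioned
for w, k in trace:
    private[w].access(w, k)

print("shared  A:", round(shared.hit_rate("A"), 2), " B:", round(shared.hit_rate("B"), 2))
print("private A:", round(private["A"].hit_rate("A"), 2), " B:", round(private["B"].hit_rate("B"), 2))
```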

Relevance:

80.00%

Publisher:

Abstract:

Synchronous machines, widely used in energy generation systems, require constant voltage and frequency to provide good power quality. However, for large load variations it is difficult to keep the outputs at their nominal values, due to parametric uncertainties, nonlinearities and coupling among variables. We therefore propose applying the Dual Mode Adaptive Robust Controller (DMARC) in the field flux control loop, replacing the traditional PI controller. The DMARC links a Model Reference Adaptive Controller (MRAC) and a Variable Structure Model Reference Adaptive Controller (VS-MRAC), incorporating the transient performance advantages of the VS-MRAC and the steady-state properties of the MRAC. Simulation results are included to corroborate the theoretical study.
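For readers unfamiliar with the MRAC half of the scheme, here is a minimal sketch of a gradient (MIT-rule) model-reference adaptive controller for a first-order plant; the plant, reference model and gains are illustrative assumptions and not the paper's synchronous-machine model.

```python
# Minimal MIT-rule MRAC for dy/dt = -a*y + b*u tracking dym/dt = -am*ym + bm*r.
dt, T = 1e-3, 20.0
a, b = 1.0, 0.5      # "unknown" plant parameters
am, bm = 2.0, 2.0    # reference model
gamma = 2.0          # adaptation gain

y = ym = rf = yf = 0.0
theta1 = theta2 = 0.0          # adaptive gains: u = theta1*r - theta2*y
for k in range(int(T / dt)):
    t = k * dt
    r = 1.0 if (t % 8.0) < 4.0 else -1.0          # square-wave reference
    u = theta1 * r - theta2 * y
    e = y - ym                                    # model-following error
    # sensitivity signals: r and y filtered through the reference-model dynamics
    rf += dt * (-am * rf + am * r)
    yf += dt * (-am * yf + am * y)
    # MIT rule: move the gains along the negative gradient of e**2 / 2
    theta1 += dt * (-gamma * e * rf)
    theta2 += dt * (gamma * e * yf)
    # integrate plant and reference model (forward Euler)
    y += dt * (-a * y + b * u)
    ym += dt * (-am * ym + bm * r)

# Perfect model matching corresponds to theta1 = bm/b = 4 and theta2 = (am - a)/b = 2.
print(f"theta1={theta1:.2f}, theta2={theta2:.2f}, |e|={abs(y - ym):.4f}")
```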

Relevance:

80.00%

Publisher:

Abstract:

Control of MIMO (Multiple Input Multiple Output) systems is often carried out by several loops of classical controllers that operate under constraints and exhibit poor performance. Adaptive control techniques are an interesting alternative for improving the performance of such systems, for example MRAC (Model Reference Adaptive Control) controllers, which, when well designed, allow the plant dynamics to be chosen so as to follow a reference model. This work presents a decoupling strategy for a MIMO system of three coupled tanks and the design of an MRAC controller for it.

Relevance:

80.00%

Publisher:

Abstract:

Various unification schemes interpret the complex phenomenology of quasars and luminous active galactic nuclei (AGN) in terms of a simple picture involving a central black hole, an accretion disc and an associated outflow. Here, we continue our tests of this paradigm by comparing quasar spectra to synthetic spectra of biconical disc wind models, produced with our state-of-the-art Monte Carlo radiative transfer code. Previously, we have shown that we could produce synthetic spectra resembling those of observed broad absorption line (BAL) quasars, but only if the X-ray luminosity was limited to 10^43 erg s^-1. Here, we introduce a simple treatment of clumping, and find that a filling factor of ~0.01 moderates the ionization state sufficiently for BAL features to form in the rest-frame UV at more realistic X-ray luminosities. Our fiducial model shows good agreement with AGN X-ray properties, and the wind produces strong line emission in, e.g., Lyα and C IV 1550 Å at low inclinations. At high inclinations, the spectra possess prominent LoBAL features. Despite these successes, we cannot reproduce all emission lines seen in quasar spectra with the correct equivalent-width ratios, and we find an angular dependence of emission-line equivalent width despite the similarities in the observed emission-line properties of BAL and non-BAL quasars. Overall, our work suggests that biconical winds can reproduce much of the qualitative behaviour expected from a unified model, but we cannot yet provide quantitative matches with quasar properties at all viewing angles. Whether disc winds can successfully unify quasars therefore remains an open question.
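The way clumping lowers the ionization state can be seen from the standard microclumping scaling below; this is the generic textbook relation, stated here for context rather than taken from the paper's detailed treatment.

```latex
% Generic microclumping scaling: with volume filling factor f_V, the same mass occupies
% clumps of density n_cl = n_smooth / f_V, so the ionization parameter inside the clumps
% is reduced by the factor f_V.
\[
  U = \frac{Q_{\mathrm{H}}}{4\pi r^{2} c\, n}, \qquad
  n_{\mathrm{cl}} = \frac{n_{\mathrm{smooth}}}{f_V}
  \quad\Longrightarrow\quad
  U_{\mathrm{cl}} = f_V\, U_{\mathrm{smooth}} .
\]
% A filling factor of f_V ~ 0.01 therefore lowers the clump ionization parameter by ~100,
% helping UV ions such as C IV survive at higher X-ray luminosities.
```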

Relevance:

80.00%

Publisher:

Abstract:

In this work, the existing understanding of flame spread dynamics is enhanced through an extensive study of the heat transfer from flames spreading vertically upwards across 5 cm wide, 20 cm tall samples of extruded poly(methyl methacrylate) (PMMA). These experiments have provided highly spatially resolved measurements of flame-to-surface heat flux and material burning rate at the critical length scale of interest, with a level of accuracy and detail unmatched by previous empirical or computational studies. Using these measurements, a wall flame model was developed that describes a flame's heat feedback profile (both in the continuous flame region and the thermal plume above) solely as a function of material burning rate. Additional experiments were conducted to measure flame heat flux and sample mass loss rate as flames spread vertically upwards over the surface of seven other commonly used polymers, two of which are glass-reinforced composite materials. Using these measurements, our wall flame model has been generalized such that it can predict heat feedback from flames supported by a wide range of materials. For the seven materials tested here – which present a varied range of burning behaviors including dripping, polymer melt flow, sample burnout, and heavy soot formation – model-predicted flame heat flux has been shown to match experimental measurements (taken across the full length of the flame) with an average accuracy of 3.9 kW m^-2 (approximately 10-15% of peak measured flame heat flux). This flame model has since been coupled with a powerful solid-phase pyrolysis solver, ThermaKin2D, which computes the transient rate of gaseous fuel production of constituents of a pyrolyzing solid in response to an external heat flux, based on fundamental physical and chemical properties. Together, this unified model captures the two fundamental controlling mechanisms of upward flame spread – gas-phase flame heat transfer and solid-phase material degradation. This has enabled simulations of flame spread dynamics with a reasonable computational cost and accuracy beyond that of current models. This unified model of material degradation provides the framework to quantitatively study material burning behavior in response to a wide range of common fire scenarios.
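To show how a coupling between a gas-phase heat-feedback closure and a solid-phase pyrolysis response can drive upward spread, here is a deliberately simplified sketch; the flame-height closure, ignition criterion and all constants are illustrative assumptions, and the pyrolysis step is only a placeholder for a real solver such as ThermaKin2D.

```python
# Schematic coupling loop: gas phase maps the pyrolysing region to a heat-flux profile,
# solid phase accumulates that heat until new cells ignite and the spread front advances.
import numpy as np

H, N = 0.20, 200                       # 20 cm tall sample discretised into N cells
y = np.linspace(0.0, H, N)
dt = 0.1                               # time step, s
q_flame = 25.0                         # kW/m^2 applied below the flame tip (illustrative)
E_ignite = 150.0                       # kJ/m^2 a cell must absorb before igniting (illustrative)

burning = y <= 0.01                    # ignite the bottom 1 cm
absorbed = np.zeros(N)                 # heat accumulated by not-yet-burning cells

for step in range(600):
    y_p = y[burning].max()             # top of the pyrolysing (burning) region
    y_f = min(2.0 * y_p, H)            # gas phase: flame tip scales with pyrolysing length
    q = np.where(y <= y_f, q_flame, 0.0)
    preheated = (~burning) & (q > 0.0) # solid-phase placeholder: accumulate heat, then ignite
    absorbed[preheated] += q[preheated] * dt
    burning |= absorbed >= E_ignite
    if burning.all():
        break

print(f"sample fully involved after ~{(step + 1) * dt:.0f} s")
```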

Relevance:

80.00%

Publisher:

Abstract:

We present a new radiation scheme for the Oxford Planetary Unified Model System for Venus, suitable for the solar and thermal bands. This new and fast radiative parameterization uses a different approach in the two main radiative wavelength bands: solar radiation (0.1-5.5 µm) and thermal radiation (1.7-260 µm). The solar radiation calculation is based on the delta-Eddington approximation (a two-stream-type method) with an adding-layer method. For the thermal radiation case, a code based on an absorptivity/emissivity formulation is used. The new radiative transfer formulation is intended to be computationally light, to allow its incorporation in 3D global circulation models, while still allowing the calculation of the effect of atmospheric conditions on radiative fluxes. This will allow us to investigate dynamical-radiative-microphysical feedbacks. The model's flexibility can also be used to explore uncertainties in the Venus atmosphere, such as the optical properties of the deep atmosphere or the cloud amount. Results for radiative cooling and heating rates and for the global-mean radiative-convective equilibrium temperature profiles under different atmospheric conditions are presented and discussed. This new scheme works on an atmospheric column and can easily be implemented in 3D Venus global circulation models.
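For context, solar two-stream/adding schemes of this kind typically apply the standard delta-Eddington rescaling of the layer optical properties before the flux sweep; the sketch below shows that generic transformation (Joseph, Wiscombe & Weinman 1976), not the paper's actual implementation.

```python
# Standard delta-Eddington rescaling of optical depth, single-scattering albedo and
# asymmetry parameter, so the strong forward-scattering peak of cloud/haze particles
# is treated as unscattered light before the two-stream / adding-layer calculation.
def delta_eddington(tau, omega, g):
    f = g * g                          # forward-peak fraction approximated by g^2
    tau_p = (1.0 - omega * f) * tau
    omega_p = (1.0 - f) * omega / (1.0 - omega * f)
    g_p = (g - f) / (1.0 - f)          # equivalent to g / (1 + g)
    return tau_p, omega_p, g_p

# Example: a strongly forward-scattering, Venus-like cloud layer (illustrative values)
print(delta_eddington(tau=10.0, omega=0.999, g=0.75))
```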