976 results for Unified Model Reference


Relevance: 80.00%

Abstract:

In this review, we attempt to summarize, in a critical manner, what is currently known about the processes of condensation and decondensation of chromatin fibers. We begin with a critical analysis of the possible mechanisms for condensation, considering both old and new evidence as to whether the linker DNA between nucleosomes bends or remains straight in the condensed structure. Concluding that the preponderance of evidence is for straight linkers, we ask what other fundamental process might allow condensation, and argue that there is evidence for linker histone-induced contraction of the internucleosome angle as salt concentration is raised toward physiological levels. We also ask how certain specific regions of chromatin can become decondensed, even at physiological salt concentration, to allow transcription. We consider linker histone depletion and acetylation of the core histone tails as possible mechanisms. On the basis of recent evidence, we suggest a unified model linking targeted acetylation of specific genomic regions to linker histone depletion, with unfolding of the condensed fiber as a consequence.

Relevance: 80.00%

Abstract:

This research presents a case study whose objective was to analyze the acceptance of the Portal Inovação, identifying the predictive factors of behavioral intention to use and of use behavior that drive the adoption of the technology by its users, through an extension of the Unified Theory of Acceptance and Use of Technology (UTAUT) of Venkatesh et al. (2003). The object of the research, the Portal Inovação, was developed by the Ministério da Ciência, Tecnologia e Inovação (MCTI) in partnership with the Centro de Gestão e Estudos Estratégicos (CGEE), the Associação Brasileira de Desenvolvimento Industrial (ABDI), and the Instituto Stela, to meet the demands of Brazil's Sistema Nacional de Ciência, Tecnologia e Inovação (SNCTI). To achieve the proposed objectives, the study combined a qualitative approach, supported by the case-study method (YIN, 2005), with a quantitative one based on the UTAUT methodology, applied to users of the portal and yielding 264 validated respondents. The analysis material comprised a bibliographic survey on electronic government (e-Gov), the Internet, the National Innovation System, and technology acceptance models, together with official public data and legislation pertaining to the technological innovation sector. The quantitative analysis technique consisted of structural equation modeling based on the PLS (Partial Least Squares) algorithm with a bootstrap of 1,000 resamples. The main results showed that Performance Expectancy and Social Influence have high magnitude and predictive significance for the Behavioral Intention to Use the portal, and that facilitating conditions significantly affect users' Use Behavior. The main conclusion of this study is that, when considering the acceptance of a governmental portal whose adoption is voluntary, the social factor is highly influential on the intention to use the technology, as are aspects related to the user's resulting productivity and sense of usefulness, together with ease of interaction with and mastery of the tool. These findings open new perspectives for research and further studies on e-Gov initiatives, as well as for the proper direction of the planning, monitoring, and evaluation of governmental projects.
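
As a rough illustration of the quantitative step described above, the sketch below bootstraps path coefficients for two UTAUT predictors of behavioral intention. It uses plain OLS on synthetic data rather than a full PLS-SEM estimation, and all variable names and values are hypothetical stand-ins, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the UTAUT constructs (hypothetical data,
# not the study's 264 validated responses).
n = 264
performance_expectancy = rng.normal(size=n)
social_influence = rng.normal(size=n)
behavioral_intention = (0.5 * performance_expectancy
                        + 0.3 * social_influence
                        + rng.normal(scale=0.5, size=n))

X = np.column_stack([performance_expectancy, social_influence])

def path_coefficients(X, y):
    """OLS path coefficients (simplified; real PLS-SEM first estimates
    latent variable scores from their indicators)."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[1:]

# 1,000 bootstrap resamples, mirroring the study's bootstrap setting.
boot = np.array([
    path_coefficients(X[idx], behavioral_intention[idx])
    for idx in (rng.integers(0, n, size=n) for _ in range(1000))
])

for name, column in zip(["Performance Expectancy", "Social Influence"], boot.T):
    lo, hi = np.percentile(column, [2.5, 97.5])
    print(f"{name}: beta = {column.mean():.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```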

Relevance: 80.00%

Abstract:

It is important to help researchers find valuable papers from a large literature collection. To this end, many graph-based ranking algorithms have been proposed. However, most of these algorithms suffer from ranking bias, which hurts the usefulness of a ranking algorithm because it returns a ranking list with an undesirable time distribution. This paper is a focused study on how to alleviate ranking bias by leveraging the heterogeneous network structure of the literature collection. We propose a new graph-based ranking algorithm, MutualRank, that integrates mutual reinforcement relationships among networks of papers, researchers, and venues to achieve a more synthetic, accurate, and less biased ranking than previous methods. MutualRank provides a unified model that involves both intra- and inter-network information for ranking papers, researchers, and venues simultaneously. We use the ACL Anthology Network as the benchmark data set and construct the gold standard from the computational linguistics course websites of well-known universities and two well-known textbooks. The experimental results show that MutualRank greatly outperforms the state-of-the-art competitors, including PageRank, HITS, CoRank, FutureRank, and P-Rank, in ranking papers, both in improving ranking effectiveness and in alleviating ranking bias. The rankings of researchers and venues produced by MutualRank are also quite reasonable.
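
The sketch below illustrates the mutual-reinforcement idea in toy form: paper, researcher, and venue scores are propagated through intra- and inter-network links by a coupled power iteration. The matrices, weights, and normalizations are illustrative assumptions, not the published MutualRank formulation.

```python
import numpy as np

def normalize_columns(M):
    """Column-normalize so scores propagated through M sum to 1."""
    s = M.sum(axis=0, keepdims=True)
    return M / np.where(s == 0, 1, s)

def mutual_rank(W_pp, W_pa, W_pv, alpha=0.6, iters=100, tol=1e-10):
    """Toy coupled power iteration in the spirit of MutualRank: paper (p),
    author (a), and venue (v) scores reinforce one another through
    intra-network citations (W_pp) and inter-network links (W_pa, W_pv)."""
    n_p, n_a = W_pa.shape
    n_v = W_pv.shape[1]
    p = np.ones(n_p) / n_p
    a = np.ones(n_a) / n_a
    v = np.ones(n_v) / n_v
    C = normalize_columns(W_pp)                      # paper -> paper
    PA, AP = normalize_columns(W_pa), normalize_columns(W_pa.T)
    PV, VP = normalize_columns(W_pv), normalize_columns(W_pv.T)
    for _ in range(iters):
        # Papers draw on citations plus reinforcement from authors/venues.
        p_new = alpha * C @ p + (1 - alpha) * 0.5 * (PA @ a + PV @ v)
        a_new = AP @ p                               # authors from papers
        v_new = VP @ p                               # venues from papers
        p_new, a_new, v_new = (x / x.sum() for x in (p_new, a_new, v_new))
        if np.abs(p_new - p).sum() < tol:
            break
        p, a, v = p_new, a_new, v_new
    return p, a, v

rng = np.random.default_rng(1)
p, a, v = mutual_rank(rng.random((6, 6)), rng.random((6, 4)), rng.random((6, 2)))
print(np.round(p, 3))
```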

Relevance: 80.00%

Abstract:

Memory (cache, DRAM, and disk) is in charge of providing data and instructions to a computer's processor. To maximize performance, the speeds of the memory and the processor should be equal. However, memory that always matches the speed of the processor is prohibitively expensive. Computer hardware designers have managed to drastically lower the cost of the system through memory caches, at the price of some performance. A cache is a small piece of fast memory that stores popular data so it can be accessed faster. Modern computers have evolved into a hierarchy of caches, where each memory level acts as the cache for a larger and slower memory level immediately below it. Thus, by using caches, manufacturers are able to store terabytes of data at the cost of the cheapest memory while achieving speeds close to that of the fastest one.

The most important decision about managing a cache is what data to store in it. Failing to make good decisions can lead to performance overheads and over-provisioning. Surprisingly, caches choose data to store based on policies that have not changed in principle for decades. However, computing paradigms have changed radically, leading to two noticeably different trends. First, caches are now consolidated across hundreds or even thousands of processes. Second, caching is being employed at new levels of the storage hierarchy due to the availability of high-performance flash-based persistent media. This brings four problems. First, as the workloads sharing a cache increase, it is more likely that they contain duplicated data. Second, consolidation creates contention for caches, and if not managed carefully, it translates into wasted space and sub-optimal performance. Third, as contended caches are shared by more workloads, administrators need to carefully estimate specific per-workload requirements across the entire memory hierarchy in order to meet per-workload performance goals. And finally, current cache write policies are unable to simultaneously provide performance and consistency guarantees for the new levels of the storage hierarchy.

We addressed these problems by modeling their impact and by proposing solutions for each of them. First, we measured and modeled the amount of duplication at the buffer cache level and contention in real production systems. Second, we created a unified model of workload cache usage under contention, to be used by administrators for provisioning or by process schedulers to decide which processes to run together. Third, we proposed methods for removing cache duplication and eliminating the space wasted by contention. And finally, we proposed a technique to improve the consistency guarantees of write-back caches while preserving their performance benefits.
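
As a concrete reference point for the write-policy trade-off mentioned above, here is a minimal LRU cache sketch with selectable write-through or write-back behavior. It is an idealized illustration, not the dissertation's system.

```python
from collections import OrderedDict

class Cache:
    """Minimal LRU cache with a selectable write policy (illustrative).
    'write-through' persists every write immediately; 'write-back' defers
    persistence until eviction, trading consistency for performance."""

    def __init__(self, capacity, backing_store, policy="write-back"):
        self.capacity = capacity
        self.store = backing_store          # dict-like slower level below us
        self.policy = policy
        self.data = OrderedDict()           # key -> (value, dirty)

    def read(self, key):
        if key in self.data:                # hit: refresh recency
            self.data.move_to_end(key)
            return self.data[key][0]
        value = self.store[key]             # miss: fetch from level below
        self._install(key, value, dirty=False)
        return value

    def write(self, key, value):
        if self.policy == "write-through":
            self.store[key] = value         # persist immediately
            self._install(key, value, dirty=False)
        else:
            self._install(key, value, dirty=True)   # persist lazily

    def _install(self, key, value, dirty):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = (value, dirty)
        if len(self.data) > self.capacity:  # evict least-recently used
            old_key, (old_value, old_dirty) = self.data.popitem(last=False)
            if old_dirty:                   # write-back: flush on eviction
                self.store[old_key] = old_value
```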

Relevance: 80.00%

Abstract:

Synchronous machines, widely used in energy generation systems, require constant voltage and frequency to provide good power quality. However, under large load variations it is difficult to keep the outputs at their nominal values due to parametric uncertainties, nonlinearities and coupling among variables. We therefore propose applying the Dual Mode Adaptive Robust Controller (DMARC) in the field flux control loop, replacing the traditional PI controller. The DMARC links a Model Reference Adaptive Controller (MRAC) and a Variable Structure Model Reference Adaptive Controller (VS-MRAC), incorporating the transient performance advantages of the VS-MRAC and the steady-state properties of the MRAC. Simulation results are included to corroborate the theoretical studies.
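
For readers unfamiliar with the MRAC half of the DMARC, the following is a minimal sketch of a Lyapunov-rule model-reference adaptive controller for a first-order plant. The plant, reference model, and gains are hypothetical, and the VS-MRAC switching component that the DMARC blends in is omitted.

```python
# Minimal Lyapunov-rule MRAC sketch for a first-order plant, illustrating
# the model-reference idea the DMARC builds on; the plant is NOT the
# synchronous-machine field-flux loop of the paper.
dt, T, gamma = 1e-3, 20.0, 2.0
a_p, b_p = 1.0, 2.0            # "unknown" plant: dy/dt = -a_p*y + b_p*u
a_m, b_m = 4.0, 4.0            # reference model: dym/dt = -a_m*ym + b_m*r

y = ym = theta_r = theta_y = 0.0
for k in range(int(T / dt)):
    r = 1.0 if (k * dt) % 10.0 < 5.0 else -1.0   # square-wave reference
    u = theta_r * r + theta_y * y                # adjustable control law
    e = y - ym                                   # tracking error
    theta_r += -gamma * e * r * dt               # adaptation laws (b_p > 0)
    theta_y += -gamma * e * y * dt
    y += (-a_p * y + b_p * u) * dt               # Euler integration
    ym += (-a_m * ym + b_m * r) * dt

# Ideal gains: theta_r* = b_m/b_p = 2.0, theta_y* = (a_p - a_m)/b_p = -1.5
print(f"e = {y - ym:.4f}, theta_r = {theta_r:.2f}, theta_y = {theta_y:.2f}")
```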

Relevance: 80.00%

Abstract:

Control of MIMO (Multiple Input Multiple Output) systems is often carried out by several loops of classical controllers that operate under constraints and exhibit poor performance. Adaptive control techniques are an interesting alternative for improving the performance of such systems, for example Model Reference Adaptive Controllers (MRAC), which, when well designed, allow the plant dynamics to be shaped to follow a reference model. This work presents a decoupling strategy for a MIMO system of three coupled tanks and the design of an MRAC controller for it.
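
A minimal sketch of the static-decoupling idea described above: pre-multiplying the plant input by the inverse of its steady-state gain matrix yields nearly independent channels, each of which can then receive its own MRAC loop. The gain matrix below is a hypothetical stand-in, not one identified from the actual three-tank rig.

```python
import numpy as np

# Hypothetical steady-state (DC) gain matrix: inputs -> tank levels.
G0 = np.array([[1.0, 0.4, 0.1],
               [0.4, 1.0, 0.4],
               [0.1, 0.4, 1.0]])

D = np.linalg.inv(G0)       # static decoupler: apply u = D @ v

# With the decoupler in place, the effective DC gain from the new
# inputs v to the outputs is close to identity, i.e. decoupled channels.
print(np.round(G0 @ D, 6))  # -> identity matrix
```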

Relevance: 80.00%

Abstract:

Various unification schemes interpret the complex phenomenology of quasars and luminous active galactic nuclei (AGN) in terms of a simple picture involving a central black hole, an accretion disc and an associated outflow. Here, we continue our tests of this paradigm by comparing quasar spectra to synthetic spectra of biconical disc wind models, produced with our state-of-the-art Monte Carlo radiative transfer code. Previously, we have shown that we could produce synthetic spectra resembling those of observed broad absorption line (BAL) quasars, but only if the X-ray luminosity was limited to 10⁴³ erg s⁻¹. Here, we introduce a simple treatment of clumping, and find that a filling factor of ~0.01 moderates the ionization state sufficiently for BAL features to form in the rest-frame UV at more realistic X-ray luminosities. Our fiducial model shows good agreement with AGN X-ray properties, and the wind produces strong line emission in, e.g., Lyα and C IV 1550 Å at low inclinations. At high inclinations, the spectra possess prominent LoBAL features. Despite these successes, we cannot reproduce all emission lines seen in quasar spectra with the correct equivalent-width ratios, and we find an angular dependence of emission-line equivalent width despite the similarities in the observed emission-line properties of BAL and non-BAL quasars. Overall, our work suggests that biconical winds can reproduce much of the qualitative behaviour expected from a unified model, but we cannot yet provide quantitative matches with quasar properties at all viewing angles. Whether disc winds can successfully unify quasars is therefore still an open question.
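
A back-of-envelope sketch of why clumping moderates the ionization state: with volume filling factor f, the same wind mass occupies a fraction f of the volume, so the in-clump density rises by 1/f and the ionization parameter ξ = L_X / (n r²) falls by a factor f. The numbers below are illustrative placeholders, not the model's actual parameters.

```python
# Illustrative ionization-parameter arithmetic for a clumped wind.
L_X = 1e45            # X-ray luminosity, erg/s (realistic quasar value)
r = 1e18              # distance from the source, cm (assumed)
n_smooth = 1e8        # smooth-wind electron density, cm^-3 (assumed)
f = 0.01              # volume filling factor, as in the fiducial model

xi_smooth = L_X / (n_smooth * r**2)
xi_clumped = L_X / ((n_smooth / f) * r**2)   # clumps are denser by 1/f

print(f"smooth wind:  xi = {xi_smooth:.2f} erg cm s^-1")
print(f"clumped wind: xi = {xi_clumped:.2f} erg cm s^-1  (lower by 1/f = 100x)")
```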

Relevance: 80.00%

Abstract:

In this work, the existing understanding of flame spread dynamics is enhanced through an extensive study of the heat transfer from flames spreading vertically upwards across 5 cm wide, 20 cm tall samples of extruded poly(methyl methacrylate) (PMMA). These experiments have provided highly spatially resolved measurements of flame-to-surface heat flux and material burning rate at the critical length scale of interest, with a level of accuracy and detail unmatched by previous empirical or computational studies. Using these measurements, a wall flame model was developed that describes a flame's heat feedback profile (both in the continuous flame region and the thermal plume above) solely as a function of material burning rate. Additional experiments were conducted to measure flame heat flux and sample mass loss rate as flames spread vertically upwards over the surface of seven other commonly used polymers, two of which are glass-reinforced composite materials. Using these measurements, our wall flame model has been generalized such that it can predict heat feedback from flames supported by a wide range of materials. For the seven materials tested here, which present a varied range of burning behaviors including dripping, polymer melt flow, sample burnout, and heavy soot formation, model-predicted flame heat flux has been shown to match experimental measurements (taken across the full length of the flame) with an average accuracy of 3.9 kW m⁻² (approximately 10-15% of peak measured flame heat flux). This flame model has since been coupled with a powerful solid-phase pyrolysis solver, ThermaKin2D, which computes the transient rate of gaseous fuel production of constituents of a pyrolyzing solid in response to an external heat flux, based on fundamental physical and chemical properties. Together, this unified model captures the two fundamental controlling mechanisms of upward flame spread: gas-phase flame heat transfer and solid-phase material degradation. This has enabled simulations of flame spread dynamics with a reasonable computational cost and accuracy beyond that of current models. This unified model of material degradation provides the framework to quantitatively study material burning behavior in response to a wide range of common fire scenarios.
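
The following sketch shows the general shape of a wall-flame heat-feedback profile of the kind described above: roughly uniform flux over the continuous flame region and a power-law decay in the thermal plume, with flame height tied to the heat release rate per unit width. All constants are illustrative placeholders, not the paper's fitted values.

```python
import numpy as np

def flame_heat_flux(x, q_line, q_peak=30.0, c_f=0.052, decay=-2.4):
    """Sketch of a wall-flame heat-feedback profile: uniform flux over the
    continuous flame region, power-law decay in the thermal plume above.
    x      : height above the base of the burning region, m
    q_line : heat release rate per unit width, kW/m
    Constants q_peak, c_f and decay are illustrative, not fitted values."""
    x_f = c_f * q_line ** (2.0 / 3.0)             # flame-height correlation
    x = np.asarray(x, dtype=float)
    return np.where(x <= x_f,
                    q_peak,                        # continuous flame region
                    q_peak * (x / x_f) ** decay)   # thermal plume decay

heights = np.linspace(0.1, 2.0, 6)                # sample heights, m
print(np.round(flame_heat_flux(heights, q_line=50.0), 2))
```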

Relevance: 80.00%

Abstract:

We present a new radiation scheme for the Oxford Planetary Unified Model System for Venus, suitable for the solar and thermal bands. This new and fast radiative parameterization takes a different approach in the two main radiative wavelength bands: solar radiation (0.1-5.5 μm) and thermal radiation (1.7-260 μm). The solar radiation calculation is based on the delta-Eddington approximation (a two-stream-type method) with an adding-layer method. For the thermal radiation case, a code based on an absorptivity/emissivity formulation is used. The new radiative transfer formulation is intended to be computationally light, to allow its incorporation in 3D global circulation models, while still allowing for the calculation of the effect of atmospheric conditions on radiative fluxes. This will allow us to investigate dynamical-radiative-microphysical feedbacks. The model's flexibility can also be used to explore uncertainties in the Venus atmosphere, such as the optical properties of the deep atmosphere or the cloud amount. Results for radiative cooling and heating rates and for the global-mean radiative-convective equilibrium temperature profiles under different atmospheric conditions are presented and discussed. This new scheme works on an atmospheric column and can be easily implemented in 3D Venus global circulation models.
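
For reference, the delta-Eddington approach mentioned above rests on the standard scaling of Joseph, Wiscombe and Weinman (1976), which folds the strong forward peak of the phase function into the direct beam before applying a two-stream solver. A minimal sketch, with illustrative Venus-like inputs:

```python
def delta_eddington_scale(tau, omega, g):
    """Standard delta-Eddington scaling: the forward-peak fraction f = g^2
    is removed from the scattering, so a two-stream solver sees scaled
    optical depth, single-scattering albedo and asymmetry parameter."""
    f = g * g                                     # forward-peak fraction
    tau_s = (1.0 - omega * f) * tau               # scaled optical depth
    omega_s = (1.0 - f) * omega / (1.0 - omega * f)
    g_s = g / (1.0 + g)                           # scaled asymmetry parameter
    return tau_s, omega_s, g_s

# Example: a strongly forward-scattering cloud layer (illustrative values).
print(delta_eddington_scale(tau=10.0, omega=0.99, g=0.85))
```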

Relevance: 80.00%

Abstract:

This research addresses the question of punishment from the perspective of national constitutional law integrated with that of the European law of human rights. Part I defends the thesis that the transformation of the penal Constitution set in motion under the influence of ECtHR case law represents, overall, an advance in the process of constitutionalization of punitive power. This conclusion is supported through a comparison of classical constitutional philosophy on punishment with the different interpretive approaches to the penal Constitution developed during the twentieth century (the traditional, constitutionalist, and ECHR approaches). Part II defends the thesis that, despite the positive effects of supranational harmonization, the constitutional status of punishment should nevertheless remain formally autonomous from ECHR law. Not only is no paradigm of inter-order relations developed so far able to justify its total integration, but such integration would also risk weakening the normativity of the social dimension of the penal Constitution, which is already under-constitutionalized compared with its liberal dimension. The Conclusion then develops the fundamental elements of an alternative interpretive approach to the penal Constitution, one that responds better than the existing approaches to the need both to guarantee the fullest constitutionalization of punishment and to facilitate supranational integration. On such an approach, constitutionally grounded, substantivist, rights-based, and inclusive of all the constituent ideologies, the Constitution could be read as providing a unified regulatory model for all forms of exercise of punitive power (except disciplinary power, which is institutionally distinguishable), characterized by: a statutory reservation (riserva di legge) of variable intensity; strict scrutiny by the Court of the constitutional justifiability of punishment; an extension of the scope of application of the principles of culpability and rehabilitation; and a full development of the collective-guarantee aspects both of the classic constitutional principles of criminal law (duties of criminal-law protection and the guarantee that punishment actually falls on the culpable person) and of those derivable from Article 3 of the Constitution (proportionality of punishment to the material conditions of the person punished).

Relevance: 50.00%

Abstract:

Theory building is one of the most crucial challenges faced by basic, clinical, and population research, which form the scientific foundations of health practices in contemporary societies. The objective of this study is to propose a Unified Theory of Health-Disease as a conceptual tool for modeling health-disease-care in the light of complexity approaches. With this aim, the epistemological basis of theoretical work in the health field and the concepts of complexity theory relevant to health problems are discussed. Next, the concepts of model-object, multi-planes of occurrence, modes of health, and the disease-illness-sickness complex are introduced and integrated into a unified theoretical framework. Finally, in the light of recent epistemological developments, the concept of Health-Disease-Care Integrals is updated as a complex reference object fit for modeling health-related processes and phenomena.

Relevance: 50.00%

Abstract:

We address the problem of 3D-assisted 2D face recognition in scenarios where the input image is subject to degradations or exhibits intra-personal variations not captured by the 3D model. The proposed solution involves a novel approach to learning a subspace spanned by perturbations caused by the missing modes of variation and image degradations, using 3D face data reconstructed from 2D images rather than 3D capture. This is accomplished by modelling the difference in the texture map between the 3D-aligned input and reference images. A training set of these texture maps then defines a perturbation space, which can be represented using PCA bases. Assuming that the image perturbation subspace is orthogonal to the 3D face model space, these additive components can be recovered from an unseen input image, resulting in an improved fit of the 3D face model. The linearity of the model leads to efficient fitting. Experiments show that our method achieves very competitive face recognition performance on the Multi-PIE and AR databases. We also present baseline face recognition results on a new data set exhibiting combined pose and illumination variations as well as occlusion.
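
A minimal sketch of the perturbation-subspace idea: PCA bases are learned from differences of 3D-aligned texture maps, and the additive perturbation in an unseen image is estimated by projection onto those bases. Dimensions and data below are synthetic placeholders, not the Multi-PIE or AR experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_train, k = 4096, 200, 16          # texture-map dim, training pairs, bases

# Stand-in for differences between 3D-aligned input and reference texture maps.
diffs = rng.normal(size=(n_train, d))
mean = diffs.mean(axis=0)

# PCA via SVD of the centered difference maps; rows of Vt are the bases.
U, S, Vt = np.linalg.svd(diffs - mean, full_matrices=False)
B = Vt[:k]                             # (k, d) perturbation-space basis

def remove_perturbation(texture):
    """Estimate and subtract the perturbation component of an unseen texture,
    assuming the perturbation subspace is ~orthogonal to the model space."""
    centered = texture - mean
    p = B.T @ (B @ centered)           # projection onto the learned subspace
    return texture - p

cleaned = remove_perturbation(rng.normal(size=d))
```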

Relevance: 40.00%

Abstract:

Large (>1600 μm), ingestively masticated particles of bermuda grass (Cynodon dactylon L. Pers.) leaf and stem, labelled with Yb-169 and Ce-144 respectively, were inserted into the rumen digesta raft of heifers grazing bermuda grass. The concentration of markers in digesta sampled from the raft and the ventral rumen was monitored at regular intervals over approximately 144 h. The data from the two sampling sites were simultaneously fitted to two-pool (raft and ventral rumen-reticulum) models with either reversible or sequential flow between the two pools. The sequential flow model fitted the data as well as the reversible flow model, but the reversible flow model was preferred because of its greater applicability. The reversible flow model, hereafter called the raft model, had the following features: a relatively slow age-dependent transfer rate from the raft (means for a gamma-2 distributed rate parameter: leaf 0.0740 v. stem 0.0478 h⁻¹), a very slow first-order reversible flow from the ventral rumen to the raft (mean for leaf and stem 0.010 h⁻¹), and a very rapid first-order exit from the ventral rumen (mean for leaf and stem 0.44 h⁻¹). The raft was calculated to occupy approximately 0.82 of the total rumen DM of the raft and ventral rumen pools. Fitting a sequential two-pool model or a single exponential model individually to values from each of the two sampling sites yielded similar parameter values for both sites and faster rate parameters for leaf than for stem, in agreement with the raft model. These results were interpreted as indicating that the raft forms a large, relatively inert pool within the rumen. Particles generated within the raft have difficulty escaping, but once in the ventral rumen pool they escape quickly, with a low probability of return to the raft. It was concluded that the raft model gave a good interpretation of the data and emphasized escape from, and movement within, the raft as important components of the residence time of leaf and stem particles within the rumen digesta of cattle.
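
A minimal sketch of the raft model as described: the gamma-2 age-dependent escape is represented as two first-order sub-compartments in series (an Erlang-2 residence time), with slow reversible return from the ventral rumen and rapid first-order exit. The rates follow the leaf estimates quoted above; routing the return flow into the first sub-pool is a simplifying assumption of this sketch.

```python
import numpy as np
from scipy.integrate import solve_ivp

k_raft = 0.0740     # gamma-2 rate parameter, raft escape (h^-1, leaf)
k_back = 0.010      # ventral rumen -> raft reversible flow (h^-1)
k_exit = 0.44       # ventral rumen exit (h^-1)

def rhs(t, y):
    # r1, r2: raft sub-pools (Erlang-2 chain); v: ventral rumen pool.
    r1, r2, v = y
    return [-k_raft * r1 + k_back * v,            # return routed to r1 (assumed)
            k_raft * r1 - k_raft * r2,
            k_raft * r2 - (k_exit + k_back) * v]

# Unit marker dose placed in the raft, tracked over the ~144 h of sampling.
sol = solve_ivp(rhs, (0.0, 144.0), [1.0, 0.0, 0.0],
                t_eval=np.linspace(0.0, 144.0, 7))
print(np.round(sol.y[0] + sol.y[1], 4))   # marker remaining in the raft
print(np.round(sol.y[2], 4))              # marker in the ventral rumen
```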