639 results for redundancy


Relevance: 10.00%

Abstract:

Polycomb-like proteins 1-3 (PCL1-3) are substoichiometric components of the Polycomb-repressive complex 2 (PRC2) that are essential for association of the complex with chromatin. However, it remains unclear why three proteins with such apparent functional redundancy exist in mammals. Here we characterize their divergent roles in both positively and negatively regulating cellular proliferation. We show that while PCL2 and PCL3 are E2F-regulated genes expressed in proliferating cells, PCL1 is a p53 target gene predominantly expressed in quiescent cells. Ectopic expression of any PCL protein recruits PRC2 to repress the INK4A gene; however, only PCL2 and PCL3 confer an INK4A-dependent proliferative advantage. Remarkably, PCL1 has evolved a PRC2- and chromatin-independent function to negatively regulate proliferation. We show that PCL1 binds to and stabilizes p53 to induce cellular quiescence. Moreover, depletion of PCL1 phenocopies the defects in maintaining cellular quiescence associated with p53 loss. This newly evolved function is achieved by the binding of the PCL1 N-terminal PHD domain to the C-terminal domain of p53 through two unique serine residues, which were acquired during recent vertebrate evolution. This study illustrates the functional bifurcation of PCL proteins, which act in both a chromatin-dependent and a chromatin-independent manner to regulate the INK4A and p53 pathways.

Relevance: 10.00%

Abstract:

Researchers have proposed 1-factor, 2-factor, and bifactor solutions for the 12-item Consideration of Future Consequences Scale (CFCS-12). In order to overcome some measurement problems and to create a robust and conceptually useful two-factor scale, the CFCS-12 was recently modified to include two new items, becoming the CFCS-14. Using a university sample, we tested four competing models for the CFCS-14: (a) a 12-item unidimensional model, (b) a model with two uncorrelated factors (CFC-Immediate and CFC-Future), (c) a model with two correlated factors (CFC-I and CFC-F), and (d) a bifactor model. Results suggested that the addition of the two new items has strengthened the viability of a two-factor solution for the CFCS-14. Results of linear regression models suggest that the CFC-F factor is redundant. Further studies using alcohol and mental health indicators are required to test this redundancy.
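The redundancy claim above rests on an incremental-validity check. A minimal sketch of such a check, assuming hypothetical factor scores and an outcome variable (the file and column names are illustrative, not the study's data):

```python
# Hypothetical sketch of an incremental-validity check: does CFC-Future add
# explanatory power over CFC-Immediate for some outcome?
# Column names (cfc_i, cfc_f, outcome) and the CSV file are illustrative.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cfcs14_scores.csv")  # hypothetical file of factor scores

reduced = smf.ols("outcome ~ cfc_i", data=df).fit()
full = smf.ols("outcome ~ cfc_i + cfc_f", data=df).fit()

# If the F-test for the added term is non-significant and the change in R^2
# is negligible, the CFC-F factor is redundant for this outcome.
print(full.compare_f_test(reduced))       # (F statistic, p-value, df diff)
print(full.rsquared - reduced.rsquared)   # incremental R^2
```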

Relevance: 10.00%

Abstract:

Direct-Sequence Spread-Spectrum signals exhibit cyclostationary properties that imply redundancy between frequency components spaced by multiples of the symbol rate. This thesis presents a multiuser interference canceller (frequency-shift canceller, FSC) that takes advantage of this property. This linear canceller operates in the frequency domain on the spread signal so as to minimize the interference and noise at the output (Minimum Mean Squared Error criterion). Besides the single-antenna case, the performance of multiple-antenna configurations is evaluated for beamforming and spatially uncorrelated channels, considering synchronous systems and systems with time misalignment of the channel profiles (both UMTS-TDD). These configurations differ in the ordering of temporal combining, spatial combining and multiuser detection. The FSC configurations were also evaluated when concatenated with the PIC-2D. Simulation results show considerable improvements relative to the conventional RAKE-2D and PIC-2D. Performance close to the single-user RAKE was achieved when the FSC was evaluated concatenated with the PIC-2D in almost all configurations. All configurations were evaluated with QPSK, 8-PSK and 16-QAM modulation. Turbo coding was introduced, and the situations in which using the FSC before the PIC-2D is advantageous were identified. The 8-PSK and 16-QAM modulations were also tested with coding.
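As a rough illustration of the MMSE combining at the heart of the FSC, the sketch below forms Wiener weights across a stack of frequency-shifted replicas of a desired signal. The signal model is a simplified stand-in, not the UMTS-TDD simulation chain of the thesis:

```python
# Minimal sketch of the MMSE idea behind the FSC: frequency components spaced
# by multiples of the symbol rate carry correlated copies of the desired
# signal, so a linear combiner w = R^{-1} p minimizes interference-plus-noise
# at the output. The replica model below is illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_replicas, n_obs = 4, 10_000

desired = rng.choice([-1.0, 1.0], size=n_obs)              # desired symbols
gains = rng.normal(size=n_replicas) + 1j * rng.normal(size=n_replicas)
interference = 0.8 * (rng.normal(size=(n_replicas, n_obs))
                      + 1j * rng.normal(size=(n_replicas, n_obs)))
noise = 0.3 * (rng.normal(size=(n_replicas, n_obs))
               + 1j * rng.normal(size=(n_replicas, n_obs)))
y = gains[:, None] * desired + interference + noise        # stacked replicas

R = y @ y.conj().T / n_obs        # covariance of the replicas
p = y @ desired / n_obs           # cross-correlation with the desired signal
w = np.linalg.solve(R, p)         # MMSE (Wiener) combining weights

estimate = w.conj() @ y
print("output MSE:", np.mean(np.abs(estimate - desired) ** 2))
```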

Relevance: 10.00%

Abstract:

Data compression is the computing technique that aims to reduce the size of information in order to minimize the required storage space and to speed up data transmission over bandwidth-limited networks. Several compression techniques, such as LZ77 and its variants, suffer from a problem that we call the redundancy caused by the multiplicity of encodings. The multiplicity of encodings (ME) means that the source data can be encoded in different ways. In its simplest form, ME occurs when a compression technique has the possibility, during the encoding process, of coding a symbol in different ways. The bit-recycling compression technique was introduced by D. Dubé and V. Beaudoin to minimize the redundancy caused by ME. Variants of bit recycling have been applied to LZ77, and the experimental results lead to better compression (a reduction of about 9% in the size of files compressed by Gzip, obtained by exploiting ME). Dubé and Beaudoin pointed out that their technique may not perfectly minimize the redundancy caused by ME, because it is built on Huffman coding, which cannot handle codewords of fractional length; that is, it only generates codewords of integral length. Moreover, Huffman-based bit recycling (HuBR) imposes additional constraints to avoid certain situations that degrade its performance. Unlike Huffman codes, arithmetic coding (AC) can handle codewords of fractional length. Furthermore, over the last decades arithmetic codes have attracted many researchers, since they are more powerful and more flexible than Huffman codes. Consequently, this work aims to adapt bit recycling to arithmetic codes in order to improve coding efficiency and flexibility. We have addressed this problem through our four (published) contributions, which are presented in this thesis and can be summarized as follows. First, we propose a new technique for adapting Huffman-based bit recycling (HuBR) to arithmetic coding, named arithmetic-coding-based bit recycling (ACBR). It describes the framework and the principles of the adaptation of HuBR to ACBR. We also present the theoretical analysis needed to estimate the redundancy that can be removed by HuBR and ACBR in applications that suffer from ME. This analysis shows that ACBR achieves perfect recycling in all cases, whereas HuBR does so only in very specific cases. Second, the problem with the ACBR technique just described is that it requires arbitrary-precision arithmetic, and therefore unbounded (or infinite) resources. To make it usable, we propose a new finite-precision version. The technique thus becomes efficient and applicable on computers with conventional fixed-size registers, and can easily be interfaced with applications that suffer from ME. Third, we propose the use of HuBR and ACBR as a means of reducing redundancy in order to obtain a variable-to-fixed binary code.
We have proved theoretically and experimentally that both techniques yield a significant improvement (less redundancy). In this respect, ACBR outperforms HuBR and covers a wider class of binary sources that can benefit from a plurally parsable dictionary. We also show that ACBR is more flexible than HuBR in practice. Fourth, we use HuBR to reduce the redundancy of the balanced codes generated by Knuth's algorithm. To compare the performance of HuBR and ACBR, the corresponding theoretical results for both techniques are presented. The results show that the two techniques achieve almost the same redundancy reduction on the balanced codes generated by Knuth's algorithm.
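To make the notion of multiplicity of encodings concrete, the toy enumerator below counts the distinct LZ77-style parsings of a short string, where each position may be coded either as a literal or as one of several back-references. It is purely illustrative and unrelated to the thesis implementation:

```python
# Toy illustration of the multiplicity of encodings (ME): an LZ77-style
# encoder can often represent the upcoming text either as a literal or as
# one of several (offset, length) matches, so the same input admits many
# distinct token sequences.
def parsings(data: str, pos: int = 0, window: int = 16):
    if pos == len(data):
        yield []
        return
    # Option 1: emit a literal for the current symbol.
    for rest in parsings(data, pos + 1, window):
        yield [("lit", data[pos])] + rest
    # Option 2: emit any back-reference that reproduces the upcoming text.
    start = max(0, pos - window)
    for off in range(start, pos):
        length = 0
        while (pos + length < len(data)
               and data[off + length] == data[pos + length]):
            length += 1
        for l in range(2, length + 1):          # matches of length >= 2
            for rest in parsings(data, pos + l, window):
                yield [("match", pos - off, l)] + rest

encodings = list(parsings("abababa"))
print(len(encodings), "distinct encodings, e.g.:", encodings[-1])
```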

Relevance: 10.00%

Abstract:

Doctoral thesis, Informatics (Computer Science), Universidade de Lisboa, Faculdade de Ciências, 2015

Relevance: 10.00%

Abstract:

Doctoral thesis, Informatics (Computer Science), Universidade de Lisboa, Faculdade de Ciências, 2015

Relevance: 10.00%

Abstract:

Face recognition from images or video footage requires a certain level of recorded image quality. This paper derives acceptable bitrates (relating to levels of compression and consequently quality) of footage with human faces, using an industry implementation of the standard H.264/MPEG-4 AVC and the Closed-Circuit Television (CCTV) recording systems on London buses. The London buses application is utilized as a case study for setting up a methodology and implementing suitable data analysis for face recognition from recorded footage, which has been degraded by compression. The majority of CCTV recorders on buses use a proprietary format based on the H.264/MPEG-4 AVC video coding standard, exploiting both spatial and temporal redundancy. Low bitrates are favored in the CCTV industry for saving storage and transmission bandwidth, but they compromise the usefulness of the recorded imagery. In this context, usefulness is determined by the presence of enough facial information remaining in the compressed image to allow a specialist to recognize a person. The investigation includes four steps: (1) development of a video dataset representative of typical CCTV bus scenarios; (2) selection and grouping of video scenes based on local (facial) and global (entire scene) content properties; (3) psychophysical investigations to identify the key scenes most affected by compression, using an industry implementation of H.264/MPEG-4 AVC; (4) testing of CCTV recording systems on buses with the key scenes and further psychophysical investigations. The results showed a dependency upon scene content properties. Very dark scenes and scenes with high levels of spatial-temporal busyness were the most challenging to compress, requiring higher bitrates to maintain useful information.
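A minimal sketch of how a test scene could be re-encoded at a ladder of H.264 bitrates for this kind of psychophysical evaluation, using the ffmpeg command-line tool with libx264 (file names and bitrate values are illustrative; the study used an industry implementation of the codec, not necessarily libx264):

```python
# Re-encode a test scene at several H.264 bitrates for quality testing.
# The source file and bitrate ladder are illustrative placeholders.
import subprocess

SOURCE = "bus_scene.avi"                    # hypothetical uncompressed scene
BITRATES_KBPS = [64, 128, 256, 512, 1024]   # candidate bitrate ladder

for kbps in BITRATES_KBPS:
    out = f"bus_scene_{kbps}k.mp4"
    subprocess.run(
        ["ffmpeg", "-y", "-i", SOURCE,
         "-c:v", "libx264", "-b:v", f"{kbps}k",
         "-an", out],                        # drop audio; CCTV footage is video-only here
        check=True,
    )
    print("wrote", out)
```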

Relevance: 10.00%

Abstract:

Master's final project submitted to obtain the degree of Master in Mechanical Engineering

Relevance: 10.00%

Abstract:

Kinematic redundancy occurs when a manipulator possesses more degrees of freedom than those required to execute a given task. Several kinematic techniques for redundant manipulators control the gripper through the pseudo-inverse of the Jacobian, but lead to a kind of chaotic inner motion with unpredictable arm configurations. Such algorithms are not easy to adapt to optimization schemes and, moreover, there are often multiple optimization objectives that can conflict with one another. Unlike single-objective optimization, where one attempts to find the best solution, in multi-objective optimization there is no single solution that is optimal with respect to all indices. Therefore, trajectory planning of redundant robots remains an important area of research, and more efficient optimization algorithms are needed. This paper presents a new technique to solve the inverse kinematics of redundant manipulators, using a multi-objective genetic algorithm. The scheme combines the closed-loop pseudo-inverse method with a multi-objective genetic algorithm to control the joint positions. Simulations are developed for manipulators with three and four rotational joints, considering the optimization of two objectives in workspaces with and without obstacles. The results reveal that it is possible to choose among several solutions from the Pareto-optimal front according to the importance of each individual objective.
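A minimal sketch of the closed-loop pseudo-inverse step that underlies the scheme, for a planar three-joint arm; the multi-objective genetic layer that selects among redundant configurations is omitted, and the link lengths, gain and target are illustrative:

```python
# Closed-loop pseudo-inverse inverse kinematics for a planar 3R arm.
import numpy as np

L = np.array([1.0, 0.8, 0.5])                       # illustrative link lengths

def fkine(q):
    """End-effector position of the planar 3R arm."""
    s = np.cumsum(q)
    return np.array([np.sum(L * np.cos(s)), np.sum(L * np.sin(s))])

def jacobian(q):
    """2x3 task Jacobian of the planar 3R arm."""
    s = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(s[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(s[i:]))
    return J

def cl_pinv_step(q, x_des, gain=0.5):
    """One closed-loop pseudo-inverse update on the task-space error."""
    e = x_des - fkine(q)
    return q + gain * np.linalg.pinv(jacobian(q)) @ e

q = np.array([0.1, 0.2, 0.3])
target = np.array([1.2, 1.0])
for _ in range(200):
    q = cl_pinv_step(q, target)
print("position error:", np.linalg.norm(target - fkine(q)))
```

Because the arm has three joints for a two-dimensional task, infinitely many joint configurations reach the same target; the genetic algorithm in the paper is what chooses among them according to the competing objectives.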

Relevance: 10.00%

Abstract:

Under pseudo-inverse control, robots with kinematic redundancy exhibit an undesirable chaotic joint motion which leads to erratic behavior. This paper studies the complexity of the fractional dynamics of the chaotic response. Fourier and wavelet analysis provides a deeper insight, helpful for better understanding the lack-of-repeatability problem of redundant manipulators. This perspective on the chaotic phenomena will permit the development of superior trajectory control algorithms.
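A sketch of the kind of spectral inspection mentioned above, assuming a recorded joint-angle trajectory stored in a (hypothetical) NumPy file; a repeatable motion concentrates energy at the path frequency and its harmonics, whereas chaotic inner motion spreads it over a broad band:

```python
# Fourier inspection of a joint signal recorded while the end-effector
# repeats a closed path under pseudo-inverse control.
# The trajectory file and sampling period are hypothetical stand-ins.
import numpy as np

dt = 0.01                                   # sampling period [s]
q1 = np.load("joint1_trajectory.npy")       # hypothetical recorded joint angle

spectrum = np.abs(np.fft.rfft(q1 - q1.mean()))
freqs = np.fft.rfftfreq(q1.size, d=dt)

# A repeatable motion shows sharp spectral lines; chaotic inner motion
# produces a broadband spectrum with slow drift at low frequencies.
dominant = freqs[np.argmax(spectrum)]
print(f"dominant frequency: {dominant:.2f} Hz")
```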

Relevance: 10.00%

Abstract:

To boost logic density and reduce per-unit power consumption, SRAM-based FPGA manufacturers have adopted nanometric technologies. However, this technology is highly vulnerable to radiation-induced faults, which affect values stored in memory cells, and to manufacturing imperfections. Fault-tolerant implementations, based on Triple Modular Redundancy (TMR) infrastructures, help to keep the circuit operating correctly. However, TMR alone is not sufficient to guarantee the safe operation of a circuit. Other issues, such as module placement, the effects of multi-bit upsets (MBU) and fault accumulation, also have to be addressed. When a fault occurs, the correct operation of the affected module must be restored and/or the current state of the circuit coherently re-established. This paper presents a solution that enables the autonomous restoration of the functional definition of the affected module, avoiding fault accumulation and re-establishing the correct circuit state in real time, while keeping the circuit in normal operation.
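A behavioural toy model of the TMR principle discussed here, showing that a bitwise majority voter masks a single upset but is defeated once faults accumulate in a second replica (a Python stand-in, not an FPGA design):

```python
# TMR in miniature: three replicas of a word feed a bitwise majority voter.
# One upset per bit position is masked; an accumulated second upset in the
# same position defeats the voter, which is why repair must be autonomous.
def majority(a: int, b: int, c: int) -> int:
    return (a & b) | (b & c) | (a & c)

golden = 0b10110010
r1 = r2 = r3 = golden

r1 ^= 0b00000100                       # single-event upset in replica 1
assert majority(r1, r2, r3) == golden  # single fault is masked

r2 ^= 0b00000100                       # accumulated fault in the same bit
assert majority(r1, r2, r3) != golden  # two faulty replicas defeat the voter
print("TMR masks one fault per bit; accumulated faults must be repaired")
```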

Relevance: 10.00%

Abstract:

To increase the amount of logic available in SRAM-based FPGAs, manufacturers are using nanometric technologies to boost logic density and reduce prices. However, nanometric scales are highly vulnerable to radiation-induced faults that affect values stored in memory cells. Since the functional definition of FPGAs relies on memory cells, they become highly prone to this type of fault. Fault-tolerant implementations, based on triple modular redundancy (TMR) infrastructures, help to keep the circuit operating correctly. However, TMR is not sufficient to guarantee the safe operation of a circuit. Other issues, such as the effects of multi-bit upsets (MBU) and fault accumulation, also have to be addressed. Furthermore, when a fault occurs, the correct operation of the affected module must be restored and the current state of the circuit coherently re-established. This paper presents a solution that enables the autonomous restoration of the functional definition of the affected module, avoiding fault accumulation and re-establishing the correct circuit state in real time, while keeping the circuit in normal operation.

Relevance: 10.00%

Abstract:

The new generations of SRAM-based FPGA (field-programmable gate array) devices are the preferred choice for the implementation of reconfigurable computing platforms intended to accelerate processing in real-time systems. However, the vulnerability of FPGAs to hard and soft errors is a major weakness for robust configurable system design. In this paper, a novel built-in self-healing (BISH) methodology, based on run-time self-reconfiguration, is proposed. A soft microprocessor core implemented in the FPGA is responsible for the management and execution of all the BISH procedures. Fault detection and diagnosis is followed by repair actions, taking advantage of the dynamic reconfiguration features offered by new FPGA families. Meanwhile, modular redundancy ensures that the system continues to work correctly.

Relevance: 10.00%

Abstract:

Dissertation submitted to obtain the degree of Master in Electrical Engineering

Relevance: 10.00%

Abstract:

The goal of the present work was to assess the feasibility of using a pseudo-inverse and null-space optimization approach in the modeling of shoulder biomechanics. The method was applied to a simplified musculoskeletal shoulder model. The mechanical system consisted of the arm, and the external forces were the arm weight, 6 scapulo-humeral muscles and the reaction at the glenohumeral joint, which was considered as a spherical joint. Muscle wrapping was considered around the humeral head, which was assumed to be spherical. The dynamic equations were solved in a Lagrangian approach. The mathematical redundancy of the mechanical system was resolved in two steps: a pseudo-inverse optimization to minimize the square of the muscle stress, and a null-space optimization to restrict the muscle forces to physiological limits. Several movements were simulated. The mathematical and numerical aspects of the constrained redundancy problem were efficiently solved by the proposed method. The predicted muscle moment arms were consistent with cadaveric measurements, and the joint reaction force was consistent with in vivo measurements. This preliminary work demonstrated that the developed algorithm has great potential for more complex musculoskeletal modeling of the shoulder joint. In particular, it could be further applied to a non-spherical joint model, allowing for the natural translation of the humeral head in the glenoid fossa.
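A minimal numerical sketch of the two-step pseudo-inverse / null-space idea, on a made-up moment-arm matrix with six muscles and three torque equations; all numbers are illustrative, and the second step is a simplified stand-in for the null-space optimization of the paper:

```python
# (1) A stress-weighted pseudo-inverse gives the minimum-squared-stress
#     muscle forces satisfying the torque balance A f = tau.
# (2) Corrections projected onto the null space of A nudge the forces toward
#     physiological limits without disturbing the joint torques.
# The moment arms, torques, PCSA values and stress limit are illustrative.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 6))            # moment arms: 3 torque eqs, 6 muscles
tau = np.array([4.0, -2.0, 1.0])       # required joint torques
pcsa = np.array([4.0, 6.0, 3.0, 5.0, 2.0, 4.0])   # cm^2, sets stress weighting
f_max = 40.0 * pcsa                    # rough maximum force per muscle

# Step 1: minimize the squared muscle stresses (f_i / pcsa_i)^2 s.t. A f = tau.
P = np.diag(pcsa)
f = P @ np.linalg.pinv(A @ P) @ tau

# Step 2: null-space corrections toward the bounds 0 <= f <= f_max.
N = np.eye(6) - np.linalg.pinv(A) @ A  # projector onto the null space of A
for _ in range(50):
    f = f + N @ (np.clip(f, 0.0, f_max) - f)

print("torque residual:", np.linalg.norm(A @ f - tau))   # stays ~0
print("force range [N]: %.1f to %.1f" % (f.min(), f.max()))
```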