934 results for Computer simulation
Abstract:
The purpose of this Final Degree Project is the acoustic and electroacoustic study of the production of the musical "Hoy no me puedo levantar" at the Teatro Rialto in Madrid in 2005. First, a brief historical introduction is given, citing the theatre's refurbishments and describing the current state of the venue. The sound equipment used in the show is then analyzed through each of the sound control positions: FOH (Front of House), monitors, and wireless microphones. The main functions of each position and the systems that make it up are explained, and the use of the soundproof booths is described. Next, the electroacoustic systems used in the sound design of the musical are detailed, divided into the following parts: main system, reinforcements and delays, effects, and monitors. The RMS (Remote Monitoring System) software, which provides real-time information on the operation of these systems, is also described. The equipment, procedure, and results of the in situ measurements in the theatre are then presented, applying the UNE-EN ISO 3382-2/2008 standard to obtain the reverberation time and background noise. To set up the computer simulation, the original AutoCAD drawings are first exported to EASE 4.4, where the modeling of the venue is completed. Materials, audience areas, and listening points are then assigned, and the electroacoustic systems are placed. The reverberation time obtained in the in situ measurement is matched using materials from the software's own database. The electroacoustic systems are also adjusted within the venue to obtain the equalization used and the direct and total sound pressure levels at different frequencies. Once these steps are complete, psychoacoustic studies are carried out to check for possible echoes and the precedence effect (using electronic delays). Finally, intelligibility studies are performed, covering Speech Clarity (C50) and Musical Clarity (C80), the Speech Intelligibility Index (SII), the Articulation Loss of Consonants (Alcons), and the Speech Transmission Index (STI). The project closes with the budget for the project and the rental of the musical's sound equipment, followed by the conclusions of the Final Degree Project.
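The clarity metrics mentioned above have standard early-to-late energy definitions (ISO 3382). As an illustration only, the following Python sketch computes C50 and C80 from an impulse response; the synthetic impulse response, sampling rate, and decay constant are assumptions for the example, not data from the project.

```python
import numpy as np

def clarity(ir: np.ndarray, fs: int, t_ms: float) -> float:
    """Early-to-late energy ratio C_t = 10*log10(early/late), per ISO 3382.

    ir   : impulse response samples, assumed to start at the direct sound
    fs   : sampling rate in Hz
    t_ms : split time in milliseconds (50 for C50 speech, 80 for C80 music)
    """
    split = int(round(fs * t_ms / 1000.0))
    energy = ir.astype(float) ** 2
    early = energy[:split].sum()
    late = energy[split:].sum()
    return 10.0 * np.log10(early / late)

# Synthetic exponentially decaying impulse response (stands in for a measured one).
fs = 48000
t = np.arange(int(0.8 * fs)) / fs
rng = np.random.default_rng(0)
ir = rng.standard_normal(t.size) * np.exp(-t / 0.25)  # roughly a 1.7 s reverberation time

print("C50 =", round(clarity(ir, fs, 50.0), 1), "dB")
print("C80 =", round(clarity(ir, fs, 80.0), 1), "dB")
```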
Abstract:
The temperature in a ferromagnetic nanostripe with a notch subject to Joule heating has been studied in detail. We first performed an experimental real-time calibration of the temperature versus time as a 100 ns current pulse was injected into a Permalloy nanostripe. This calibration was repeated for different pulse amplitudes and stripe dimensions, and the set of experimental curves was fitted with a computer simulation using the Fourier thermal conduction equation. The best fit of these experimental curves was obtained by including the temperature-dependent behavior of the electrical resistivity of the Permalloy and of the thermal conductivity of the substrate (SiO2). Notably, a nonzero interface thermal resistance between the metallic nanostripe and the substrate was also necessary to fit the experimental curves. We found this parameter pivotal for understanding our results and the results from previous works. The higher current density in the notch, together with the interface thermal resistance, allows a considerable increase of the temperature in the notch, creating a large horizontal thermal gradient. This gradient, together with the high temperature in the notch and the larger current density close to the edges of the notch, can be very influential in experiments studying current-assisted domain wall motion.
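For reference, a heat-conduction model of the kind described couples Fourier conduction with a Joule source and a finite interface thermal resistance between stripe and substrate. The notation below is ours, not reproduced from the paper.

```latex
% Fourier heat conduction in the nanostripe with Joule heating,
% temperature-dependent resistivity rho_el(T) and conductivity k(T):
\rho\, c_p \,\frac{\partial T}{\partial t}
  = \nabla \cdot \bigl( k(T)\, \nabla T \bigr) + \rho_{\mathrm{el}}(T)\, j^{2}

% Finite interface thermal resistance R_int between stripe and SiO2 substrate:
% the heat flux across the interface is proportional to the temperature jump.
-\,k(T)\, \frac{\partial T}{\partial n}\Big|_{\mathrm{interface}}
  = \frac{T_{\mathrm{stripe}} - T_{\mathrm{substrate}}}{R_{\mathrm{int}}}
```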
Abstract:
Heparinase I from Flavobacterium heparinum has important uses for elucidating the complex sequence heterogeneity of heparin-like glycosaminoglycans (HLGAGs). Understanding the biological function of HLGAGs has been impaired by the limited methods for analysis of pure or mixed oligosaccharide fragments. Here, we use methodologies involving MS and capillary electrophoresis to investigate the sequence of events during heparinase I depolymerization of HLGAGs. In an initial step, heparinase I preferentially cleaves exolytically at the nonreducing terminal linkage of the HLGAG chain, although it also cleaves internal linkages at a detectable rate. In a second step, heparinase I has a strong preference for cleaving the same substrate molecule processively, i.e., to cleave the next site toward the reducing end of the HLGAG chain. Computer simulation showed that the experimental results presented here from analysis of oligosaccharide degradation were consistent with literature data for degradation of polymeric HLGAG by heparinase I. This study presents direct evidence for a predominantly exolytic and processive mechanism of depolymerization of HLGAG by heparinase I.
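To illustrate what a predominantly exolytic, processive mechanism implies, the following Python sketch is a toy stochastic model; all probabilities and the single-chain simplification are invented for illustration and are not parameters from the study. A fresh attack preferentially targets the nonreducing-end linkage, and after each cut the enzyme tends to cleave the next intact linkage toward the reducing end.

```python
import random

def depolymerize(n_linkages=20, p_exo=0.9, p_proc=0.8, seed=1):
    """Toy exolytic + processive cleavage of one HLGAG chain.

    n_linkages : cleavable linkages, 0 (nonreducing end) .. n-1 (reducing end)
    p_exo      : probability a fresh attack starts at the nonreducing terminal linkage
    p_proc     : probability the enzyme then cleaves the next intact linkage toward
                 the reducing end instead of dissociating
    Returns the order in which linkages were cleaved.
    """
    rng = random.Random(seed)
    intact = list(range(n_linkages))
    order = []
    pos = None                          # current position if the enzyme stays bound
    while intact:
        if pos is None or pos not in intact:
            # fresh attack: mostly exolytic, occasionally at an internal linkage
            pos = intact[0] if rng.random() < p_exo else rng.choice(intact)
        intact.remove(pos)
        order.append(pos)
        nxt = [i for i in intact if i > pos]
        pos = min(nxt) if nxt and rng.random() < p_proc else None
    return order

print(depolymerize())  # mostly ascending order when exolytic/processive behavior dominates
```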
Abstract:
Protein folding occurs on a time scale ranging from milliseconds to minutes for a majority of proteins. Computer simulation of protein folding, from a random configuration to the native structure, is nontrivial owing to the large disparity between the simulation and folding time scales. As an effort to overcome this limitation, simple models with idealized protein subdomains, e.g., the diffusion–collision model of Karplus and Weaver, have gained some popularity. We present here new results for the folding of a four-helix bundle within the framework of the diffusion–collision model. Even with such simplifying assumptions, a direct application of standard Brownian dynamics methods would consume 10,000 processor-years on current supercomputers. We circumvent this difficulty by invoking a special Brownian dynamics simulation. The method features the calculation of the mean passage time of an event from the flux overpopulation method and the sampling of events that lead to productive collisions even if their probability is extremely small (because of large free-energy barriers that separate them from the higher probability events). Using these developments, we demonstrate that a coarse-grained model of the four-helix bundle can be simulated in several days on current supercomputers. Furthermore, such simulations yield folding times that are in the range of time scales observed in experiments.
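At its core, the Brownian dynamics referred to here is overdamped Langevin integration. The Python sketch below shows such an integrator on a one-dimensional double-well free-energy profile; the potential, parameters, and barrier are illustrative assumptions, not the diffusion–collision model itself, and the run simply illustrates why direct simulation of rare barrier crossings is costly.

```python
import numpy as np

def brownian_dynamics(steps, dt, D, kT, x0, force, seed=0):
    """Overdamped dynamics: x_{t+dt} = x_t + (D/kT)*F(x)*dt + sqrt(2*D*dt)*xi."""
    rng = np.random.default_rng(seed)
    x = np.empty(steps + 1)
    x[0] = x0
    for i in range(steps):
        noise = np.sqrt(2.0 * D * dt) * rng.standard_normal()
        x[i + 1] = x[i] + (D / kT) * force(x[i]) * dt + noise
    return x

# Illustrative double-well free energy U(x) = a*(x^2 - 1)^2 with a 5 kT barrier at x = 0.
a, kT, D, dt = 5.0, 1.0, 1.0, 1e-4
force = lambda x: -4.0 * a * x * (x**2 - 1.0)

traj = brownian_dynamics(steps=200_000, dt=dt, D=D, kT=kT, x0=-1.0, force=force)
crossed = np.argmax(traj > 0.0)  # first index past the barrier
print("first barrier crossing at step:", crossed if traj.max() > 0 else "none")
```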
Abstract:
Whole-genome duplication approximately 10^8 years ago was proposed as an explanation for the many duplicated chromosomal regions in Saccharomyces cerevisiae. Here we have used computer simulations and analytic methods to estimate some parameters describing the evolution of the yeast genome after this duplication event. Computer simulation of a model in which 8% of the original genes were retained in duplicate after genome duplication, and 70–100 reciprocal translocations occurred between chromosomes, produced arrangements of duplicated chromosomal regions very similar to the map of real duplications in yeast. An analytical method produced an independent estimate of 84 map disruptions. These results imply that many smaller duplicated chromosomal regions exist in the yeast genome in addition to the 55 originally reported. We also examined the possibility of determining the original order of chromosomal blocks in the ancestral unduplicated genome, but this cannot be done without information from one or more additional species. If the genome sequence of one other species (such as Kluyveromyces lactis) were known it should be possible to identify 150–200 paired regions covering the whole yeast genome and to reconstruct approximately two-thirds of the original order of blocks of genes in yeast. Rates of interchromosome translocation in yeast and mammals appear similar despite their very different rates of homologous recombination per kilobase.
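A cartoon version of this kind of simulation can be written in a few lines of Python: duplicate a gene order, retain a fraction of the genes in duplicate, apply reciprocal translocations, and count the contiguous duplicated blocks that remain. The parameters loosely echo the abstract, but the chromosome sizes, block definition, and all rules below are simplifications of ours, not the published model.

```python
import random
from collections import defaultdict

def simulate(n_chroms=8, genes_per_chrom=700, retain=0.08, n_translocations=84, seed=3):
    """Toy post-whole-genome-duplication genome with random reciprocal translocations."""
    rng = random.Random(seed)
    n_genes = n_chroms * genes_per_chrom
    ancestral = [list(range(c * genes_per_chrom, (c + 1) * genes_per_chrom))
                 for c in range(n_chroms)]
    chroms = [list(c) for c in ancestral] + [list(c) for c in ancestral]  # duplication
    duplicated = set(rng.sample(range(n_genes), int(retain * n_genes)))
    for g in range(n_genes):              # delete one random copy of non-retained genes
        if g not in duplicated:
            c = g // genes_per_chrom
            victim = c if rng.random() < 0.5 else c + n_chroms
            chroms[victim].remove(g)
    for _ in range(n_translocations):     # reciprocal translocations: swap random tails
        a, b = rng.sample(range(len(chroms)), 2)
        i = rng.randrange(1, len(chroms[a]))
        j = rng.randrange(1, len(chroms[b]))
        chroms[a][i:], chroms[b][j:] = chroms[b][j:], chroms[a][i:]
    return chroms, duplicated

def count_blocks(chroms, duplicated):
    """Count maximal runs of >= 2 duplicated genes whose partner copies are also
    adjacent (among duplicated genes) on one other chromosome."""
    occ = defaultdict(list)               # gene -> [(chromosome, rank among duplicated)]
    for ci, chrom in enumerate(chroms):
        rank = 0
        for g in chrom:
            if g in duplicated:
                occ[g].append((ci, rank))
                rank += 1
    def partner(g, here):
        locs = occ[g]
        return locs[1] if locs[0] == here else locs[0]
    blocks, run = 0, 1
    for ci, chrom in enumerate(chroms):
        dups = [g for g in chrom if g in duplicated]
        for k in range(1, len(dups)):
            p_prev = partner(dups[k - 1], (ci, k - 1))
            p_cur = partner(dups[k], (ci, k))
            if p_prev[0] == p_cur[0] and abs(p_prev[1] - p_cur[1]) == 1:
                run += 1
            else:
                blocks += run >= 2
                run = 1
        blocks += run >= 2
        run = 1
    return blocks // 2                    # each block is seen once from each copy

chroms, dup = simulate()
print("duplicated blocks detected:", count_blocks(chroms, dup))
```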
Abstract:
In the maximum parsimony (MP) and minimum evolution (ME) methods of phylogenetic inference, evolutionary trees are constructed by searching for the topology that shows the minimum number of mutational changes required (M) and the smallest sum of branch lengths (S), respectively, whereas in the maximum likelihood (ML) method the topology showing the highest maximum likelihood (A) of observing a given data set is chosen. However, the theoretical basis of the optimization principle remains unclear. We therefore examined the relationships of M, S, and A for the MP, ME, and ML trees with those for the true tree by using computer simulation. The results show that M and S are generally greater for the true tree than for the MP and ME trees when the number of nucleotides examined (n) is relatively small, whereas A is generally lower for the true tree than for the ML tree. This finding indicates that the optimization principle tends to give incorrect topologies when n is small. To deal with this disturbing property of the optimization principle, we suggest that more attention should be given to testing the statistical reliability of an estimated tree rather than to finding the optimal tree with excessive efforts. When a reliability test is conducted, simplified MP, ME, and ML algorithms such as the neighbor-joining method generally give conclusions about phylogenetic inference very similar to those obtained by the more extensive tree search algorithms.
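For concreteness, the parsimony score M referred to above can be computed for a fixed topology with Fitch's algorithm. The Python sketch below does this for a four-taxon tree; the toy sequences and topologies are invented for the example.

```python
def fitch_score(tree, seqs):
    """Fitch parsimony: minimum number of substitutions required on a fixed topology.

    tree : nested tuples of leaf names, e.g. (("A", "B"), ("C", "D"))
    seqs : dict mapping leaf name -> aligned nucleotide sequence
    """
    length = len(next(iter(seqs.values())))
    total = 0
    for site in range(length):
        def state_set(node):
            nonlocal total
            if isinstance(node, str):                    # leaf: its observed base
                return {seqs[node][site]}
            left, right = (state_set(child) for child in node)
            common = left & right
            if common:
                return common
            total += 1                                   # a substitution is required here
            return left | right
        state_set(tree)
    return total

seqs = {"A": "ACGTTT", "B": "ACGTTA", "C": "TCGATT", "D": "TCGATA"}
print(fitch_score((("A", "B"), ("C", "D")), seqs))  # 4 substitutions for this grouping
print(fitch_score((("A", "C"), ("B", "D")), seqs))  # 5 for the alternative grouping
```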
Abstract:
The proneural genes encode basic-helix–loop–helix (bHLH) proteins and promote the formation of distinct types of sensory organs. In Drosophila, two sets of proneural genes, atonal (ato) and members of the achaete–scute complex (ASC), are required for the formation of chordotonal (ch) organs and external sensory (es) organs, respectively. We assayed the production of sensory organs in transgenic flies expressing chimeric genes of ato and scute (sc), a member of ASC, and found that the information that specifies ch organs resides in the bHLH domain of ato; chimeras containing the b domain of ato and the HLH domain of sc also induced ch organ formation, but to a lesser extent than those containing the bHLH domain of ato. The b domains of ato and sc differ in seven residues. Mutations of these seven residues in the b domain of ato suggest that most or perhaps all of these residues are required for induction of ch organs. None of these seven residues is predicted to contact DNA directly by computer simulation using the structure of the myogenic factor MyoD as a model, implying that interaction of ato with other cofactors is likely to be involved in neuronal type specification.
Abstract:
A minimal hypothesis is proposed concerning the brain processes underlying effortful tasks. It distinguishes two main computational spaces: a unique global workspace composed of distributed and heavily interconnected neurons with long-range axons, and a set of specialized and modular perceptual, motor, memory, evaluative, and attentional processors. Workspace neurons are mobilized in effortful tasks for which the specialized processors do not suffice. They selectively mobilize or suppress, through descending connections, the contribution of specific processor neurons. In the course of task performance, workspace neurons become spontaneously coactivated, forming discrete though variable spatio-temporal patterns subject to modulation by vigilance signals and to selection by reward signals. A computer simulation of the Stroop task shows workspace activation to increase during acquisition of a novel task, effortful execution, and after errors. We outline predictions for spatio-temporal activation patterns during brain imaging, particularly about the contribution of dorsolateral prefrontal cortex and anterior cingulate to the workspace.
Abstract:
A transition as a function of increasing temperature from harmonic to anharmonic dynamics has been observed in globular proteins by using spectroscopic, scattering, and computer simulation techniques. We present here results of a dynamic neutron scattering analysis of the solvent dependence of the picosecond-time scale dynamic transition behavior of solutions of a simple single-subunit enzyme, xylanase. The protein is examined in powder form, in D2O, and in four two-component perdeuterated single-phase cryosolvents in which it is active and stable. The scattering profiles of the mixed solvent systems in the absence of protein are also determined. The general features of the dynamic transition behavior of the protein solutions follow those of the solvents. The dynamic transition in all of the mixed cryosolvent–protein systems is much more gradual than in pure D2O, consistent with a distribution of energy barriers. The differences between the dynamic behaviors of the various cryosolvent protein solutions themselves are remarkably small. The results are consistent with a picture in which the picosecond-time scale atomic dynamics respond strongly to melting of pure water solvent but are relatively invariant in cryosolvents of differing compositions and melting points.
Abstract:
When individual amoebae of the cellular slime mold Dictyostelium discoideum are starving, they aggregate to form a multicellular migrating slug, which moves toward a region suitable for culmination. The culmination of the morphogenesis involves complex cell movements that transform a mound of cells into a globule of spores on a slender stalk. The movement has been likened to a “reverse fountain,” whereby prestalk cells in the upper part form a stalk that moves downwards and anchors to the substratum, while prespore cells in the lower part move upwards to form the spore head. So far, however, no satisfactory explanation has been produced for this process. Using a computer simulation that we developed, we now demonstrate that the processes that are essential during the earlier stages of the morphogenesis are in fact sufficient to produce the dynamics of the culmination stage. These processes are cAMP signaling, differential adhesion, cell differentiation, and production of extracellular matrix. Our model clarifies the processes that generate the observed cell movements. More specifically, we show that periodic upward movements, caused by chemotactic motion, are essential for successful culmination, because the pressure waves they induce squeeze the stalk downwards through the cell mass. The mechanisms revealed by our model have a number of self-organizing and self-correcting properties and can account for many previously unconnected and unexplained experimental observations.
Abstract:
Transformed-rule up and down psychophysical methods have gained great popularity, mainly because they combine criterion-free responses with an adaptive procedure allowing rapid determination of an average stimulus threshold at various criterion levels of correct responses. The statistical theory underlying the methods now in routine use is based on sets of consecutive responses with assumed constant probabilities of occurrence. The response rules requiring consecutive responses prevent the possibility of using the most desirable response criterion, that of 75% correct responses. The earliest transformed-rule up and down method, whose rules included nonconsecutive responses, did not contain this limitation but failed to become generally accepted, lacking a published theoretical foundation. Such a foundation is provided in this article and is validated empirically with the help of experiments on human subjects and a computer simulation. In addition to allowing the criterion of 75% correct responses, the method is more efficient than the methods excluding nonconsecutive responses in their rules.
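To make the procedure concrete, the Python sketch below runs a conventional two-down/one-up transformed staircase, which converges on the 70.7%-correct point rather than the 75% criterion discussed above, against a simulated listener. The psychometric function, step size, and stopping rule are invented for illustration.

```python
import numpy as np

def psychometric(level, threshold=50.0, slope=0.15, chance=0.5):
    """Probability of a correct response in a 2AFC task at a given stimulus level."""
    return chance + (1.0 - chance) / (1.0 + np.exp(-slope * (level - threshold)))

def two_down_one_up(start=70.0, step=2.0, n_trials=200, seed=4):
    """Two-down/one-up staircase: lower the level after two consecutive correct
    responses, raise it after any incorrect response; estimate the threshold from
    the last reversal levels."""
    rng = np.random.default_rng(seed)
    level, correct_in_row, reversals, last_dir = start, 0, [], 0
    for _ in range(n_trials):
        correct = rng.random() < psychometric(level)
        if correct:
            correct_in_row += 1
            if correct_in_row < 2:
                continue                      # no level change yet
            correct_in_row, direction = 0, -1  # two in a row -> go down
        else:
            correct_in_row, direction = 0, +1  # any miss -> go up
        if last_dir and direction != last_dir:
            reversals.append(level)
        last_dir = direction
        level += direction * step
    return np.mean(reversals[-8:])

print("estimated 70.7%-correct level:", round(two_down_one_up(), 1))
```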
Abstract:
The aggregation stage of the life cycle of Dictyostelium discoideum is governed by the chemotactic response of individual amoebae to excitable waves of cAMP. We modeled this process through a recently introduced hybrid automata-continuum scheme and used computer simulation to unravel the role of specific components of this complex developmental process. Our results indicated an essential role for positive feedback between the cAMP signaling and the expression of the genes encoding the signal transduction and response machinery.
Abstract:
As additivity is a very useful property for a distance measure, a general additive distance is proposed under the stationary time-reversible (SR) model of nucleotide substitution or, more generally, under the stationary, time-reversible, and rate variable (SRV) model, which allows rate variation among nucleotide sites. A method for estimating the mean distance and the sampling variance is developed. In addition, a method is developed for estimating the variance-covariance matrix of distances, which is useful for the statistical test of phylogenies and molecular clocks. Computer simulation shows (i) if the sequences are longer than, say, 1000 bp, the SR method is preferable to simpler methods; (ii) the SR method is robust against deviations from time-reversibility; (iii) when the rate varies among sites, the SRV method is much better than the SR method because the distance is seriously underestimated by the SR method; and (iv) our method for estimating the sampling variance is accurate for sequences longer than 500 bp. Finally, a test is constructed for testing whether DNA evolution follows a general Markovian model.
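As a pointer to the kind of quantity involved, one widely used general additive distance under a stationary time-reversible model can be computed from the joint divergence matrix F and the base frequencies Pi as d = -tr(Pi log(Pi^-1 F)). The Python sketch below evaluates this formula on made-up counts; it is a generic formula, not necessarily the exact estimator or variance method developed in the paper.

```python
import numpy as np
from scipy.linalg import logm

def sr_distance(counts):
    """General additive distance under a stationary time-reversible model:
    d = -trace(Pi @ logm(Pi^-1 @ F)), with F the symmetrized joint frequency
    matrix of two aligned sequences and Pi = diag of base frequencies."""
    F = np.asarray(counts, dtype=float)
    F = (F + F.T) / 2.0                    # symmetry implied by reversibility
    F /= F.sum()                           # counts -> joint frequencies
    pi = F.sum(axis=1)                     # marginal base frequencies
    P = np.linalg.solve(np.diag(pi), F)    # P = Pi^-1 F, the transition matrix
    return float(-np.trace(np.diag(pi) @ logm(P)).real)

# Made-up 4x4 count matrix (rows/cols in order A, C, G, T) for a pair of aligned
# sequences: diagonal = identical sites, off-diagonal = observed differences.
counts = np.array([
    [240,   6,  18,   5],
    [  6, 250,   4,  20],
    [ 18,   4, 235,   7],
    [  5,  20,   7, 255],
])
print("estimated substitutions per site:", round(sr_distance(counts), 4))
```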
Abstract:
Phosphoramide mustard-induced DNA interstrand cross-links were studied both in vitro and by computer simulation. The local determinants for the formation of phosphoramide mustard-induced DNA interstrand cross-links were defined by using different pairs of synthetic oligonucleotide duplexes, each of which contained a single potentially cross-linkable site. Phosphoramide mustard was found to cross-link dG to dG at a 5'-d(GAC)-3' sequence. The structural basis for the formation of this 1,3 cross-link was studied by molecular dynamics and quantum chemistry. Molecular dynamics indicated that the geometrical proximity of the binding sites also favored a 1,3 dG-to-dG linkage over a 1,2 dG-to-dG linkage in a 5'-d(GCC)-3' sequence. While the enthalpies of 1,2 and 1,3 mustard cross-linked DNA were found to be very close, a 1,3 structure was more flexible and may therefore be in a considerably higher entropic state.
Abstract:
Introduction: Most interventions to promote leisure-time physical activity in populations have shown small or null effect sizes, or inconsistent results. Approaching the problem from a systems perspective may be one way to overcome this gap. Objective: To develop an agent-based model to investigate how population patterns of leisure-time physical activity in adults emerge and evolve from the interaction between individuals' psychological attributes and attributes of the built and social environments in which they live. Methods: The modeling process comprised three stages: development of a conceptual map, based on a literature review and consultation with experts; creation and verification of the model algorithm; and parameterization, consistency analysis, and sensitivity analysis. The results of the literature review were consolidated and reported according to the search domains (psychological aspects, social environment, and built environment). The quantitative results of the expert consultation were described as frequencies, and the content of the open-ended responses was analyzed and compiled by the author of this thesis. The model algorithm was implemented in NetLogo, version 5.2.1, following a verification protocol to ensure that the algorithm was implemented accurately. The consistency and sensitivity analyses used the Vargha-Delaney A test, partial rank correlation coefficients, boxplots, and line and scatter plots. Results: The elements of the conceptual map were defined as the person's intention, the behavior of close contacts and of the community, and the perceived quality of, access to, and activities available at the places where leisure-time physical activity can be practiced. The model represents a hypothetical community containing two types of agents: people and places where leisure-time physical activity can be practiced. People interact with one another and with the built environment, generating population-level time trends in leisure-time physical activity and intention. The sensitivity analyses indicated that these time trends are highly sensitive to the influence of a person's current behavior on their future intention, to the size of the person's perception radius, and to the proportion of places where leisure-time physical activity can be practiced. Final considerations: The conceptual map and the agent-based model proved adequate for investigating how population patterns of leisure-time physical activity in adults emerge and evolve. In the model, the influence of a person's behavior on their intention, the size of the person's perception radius, and the proportion of places where leisure-time physical activity can be practiced are important determinants of those population patterns.
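For readers unfamiliar with the approach, the sketch below is a heavily reduced Python analogue of the kind of model described; the original was implemented in NetLogo, and all rules, parameter values, and names here are illustrative simplifications of ours, not the thesis model. People update their intention from their own behavior and from neighbors within a perception radius, and practice activity when intention is high and a suitable place is perceived nearby.

```python
import random

class Person:
    def __init__(self, x, y, rng):
        self.x, self.y = x, y
        self.intention = rng.random()     # propensity to practice (0..1)
        self.active = False               # practiced leisure-time activity this step?

def step(people, places, rng, radius=10.0, w_self=0.3, w_social=0.2):
    """One model step: perceive neighbors and places, update intention, then act."""
    for p in people:
        near = [q for q in people if q is not p
                and (q.x - p.x) ** 2 + (q.y - p.y) ** 2 <= radius ** 2]
        social = sum(q.active for q in near) / len(near) if near else 0.0
        place_nearby = any((px - p.x) ** 2 + (py - p.y) ** 2 <= radius ** 2
                           for px, py in places)
        # intention drifts toward the person's own behavior and the neighbors' behavior
        p.intention = (w_self * p.active + w_social * social
                       + (1 - w_self - w_social) * p.intention)
        p.active = place_nearby and rng.random() < p.intention

rng = random.Random(5)
people = [Person(rng.uniform(0, 100), rng.uniform(0, 100), rng) for _ in range(300)]
places = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(15)]

for t in range(50):
    step(people, places, rng)
    if t % 10 == 0:
        prevalence = sum(p.active for p in people) / len(people)
        print(f"step {t:2d}: active fraction = {prevalence:.2f}")
```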