983 results for Computer-simulation


Relevance:

60.00%

Publisher:

Abstract:

cAMP-response element binding (CREB) proteins are involved in transcriptional regulation in a number of cellular processes (e.g., neural plasticity and circadian rhythms). The CREB family contains activators and repressors that may interact through positive and negative feedback loops. These loops can be generated by auto- and cross-regulation of expression of CREB proteins, via CRE elements in or near their genes. Experiments suggest that such feedback loops may operate in several systems (e.g., Aplysia and rat). To understand the functional implications of such feedback loops, which are interlocked via cross-regulation of transcription, a minimal model with a positive and negative loop was developed and investigated using bifurcation analysis. Bifurcation analysis revealed diverse nonlinear dynamics (e.g., bistability and oscillations). The stability of steady states or oscillations could be changed by time delays in the synthesis of the activator (CREB1) or the repressor (CREB2). Investigation of stochastic fluctuations due to small numbers of molecules of CREB1 and CREB2 revealed a bimodal distribution of CREB molecules in the bistability region. The robustness of the stable HIGH and LOW states of CREB expression to stochastic noise differs, and a critical number of molecules was required to sustain the HIGH state for days or longer. Increasing positive feedback or decreasing negative feedback also increased the lifetime of the HIGH state, and persistence of this state may correlate with long-term memory formation. A critical number of molecules was also required to sustain robust oscillations of CREB expression. If a steady state was near a deterministic Hopf bifurcation point, stochastic resonance could induce oscillations. 
This comparative analysis of deterministic and stochastic dynamics not only provides insights into the possible dynamics of CREB regulatory motifs, but also demonstrates a framework for understanding other regulatory processes with similar network architecture.
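
The bistability described above can be illustrated with a deliberately minimal one-variable sketch (not the two-protein CREB1/CREB2 model analyzed in the study): a single activator with Hill-type positive autoregulation, with all parameter values hypothetical.

```python
def simulate(x0, t_end=50.0, dt=0.01, k0=0.05, k1=1.0, K=0.5, n=4, d=1.0):
    """Euler integration of dx/dt = k0 + k1*x^n/(K^n + x^n) - d*x,
    a one-variable positive-feedback loop (hypothetical parameters)."""
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * (k0 + k1 * x**n / (K**n + x**n) - d * x)
    return x

low = simulate(0.0)    # settles near the LOW expression state (~0.05)
high = simulate(1.5)   # settles near the HIGH expression state (~0.99)
```

Two initial conditions settle on two distinct stable steady states, the deterministic analogue of the bimodal distribution of CREB molecules seen in the bistability region under stochastic fluctuations.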

Relevance:

60.00%

Publisher:

Abstract:

Variable number of tandem repeats (VNTR) are genetic loci at which short sequence motifs are found repeated different numbers of times among chromosomes. To explore the potential utility of VNTR loci in evolutionary studies, I have conducted a series of studies to address the following questions: (1) What are the population genetic properties of these loci? (2) What are the mutational mechanisms of repeat number change at these loci? (3) Can DNA profiles be used to measure the relatedness between a pair of individuals? (4) Can DNA fingerprints be used to measure the relatedness between populations in evolutionary studies? (5) Can microsatellite and short tandem repeat (STR) loci, which mutate in a stepwise fashion, be used in evolutionary analyses?

A large number of VNTR loci typed in many populations were studied by means of recently developed statistical methods. The results of this work indicate that there is no significant departure from Hardy-Weinberg expectation (HWE) at VNTR loci in most of the human populations examined, and that the departures from HWE at some VNTR loci are not solely caused by the presence of population substructure. A statistical procedure is developed to investigate the mutational mechanisms of VNTR loci by studying their allele frequency distributions. Comparisons of frequency distribution data on several hundred VNTR loci with the predictions of two mutation models demonstrated that there are differences among VNTR loci grouped by repeat unit size. By extending the ITO method, I derived the distribution of the number of shared bands between individuals with any kinship relationship. A maximum likelihood estimation procedure is proposed to estimate the relatedness between individuals from the observed number of shared bands between them.

It was believed that classical measures of genetic distance are not applicable to the analysis of DNA fingerprints, which reveal many minisatellite loci simultaneously in the genome, because the information regarding the underlying alleles and loci is not available. I propose a new measure of genetic distance based on band sharing between individuals that is applicable to DNA fingerprint data. To address the concern that microsatellite and STR loci may not be useful for evolutionary studies because of the convergent nature of their mutation mechanisms, I show, by a theoretical study as well as by computer simulation, that the possible bias caused by convergent mutations can be corrected, and I suggest a novel measure of genetic distance that makes this correction. In summary, I conclude that hypervariable VNTR loci are useful in evolutionary studies of closely related populations or species, especially in the study of human evolution and the history of geographic dispersal of Homo sapiens. (Abstract shortened by UMI.)
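
The band-sharing quantity underlying the relatedness estimator can be sketched as follows. This is the standard similarity index S = 2·n_AB/(n_A + n_B), not the thesis's full maximum likelihood procedure, and the band labels are hypothetical.

```python
def band_sharing(bands_a, bands_b):
    """Band-sharing index S = 2*n_ab / (n_a + n_b), where n_ab is the
    number of bands two DNA fingerprints have in common."""
    a, b = set(bands_a), set(bands_b)
    return 2.0 * len(a & b) / (len(a) + len(b))

# Hypothetical fingerprints given as sets of scored band positions:
s = band_sharing([1, 2, 3, 4], [3, 4, 5, 6])  # 2 shared of 4 + 4 -> 0.5
```

A distance suitable for fingerprint data can then be built as a decreasing function of S between individuals or populations, which is the direction the thesis's new measure takes.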

Relevance:

60.00%

Publisher:

Abstract:

Models of DNA sequence evolution and methods for estimating evolutionary distances are needed for studying the rate and pattern of molecular evolution and for inferring the evolutionary relationships of organisms or genes. In this dissertation, several new models and methods are developed.

Rate variation among nucleotide sites: To obtain unbiased estimates of evolutionary distances, the rate heterogeneity among nucleotide sites of a gene should be considered. Commonly, it is assumed that the substitution rate varies among sites according to a gamma distribution (gamma model) or, more generally, an invariant+gamma model that also includes some invariable sites. A maximum likelihood (ML) approach was developed for estimating the shape parameter of the gamma distribution (α) and/or the proportion of invariable sites (θ). Computer simulation showed that (1) under the gamma model, α can be well estimated from 3 or 4 sequences if the sequences are long; and (2) the distance estimate is unbiased and robust against violations of the assumptions of the invariant+gamma model. However, this ML method requires a huge amount of computational time and is practical only for fewer than 6 sequences. Therefore, I developed a fast method for estimating α that is easy to implement and requires no knowledge of the tree. A computer program was developed for estimating α and evolutionary distances that can handle as many as 30 sequences.

Evolutionary distances under the stationary, time-reversible (SR) model: The SR model is a general model of nucleotide substitution, which assumes (i) stationary nucleotide frequencies and (ii) time reversibility. It can be extended to the SRV model, which allows rate variation among sites. I developed a method for estimating the distance under the SR or SRV model, as well as the variance-covariance matrix of the distances. Computer simulation showed that the SR method is better than a simpler method when the sequence length L > 1,000 bp and is robust against deviations from time reversibility. As expected, when the rate varies among sites, the SRV method is much better than the SR method.

Evolutionary distances under nonstationary nucleotide frequencies: The statistical properties of the paralinear and LogDet distances under nonstationary nucleotide frequencies were studied. First, I developed formulas for correcting the estimation biases of the paralinear and LogDet distances; the performance of these formulas and of the formulas for sampling variances was examined by computer simulation. Second, I developed a method for estimating the variance-covariance matrix of the paralinear distance, so that statistical tests of phylogenies can be conducted when the nucleotide frequencies are nonstationary. Third, a new method for testing the molecular clock hypothesis was developed for the nonstationary case.
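
As a concrete example of how an estimated gamma shape parameter α enters a distance calculation, here is the standard Jukes-Cantor distance with gamma-distributed rates among sites (the Jin-Nei gamma distance); the dissertation's SR/SRV estimators are more general and are not reproduced here.

```python
import math

def jc_gamma_distance(p, alpha):
    """Jukes-Cantor distance with gamma rate variation among sites:
    d = (3*alpha/4) * ((1 - 4p/3)**(-1/alpha) - 1),
    where p is the observed proportion of differing sites and
    alpha is the gamma shape parameter."""
    return 0.75 * alpha * ((1.0 - 4.0 * p / 3.0) ** (-1.0 / alpha) - 1.0)

# As alpha -> infinity this approaches the plain JC distance
# d = -(3/4) * ln(1 - 4p/3); smaller alpha (stronger rate
# heterogeneity) gives a larger corrected distance.
d_gamma = jc_gamma_distance(0.1, 1.0)
d_jc = -0.75 * math.log(1.0 - 4.0 * 0.1 / 3.0)
```

Ignoring rate heterogeneity (using d_jc when α is actually small) underestimates the distance, which is the bias the estimators above are designed to avoid.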

Relevance:

60.00%

Publisher:

Abstract:

Renal replacement therapy by hemodialysis requires a permanent vascular access. Implantable ports offer a potential alternative to standard vascular access strategies, although their development is limited both in number and extent. We explored the fluid dynamics within two new percutaneous bone-anchored dialysis port prototypes, both by in vitro experiments and by computer simulation. The new port is to be fixed to bone and allows the connection of a dialysis machine to a central venous catheter via a built-in valve. We found that the pressure drop induced by the two ports was between 20 and 50 mmHg at 500 ml/min, which is comparable with commercial catheter connectors (15–80 mmHg). We observed the formation of vortices in both geometries, and a shear rate in the physiological range (<10,000 s⁻¹), which is lower than the maximal shear rates reported in commercial catheters (up to 13,000 s⁻¹). A difference in surface shear rate of 15% between the two ports was observed.
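
For a rough sense of scale of these pressure-drop and shear figures, a Hagen-Poiseuille estimate for a straight circular lumen can be sketched. The lumen radius, length, and blood viscosity below are assumed values; the real port geometry with its built-in valve requires CFD, as in the study.

```python
import math

MU_BLOOD = 3.5e-3    # Pa*s, assumed blood viscosity
Q = 500e-6 / 60.0    # 500 ml/min converted to m^3/s

def poiseuille(radius_m, length_m, q=Q, mu=MU_BLOOD):
    """Laminar (Hagen-Poiseuille) pressure drop and wall shear rate for
    a straight circular lumen -- a first-order sanity check only."""
    dp_pa = 8.0 * mu * length_m * q / (math.pi * radius_m**4)
    shear = 4.0 * q / (math.pi * radius_m**3)   # wall shear rate, 1/s
    return dp_pa / 133.322, shear               # mmHg, s^-1

# Hypothetical 1.3 mm radius, 20 cm long lumen:
dp_mmhg, shear = poiseuille(radius_m=1.3e-3, length_m=0.2)
```

With these assumed dimensions the estimate lands in the same 20–50 mmHg band reported for the ports, and the wall shear rate stays below the 10,000 s⁻¹ physiological ceiling quoted above.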

Relevance:

60.00%

Publisher:

Abstract:

BACKGROUND Residual acetabular dysplasia is seen in combination with femoral pathomorphologies including an aspherical femoral head and valgus neck-shaft angle with high antetorsion. It is unclear how these femoral pathomorphologies affect range of motion (ROM) and impingement zones after periacetabular osteotomy. QUESTIONS/PURPOSES (1) Does periacetabular osteotomy (PAO) restore the typically excessive ROM in dysplastic hips compared with normal hips; (2) how do impingement locations differ in dysplastic hips before and after PAO compared with normal hips; (3) does a concomitant cam-type morphology adversely affect internal rotation; and (4) does a concomitant varus-derotation intertrochanteric osteotomy (IO) affect external rotation? METHODS Between January 1999 and March 2002, we performed 200 PAOs for dysplasia; of those, 27 hips (14%) met prespecified study inclusion criteria, including availability of a pre- and postoperative CT scan that included the hip and the distal femur. In general, we obtained those scans to evaluate the pre- and postoperative acetabular and femoral morphology, the degree of acetabular reorientation, and healing of the osteotomies. Three-dimensional surface models based on CT scans of 27 hips before and after PAO and 19 normal hips were created. Normal hips were obtained from a population of CT-based computer-assisted THAs using the contralateral hip after exclusion of symptomatic hips or hips with abnormal radiographic anatomy. Using validated and computerized methods, we then determined ROM (flexion/extension, internal- [IR]/external rotation [ER], adduction/abduction) and two motion patterns including the anterior (IR in flexion) and posterior (ER in extension) impingement tests. The computed impingement locations were assigned to anatomical locations of the pelvis and the femur. 
ROM was calculated separately for hips with (n = 13) and without (n = 14) a cam-type morphology and PAOs with (n = 9) and without (n = 18) a concomitant IO. A post hoc power analysis based on the primary research question with an alpha of 0.05 and a beta error of 0.20 revealed a minimal detectable difference of 4.6° of flexion. RESULTS After PAO, flexion, IR, and adduction/abduction did not differ from the nondysplastic control hips with the numbers available (p ranging from 0.061 to 0.867). Extension was decreased (19° ± 15°; range, -18° to 30° versus 28° ± 3°; range, 19°-30°; p = 0.017) and ER in 0° flexion was increased (25° ± 18°; range, -10° to 41° versus 38° ± 7°; range, 17°-41°; p = 0.002). Dysplastic hips had a higher prevalence of extraarticular impingement at the anteroinferior iliac spine compared with normal hips (48% [13 of 27 hips] versus 5% [one of 19 hips], p = 0.002). A PAO increased the prevalence of impingement for the femoral head from 30% (eight of 27 hips) preoperatively to 59% (16 of 27 hips) postoperatively (p = 0.027). IR in flexion was decreased in hips with a cam-type deformity compared with those with a spherical femoral head (p values from 0.002 to 0.047 for 95°-120° of flexion). A concomitant IO led to a normalization of ER in extension (eg, 37° ± 7° [range, 21°-41°] of ER in 0° of flexion in hips with concomitant IO compared with 38° ± 7° [range, 17°-41°] in nondysplastic control hips; p = 0.777). CONCLUSIONS Using computer simulation of hip ROM, we could show that the PAO has the potential to restore the typically excessive ROM in dysplastic hips. However, a PAO can increase the prevalence of secondary intraarticular impingement of the aspherical femoral head and extraarticular impingement of the anteroinferior iliac spines in flexion and internal rotation. A cam-type morphology can result in anterior impingement with restriction of IR. 
Additionally, a valgus hip with high antetorsion can result in posterior impingement with decreased ER in extension, which can be normalized with a varus derotation IO of the femur. However, the indication for an additional IO needs to be weighed against its inherent morbidity and possible complications. The results are based on a limited number of hips with a pre- and postoperative CT scan after PAO. Future prospective studies are needed to verify the current results based on computer simulation and to test their clinical importance.

Relevance:

60.00%

Publisher:

Abstract:

Conservative procedures in low-dose risk assessment are used to set safety standards for known or suspected carcinogens. However, the assumptions upon which these methods are based and their effects are not well understood. To minimize the number of false negatives and to reduce the cost of bioassays, animals are given very high doses of potential carcinogens, and the results must then be extrapolated to much smaller doses to set safety standards for risks such as one per million. There are a number of competing methods that add a conservative safety factor to these calculations.

A method of quantifying the conservatism of these methods was described and tested on eight procedures used in setting low-dose safety standards. The results of these procedures were compared by computer simulation and by the use of data from a large-scale animal study. The method consists of determining a "true safe dose" (tsd) according to an assumed underlying model: if Y, the probability of cancer, equals P(d), a known mathematical function of the dose, then by setting Y to some predetermined acceptable risk one can solve for d, the model's "true safe dose". Simulations were generated, assuming a binomial distribution, for an artificial bioassay. The eight procedures were then used to determine a "virtual safe dose" (vsd) that estimates the tsd, assuming a risk of one per million. The ratio R = (tsd - vsd)/vsd was calculated for each "experiment" (simulation). The mean R of 500 simulations and the probability that R < 0 were used to measure the over- and under-conservatism of each procedure.

The eight procedures were Weil's method, Hoel's method, the Mantel-Bryan method, the improved Mantel-Bryan method, Gross's method, fitting a one-hit model, Crump's procedure, and applying Rai and Van Ryzin's method to a Weibull model. None of the procedures performed uniformly well for all types of dose-response curves. When the data were linear, the one-hit model, Hoel's method, or the Gross-Mantel method worked reasonably well; however, when the data were nonlinear, these same methods were overly conservative. Crump's procedure and the Weibull model performed better in these situations.
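
The R = (tsd - vsd)/vsd comparison can be sketched for the one-hit model alone. This is an illustrative single-dose bioassay, not any of the eight published procedures, and the dose-response slope, dose, and sample size are hypothetical.

```python
import math
import random

def one_hit_safe_dose(lam, risk=1e-6):
    """Dose d at which the one-hit model P(d) = 1 - exp(-lam*d)
    equals the target risk."""
    return -math.log(1.0 - risk) / lam

def simulate_R(true_lam=0.5, dose=2.0, n_animals=100, seed=1):
    """One simulated single-dose bioassay: estimate lam from binomial
    responses at a high dose, then compare the virtual safe dose (vsd)
    with the true safe dose (tsd) via R = (tsd - vsd) / vsd."""
    rng = random.Random(seed)
    p_true = 1.0 - math.exp(-true_lam * dose)
    responders = sum(rng.random() < p_true for _ in range(n_animals))
    lam_hat = -math.log(1.0 - responders / n_animals) / dose
    tsd = one_hit_safe_dose(true_lam)
    vsd = one_hit_safe_dose(lam_hat)
    return (tsd - vsd) / vsd

R = simulate_R()
```

Averaging R over many seeded replicates, as the study did over 500 simulations, measures how conservative a procedure is; here the fitted and true models coincide, so R fluctuates around zero, whereas a conservative procedure would push vsd well below tsd.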

Relevance:

60.00%

Publisher:

Abstract:

The purpose of this research is to develop a new statistical method to determine the minimum set of rows (R) in an R x C contingency table of discrete data that explains the dependence of observations. The statistical power of the method will be determined empirically by computer simulation to judge its efficiency over presently existing methods. The method will be applied to data on DNA fragment length variation at six VNTR loci in over 72 populations from five major racial groups of humans (total sample size over 15,000 individuals, each sample having at least 50 individuals). DNA fragment lengths grouped in bins will form the basis for studying inter-population DNA variation within the racial groups. If the inter-population differences prove significant, the results will provide a rigorous re-binning procedure for forensic computation of DNA profile frequencies that takes into account intra-racial DNA variation among populations.
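
The idea of isolating the rows that drive the dependence can be sketched with a plain Pearson chi-square statistic and a one-row-at-a-time screen. The greedy screen is only an illustration, not the thesis's method, and the table below is made up.

```python
def chi_square(table):
    """Pearson chi-square statistic for an R x C table of counts."""
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    n = sum(row_tot)
    return sum((obs - row_tot[i] * col_tot[j] / n) ** 2
               / (row_tot[i] * col_tot[j] / n)
               for i, row in enumerate(table)
               for j, obs in enumerate(row))

def most_influential_row(table):
    """Index of the single row whose removal most reduces the statistic."""
    base = chi_square(table)
    return max((base - chi_square(table[:i] + table[i + 1:]), i)
               for i in range(len(table)))[1]

# Rows 0 and 1 are homogeneous; row 2 carries the dependence:
idx = most_influential_row([[10, 10], [10, 10], [30, 5]])
```

Repeating the screen until the remaining rows show no significant chi-square would yield a candidate minimal explanatory set; the thesis develops a rigorous version of this and evaluates its power by simulation.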

Relevance:

60.00%

Publisher:

Abstract:

Tiled projector displays are a common choice for training simulators, where a high-resolution output image is required. They are cheap for the resolution they can reach and can be configured in many different ways. Nevertheless, such displays require geometric and color correction so that the composite image looks seamless. Display correction is an even bigger challenge when the projected images include dark scenes combined with brighter scenes. This is usually a problem for railway simulators when the train is positioned inside a tunnel and the black-offset effect becomes noticeable. In this paper, a method for fast photometric and geometric correction of tiled display systems where dark and bright scenes are combined is presented. The image correction is carried out in two steps. First, geometric alignment and attenuation of the overlapping areas are applied for brighter scenes. Second, when inside a tunnel, the brightness of the scene is increased in certain areas using light sources in order to create the impression of darkness while minimizing the effect of the black offset.
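
The overlap-attenuation step can be sketched as a per-pixel ramp applied in linear light. The ramp shape and the gamma value are illustrative assumptions, not the paper's calibrated correction.

```python
def blend_weight(x, overlap_start, overlap_end, gamma=2.2):
    """Attenuation for a pixel column x in the left projector's overlap
    band: a linear 1 -> 0 ramp defined in linear light, raised to
    1/gamma before being written to the gamma-encoded framebuffer so
    that the two projectors' contributions sum to one on screen."""
    if x <= overlap_start:
        return 1.0
    if x >= overlap_end:
        return 0.0
    t = (overlap_end - x) / (overlap_end - overlap_start)
    return t ** (1.0 / gamma)

w = blend_weight(150, 100, 200)  # mid-overlap weight, 0.5 ** (1/2.2)
```

The right projector uses the mirrored ramp (1 - t) ** (1/gamma); in linear light the two weights then add to t + (1 - t) = 1 across the band. Note that this attenuation cannot cancel the black offset itself, since a projector's black level is a floor that cannot be reduced by scaling the image, which is why the paper instead raises nearby scene brightness inside tunnels.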

Relevance:

60.00%

Publisher:

Abstract:

The urban microclimate plays an important role in the energy consumption of buildings and in comfort sensation in outdoor spaces. The urgent need to increase energy efficiency, reduce pollutant emissions, and mitigate the evident lack of sustainability affecting cities has focused attention on bioclimatic urbanism as a reference for changing the way the city is designed and lived in. Until now, research on microclimate and energy efficiency has concentrated mainly on guiding the design of new developments. However, the main sustainability problems of existing conurbations are the result of the speculative, highly resource-depleting growth model that characterized the real estate boom of recent decades. In Spain, as in other European countries, there is therefore a need to redirect the construction sector towards the refurbishment of the built environment, as a more sustainable alternative for the real estate market. In this effort to improve the quality of today's cities, public space plays a fundamental role, above all as a place for meeting and socializing among citizens. Thermal sensation conditions the perception of an environment, so the microclimate can be decisive for the success or failure of an urban space. The main objective of this research is therefore the definition of strategies for the bioclimatic design of built urban environments, based on morpho-typological and climatic components and on citizens' comfort requirements. As a further element of novelty, the study addresses the rehabilitation of neighborhoods built in the middle of the twentieth century, which in many cases constitute pockets of decay in the sprawling periphery of modern cities.

The research methodology is based on the evaluation of the climatic and thermal comfort conditions of different project scenarios, applied to three case studies located in a suburban neighborhood of Madrid. The climatic parameters were obtained through computer simulation, based on the principles of fluid dynamics, thermodynamics, and radiative exchange in the built environment. Simulation programs make it possible to predict the microclimatic conditions of the current situation and the effects of proposed measures; the great advantage of such computational tools is that different project scenarios can be evaluated and the one ensuring the best environmental performance chosen. The results obtained in the different scenarios were compared with the comfort values of the current state, using the UTCI index as the indicator of thermal sensation. This comparative analysis led to a summary table evaluating the different rehabilitation solutions. It was thus possible to show that no single constructive solution is effective for all applications; rather, each situation must be studied individually, applying the most appropriate measures case by case. Although computer simulation systems can provide important support in the design phase, it remains the designer's responsibility to employ the most suitable tools in each phase and to choose the most appropriate solutions to meet the project's objectives.

Relevance:

60.00%

Publisher:

Abstract:

The extraordinary growth of new information technologies, the development of the Internet, electronic commerce, e-government, mobile telephony, and future cloud computing and storage has brought great benefits to all areas of society. Alongside these come new challenges for the protection of information, such as the loss of confidentiality and integrity of electronic documents. Cryptography plays a key role by providing the tools necessary to ensure the security of these new media, and it is imperative to intensify research in this area to meet the growing demand for new, secure cryptographic techniques. The theory of chaotic nonlinear dynamical systems and the theory of cryptography together give rise to chaotic cryptography, which is the field of study of this thesis. The link between cryptography and chaotic systems is still the subject of intense study. The combination of apparently stochastic behavior, sensitivity to initial conditions and parameters, ergodicity, mixing, and the density of periodic points suggests that chaotic orbits resemble random sequences. This fact, together with the ability to synchronize multiple chaotic systems, first described by Pecora and Carroll, has generated an avalanche of research papers relating cryptography and chaos. Chaotic cryptography addresses two fundamental design paradigms. In the first, chaotic cryptosystems are designed in continuous time, mainly based on chaotic synchronization techniques, and are implemented with analog circuits or by computer simulation. In the second, chaotic cryptosystems are constructed in discrete time and generally do not depend on chaos synchronization techniques. The contributions of this thesis involve three aspects of chaotic cryptography. The first is a theoretical analysis of the geometric properties of some of the chaotic attractors most employed in the design of chaotic cryptosystems.
The second is the cryptanalysis of continuous chaotic cryptosystems, and the thesis concludes with three new designs of cryptographically secure chaotic pseudorandom generators. The main accomplishments contained in this thesis are the following.

Development of a method for determining the parameters of some double-scroll chaotic systems, including the Lorenz system and Chua's circuit. First, some geometrical characteristics of the chaotic systems are used to reduce the search space of the parameters. Next, a scheme based on the synchronization of chaotic systems is built, with the geometric properties employed as the matching criterion to determine the values of the parameters with the desired accuracy. The method is not affected by a moderate amount of noise in the waveform, and it has been applied to find security flaws in continuous chaotic encryption systems. Based on these results, the chaotic ciphers proposed by Wang and Bu and those proposed by Xu and Li are cryptanalyzed. Some solutions to improve these cryptosystems are proposed, although of very limited value because the systems are not suitable for use in cryptography.

Development of a method for determining the parameters of the Lorenz system when it is used in the design of a two-channel cryptosystem. The method uses the geometric properties of the Lorenz system: the search space of the parameters is reduced, and the parameters are then accurately determined from the ciphertext. The method has been applied to the cryptanalysis of an encryption scheme proposed by Jiang. In 2005, Gunay et al. proposed a chaotic encryption system based on a cellular neural network implementation of Chua's circuit; this scheme has also been cryptanalyzed, and some gaps in its security design have been identified.

Based on the theoretical results on digital chaotic systems and the cryptanalysis of several recently proposed chaotic ciphers, a family of pseudorandom generators has been designed using finite precision.
The design is based on the coupling of several piecewise linear chaotic maps. Based on the above results a new family of chaotic pseudorandom generators named Trident has been designed. These generators have been specially designed to meet the needs of real-time encryption of mobile technology. According to the above results, this thesis proposes another family of pseudorandom generators called Trifork. These generators are based on a combination of perturbed Lagged Fibonacci generators. This family of generators is cryptographically secure and suitable for use in real-time encryption. Detailed analysis shows that the proposed pseudorandom generator can provide fast encryption speed and a high level of security, at the same time. El extraordinario auge de las nuevas tecnologías de la información, el desarrollo de Internet, el comercio electrónico, la administración electrónica, la telefonía móvil y la futura computación y almacenamiento en la nube, han proporcionado grandes beneficios en todos los ámbitos de la sociedad. Junto a éstos, se presentan nuevos retos para la protección de la información, como la suplantación de personalidad y la pérdida de la confidencialidad e integridad de los documentos electrónicos. La criptografía juega un papel fundamental aportando las herramientas necesarias para garantizar la seguridad de estos nuevos medios, pero es imperativo intensificar la investigación en este ámbito para dar respuesta a la demanda creciente de nuevas técnicas criptográficas seguras. La teoría de los sistemas dinámicos no lineales junto a la criptografía dan lugar a la ((criptografía caótica)), que es el campo de estudio de esta tesis. El vínculo entre la criptografía y los sistemas caóticos continúa siendo objeto de un intenso estudio. 
La combinación del comportamiento aparentemente estocástico, las propiedades de sensibilidad a las condiciones iniciales y a los parámetros, la ergodicidad, la mezcla, y que los puntos periódicos sean densos asemejan las órbitas caóticas a secuencias aleatorias, lo que supone su potencial utilización en el enmascaramiento de mensajes. Este hecho, junto a la posibilidad de sincronizar varios sistemas caóticos descrita inicialmente en los trabajos de Pecora y Carroll, ha generado una avalancha de trabajos de investigación donde se plantean muchas ideas sobre la forma de realizar sistemas de comunicaciones seguros, relacionando así la criptografía y el caos. La criptografía caótica aborda dos paradigmas de diseño fundamentales. En el primero, los criptosistemas caóticos se diseñan utilizando circuitos analógicos, principalmente basados en las técnicas de sincronización caótica; en el segundo, los criptosistemas caóticos se construyen en circuitos discretos u ordenadores, y generalmente no dependen de las técnicas de sincronización del caos. Nuestra contribución en esta tesis implica tres aspectos sobre el cifrado caótico. En primer lugar, se realiza un análisis teórico de las propiedades geométricas de algunos de los sistemas caóticos más empleados en el diseño de criptosistemas caóticos vii continuos; en segundo lugar, se realiza el criptoanálisis de cifrados caóticos continuos basados en el análisis anterior; y, finalmente, se realizan tres nuevas propuestas de diseño de generadores de secuencias pseudoaleatorias criptográficamente seguros y rápidos. La primera parte de esta memoria realiza un análisis crítico acerca de la seguridad de los criptosistemas caóticos, llegando a la conclusión de que la gran mayoría de los algoritmos de cifrado caóticos continuos —ya sean realizados físicamente o programados numéricamente— tienen serios inconvenientes para proteger la confidencialidad de la información ya que son inseguros e ineficientes. 
Likewise, a large proportion of the proposed discrete chaotic cryptosystems are considered insecure, and others have not yet been attacked, so further cryptanalysis work is deemed necessary. This part concludes by pointing out the main weaknesses found in the analyzed cryptosystems, together with some recommendations for their improvement. In the second part, a cryptanalysis method is designed that allows the identification of the parameters, which in general form part of the key, of encryption algorithms based on Lorenz-like chaotic systems that use drive-response synchronization schemes. This method relies on certain geometric features of the Lorenz attractor. The designed method has been used to efficiently cryptanalyze three encryption algorithms. Finally, two other recently proposed encryption schemes are cryptanalyzed. The third part of the thesis covers the design of cryptographically secure pseudorandom sequence generators based on chaotic maps, together with the statistical tests that corroborate their randomness properties. These generators can be used in the development of stream ciphers and to meet the needs of real-time encryption. An important issue in the design of discrete chaotic ciphers is the dynamical degradation due to finite precision; however, most designers of discrete chaotic ciphers have not seriously considered this aspect. This thesis emphasizes the importance of this issue and contributes to its clarification with some initial considerations. Since the theoretical questions about the dynamical degradation of digital chaotic systems have not been fully resolved, in this work we adopt some practical solutions to avoid this theoretical difficulty.
Among the possible techniques, several solutions are proposed and evaluated, such as bit-rotation and bit-shift operations which, combined with dynamic parameter variation and cross perturbation, provide an excellent remedy for the problem of dynamical degradation. Beyond the security problems related to dynamical degradation, many cryptosystems are broken because of their careless design, not because of essential flaws of digital chaotic systems. This fact has been taken into account in this thesis, which achieves the design of cryptographically secure chaotic pseudorandom generators.
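As an illustration of the design principle behind a perturbed lagged Fibonacci generator, the building block of the Trifork family mentioned above, here is a minimal sketch. The lags, word size, and perturbation rule (a rotate-and-XOR of the freshly produced word) are illustrative assumptions, not the actual Trifork parameters.

```python
class PerturbedLFG:
    """Toy additive lagged Fibonacci generator with an output
    perturbation; parameters are illustrative, not Trifork's."""

    def __init__(self, seed_words, j=7, k=10, bits=32):
        assert j < k and len(seed_words) == k
        self.state = list(seed_words)
        self.j, self.k = j, k
        self.bits = bits
        self.mask = (1 << bits) - 1

    def next(self):
        # Classic additive LFG step: x_n = x_{n-j} + x_{n-k} (mod 2^bits)
        new = (self.state[-self.j] + self.state[-self.k]) & self.mask
        # Perturbation: XOR with a rotated copy of the new word to
        # disturb the regular lattice structure of the plain LFG
        rot = ((new << 5) | (new >> (self.bits - 5))) & self.mask
        new ^= rot >> 3
        self.state.pop(0)
        self.state.append(new)
        return new

gen = PerturbedLFG(seed_words=list(range(1, 11)))
stream = [gen.next() for _ in range(5)]
```

The perturbation step is cheap (shifts and XORs only), which is in the spirit of the real-time, mobile-oriented requirements stated in the abstract.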


Resumo:

This paper presents a new fault detection and isolation scheme for dealing with simultaneous additive and parametric faults. The new design integrates a system for additive fault detection based on (Castillo and Zufiria, 2009) and a new parametric fault detection and isolation scheme inspired by (Munz and Zufiria, 2008). It is shown that the existing schemes do not behave correctly when additive and parametric faults occur simultaneously; to solve this problem, a new integrated scheme is proposed. Computer simulation results are presented to confirm the theoretical studies.
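For intuition on why the two fault classes call for distinct treatment, here is a toy residual-based detector for a scalar plant. It is not the scheme of the cited papers; the plant model, observer gain, and fault magnitudes are all illustrative assumptions.

```python
import math

def simulate(a_true, bias, n=200, a_nom=0.9, gain=0.5):
    """Scalar plant x[k+1] = a_true*x[k] + u[k]; the observer assumes
    a_nom. `bias` models an additive sensor fault; a_true != a_nom
    models a parametric fault."""
    x = x_hat = 0.0
    residuals = []
    for k in range(n):
        u = math.sin(0.1 * k)
        y = x + bias            # measured output, possibly biased
        r = y - x_hat           # innovation residual used for detection
        residuals.append(r)
        x_hat = a_nom * x_hat + u + gain * r   # observer update
        x = a_true * x + u                     # true plant update
    return residuals

healthy    = simulate(a_true=0.9, bias=0.0)
additive   = simulate(a_true=0.9, bias=0.5)  # additive fault only
parametric = simulate(a_true=0.7, bias=0.0)  # parametric fault only
```

In this toy, an additive fault produces a persistent residual offset, while a parametric fault produces a residual correlated with the state trajectory; a detector tuned for one signature can miss or misclassify the other, which is the difficulty the integrated scheme above addresses.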


Resumo:

Irradiation with swift heavy ions (SHI), roughly defined as those having atomic masses larger than 15 and energies exceeding 1 MeV/amu, may lead to significant modification of the irradiated material in a nanometric region around the (straight) ion trajectory (latent tracks). In the case of amorphous silica, SHI irradiation generates nano-tracks of higher density than the virgin material (densification). As a result, the refractive index is increased with respect to that of the surroundings. Moreover, track overlapping leads to continuous amorphous layers that present a significant contrast with respect to the pristine substrate. We have recently demonstrated that SHI irradiation produces a large number of point defects, easily detectable by a number of experimental techniques (work presented at the parallel ICDIM conference). The mechanisms of energy transfer from SHI to the target material have their origin in the high electronic excitation induced in the solid. A number of phenomenological approaches have been employed to describe these mechanisms: Coulomb explosion, thermal spike, non-radiative exciton decay, and bond weakening. However, a detailed microscopic description is missing due to the difficulty of modeling the time evolution of the electronic excitation. In this work we have employed molecular dynamics (MD) calculations to determine whether the irradiation effects are related to the thermal phenomena described by MD (in the ps domain) or to electronic phenomena (sub-ps domain), e.g., exciton localization. We have carried out simulations of up to 100 ps with large boxes (30×30×8 nm³) using a home-modified version of MDCASK that allows us to define a central hot cylinder (ion track) from which heat flows to the surrounding cold bath (unirradiated sample). We observed that once the cylinder has cooled down, the Si and O coordination numbers are 4 and 2, respectively, as in virgin silica.
On the other hand, the density of the (cold) cylinder increases with respect to that of silica and, furthermore, the silica network ring size decreases. Both effects are in agreement with the observed densification. In conclusion, purely thermal effects do not explain the generation of point defects upon irradiation, but they do account for the silica densification.
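The hot-cylinder-in-a-cold-bath picture can be caricatured with a simple explicit finite-difference heat-diffusion model. The grid size, diffusivity, and temperatures below are arbitrary toy values, not the MDCASK setup; the point is only that the track excess heat dissipates into the bath on a short timescale.

```python
def cool_hot_cylinder(n=41, steps=500, alpha=0.2, t_hot=5000.0, t_bath=300.0):
    """Explicit 2D diffusion of a hot disk (the track cross-section)
    into a cold bath; alpha <= 0.25 keeps the scheme stable."""
    c = n // 2
    # hot disk of radius 4 cells at the center, bath everywhere else
    T = [[t_hot if (i - c) ** 2 + (j - c) ** 2 <= 16 else t_bath
          for j in range(n)] for i in range(n)]
    for _ in range(steps):
        new = [row[:] for row in T]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                # 5-point Laplacian update; boundary stays at t_bath
                new[i][j] = T[i][j] + alpha * (
                    T[i + 1][j] + T[i - 1][j]
                    + T[i][j + 1] + T[i][j - 1] - 4 * T[i][j])
        T = new
    return T

T = cool_hot_cylinder()
center = T[20][20]   # track-center temperature after cooling
```

In the MD simulations the interesting physics is what the structure looks like after this transient; the toy model only reproduces the thermal relaxation itself.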


Resumo:

Kinetic Monte Carlo (KMC) is a widely used technique to simulate the evolution of radiation damage inside solids. Despite the fact that this technique was developed several decades ago, there is no established, easily accessible simulation tool for researchers interested in this field, unlike the case of molecular dynamics or density functional theory calculations. In fact, scientists must develop their own tools or use unmaintained ones in order to perform these types of simulations. To fulfil this need, we have developed MMonCa, the Modular Monte Carlo simulator. MMonCa has been developed using professional C++ programming techniques and has been built on top of an interpreted language, yielding a modern simulator that is powerful yet flexible, robust yet customizable, and easy to use. Both non-lattice and lattice KMC modules have been developed. We present at this conference, for the first time, the MMonCa simulator. Along with other (more detailed) contributions in this meeting, the versatility of MMonCa to study a number of problems in different materials (particularly Fe and W) subject to a wide range of conditions will be shown. Regarding KMC simulations, we have studied neutron-generated cascade evolution in Fe (as a model material). Starting with a Frenkel pair distribution, we have followed the defect evolution up to 450 K. Comparison with previous simulations and experiments shows excellent agreement. Furthermore, we have studied a more complex system (He-irradiated W:C) using a previous parametrization [1]. He irradiation at 4 K followed by isochronal annealing steps up to 500 K has been simulated with MMonCa. The He energy was 400 eV or 3 keV. In the first case, no damage is associated with the He implantation, whereas in the second one, a significant Frenkel pair concentration (evolving into complex clusters) is associated with the He ions.
We have been able to explain He desorption both in the absence and in the presence of Frenkel pairs, and we have also applied MMonCa to high He doses and fluxes at elevated temperatures. He migration and trapping dominate the kinetics of He desorption. These processes will be discussed and compared to experimental results. [1] C.S. Becquart et al., J. Nucl. Mater. 403 (2010) 75.
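The core of any KMC code of this kind is the residence-time (BKL) algorithm: pick an event with probability proportional to its rate, then advance the clock by an exponentially distributed increment. A minimal sketch, with made-up event names and rates rather than anything taken from MMonCa:

```python
import math
import random

def kmc_run(rates, t_end, seed=42):
    """Residence-time KMC loop. `rates` maps event name -> rate (1/s).
    Returns per-event counts and the final simulated time."""
    rng = random.Random(seed)
    total = sum(rates.values())
    events = list(rates)
    t, counts = 0.0, {e: 0 for e in events}
    while True:
        # time to next event: exponential with mean 1/total
        t += -math.log(1.0 - rng.random()) / total
        if t > t_end:
            return counts, t
        # select one event with probability rate/total
        u = rng.random() * total
        acc = 0.0
        for e in events:
            acc += rates[e]
            if u <= acc:
                counts[e] += 1
                break

counts, t_final = kmc_run({"vacancy_hop": 100.0, "interstitial_hop": 900.0},
                          t_end=1.0)
```

With a total rate of 1000/s and one second of simulated time, roughly a thousand events fire, split about 1:9 between the two channels; a production code like MMonCa adds on top of this the defect geometry, reactions, and rate updates after each event.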


Resumo:

Helium retention in irradiated tungsten leads to swelling, pore formation, sample exfoliation and embrittlement, with deleterious consequences in many applications. In particular, tungsten is proposed for use in future nuclear fusion plants due to its good refractory properties. However, serious concerns about tungsten survivability stem from the fact that it must withstand severe irradiation conditions. In magnetic fusion as well as in inertial fusion (particularly with direct-drive targets), tungsten components will be exposed to low- and high-energy helium ion irradiation, respectively. A common feature is that the most detrimental situations will take place in pulsed mode, i.e., high-flux irradiation. There is increasing evidence of a correlation between a high helium flux and an enhancement of detrimental effects on tungsten. Nevertheless, the nature of these effects is not well understood due to the subtleties imposed by the exact temperature profile evolution, ion energy, pulse duration, existence of impurities, and simultaneous irradiation with other species. Physically based kinetic Monte Carlo is the technique of choice to simulate the evolution of radiation-induced damage inside solids on large temporal and spatial scales. We have used the recently developed code MMonCa (Modular Monte Carlo simulator), presented at this conference for the first time, to study He retention (and, in general, defect evolution) in tungsten samples irradiated with high-intensity helium pulses. The code simulates the interactions among a large variety of defects and impurities (He and C) during the irradiation stage and the subsequent annealing steps. In addition, it allows us to vary the sample temperature to follow the severe thermo-mechanical effects of the pulses. In this work we describe the helium kinetics for different irradiation conditions.
A competition is established between fast helium cluster migration and trapping at large defects, with temperature being a determining factor. In fact, the high temperatures induced by the pulses are responsible for large vacancy cluster formation and subsequent additional trapping relative to low-flux irradiation.
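The role of temperature in detrapping can be illustrated with a single-barrier Arrhenius model of first-order He release during isochronal annealing. The binding energy and attempt frequency below are illustrative placeholders, not the W:C parametrization of [1].

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def isochronal_anneal(n0=1.0, e_b=0.8, nu=1e13, dt=300.0,
                      temps=range(100, 501, 50)):
    """Trapped-He fraction after each annealing step, assuming a
    single detrapping barrier e_b (eV) and attempt frequency nu (1/s);
    each step holds temperature T for dt seconds."""
    n, remaining = n0, []
    for T in temps:
        rate = nu * math.exp(-e_b / (K_B * T))  # Arrhenius detrapping rate
        n *= math.exp(-rate * dt)               # first-order decay this step
        remaining.append((T, n))
    return remaining

profile = isochronal_anneal()
```

With these toy numbers essentially nothing is released below about 200 K, while the trap empties completely near room temperature; in the real material a spectrum of trap types (vacancy clusters, impurities) produces a sequence of such release stages.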


Resumo:

This paper reports an approach to the synchronization of chaotic circuits based on an optically programmable logic cell, in which the signals involved are fully digital. Sender and receiver receive the same input signal and, after a subsequent correlation between both outputs, an identical chaotic output is obtained in both systems. No conversion from analog to digital signals is needed. The model presented here is based on a computer simulation.
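A toy digital model can show why feeding the same input signal to two identical fully digital systems makes them produce the same chaotic output. The shift-register update and hash-style output map below are inventions for illustration only and do not model the paper's optically programmable logic cell; they merely share the key property that the state depends only on recent inputs, so any difference in initial conditions is forgotten after a transient.

```python
import random

WIDTH = 16
MASK = (1 << WIDTH) - 1

def update(state, bit):
    # keep only the last WIDTH drive bits: the initial state is
    # completely shifted out after WIDTH steps
    return ((state << 1) | bit) & MASK

def chaotic_output(state):
    # nonlinear output map (Knuth-style multiplicative hash) standing
    # in for the cell's chaotic response; purely illustrative
    x = (state * 2654435761) & 0xFFFFFFFF
    return (x ^ (x >> 13)) & MASK

rng = random.Random(1)
drive = [rng.randrange(2) for _ in range(64)]   # shared digital input

sender, receiver = 0x1234, 0xBEEF               # different initial states
for bit in drive:
    sender, receiver = update(sender, bit), update(receiver, bit)

synchronized = chaotic_output(sender) == chaotic_output(receiver)
```

After the 64-bit drive stream both registers hold the same state, so the nonlinear outputs coincide, all without any analog signal or explicit state exchange between the two systems.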