888 results for Discrete Time Branching Processes


Relevance: 100.00%

Abstract:

Although associated with adverse outcomes in other cardiopulmonary diseases, limited evidence exists on the prognostic value of anaemia in patients with acute pulmonary embolism (PE). We sought to examine the associations between anaemia and mortality and length of hospital stay in patients with PE. We evaluated 14,276 patients with a primary diagnosis of PE from 186 hospitals in Pennsylvania, USA. We used random-intercept logistic regression to assess the association between anaemia at the time of presentation and 30-day mortality and discrete-time logistic hazard models to assess the association between anaemia and time to hospital discharge, adjusting for patient (age, gender, race, insurance type, clinical and laboratory variables) and hospital (region, size, teaching status) factors. Anaemia was present in 38.7% of patients at admission. Patients with anaemia had a higher 30-day mortality (13.7% vs. 6.3%; p <0.001) and a longer length of stay (geometric mean, 6.9 vs. 6.6 days; p <0.001) compared to patients without anaemia. In multivariable analyses, anaemia remained associated with an increased odds of death (OR 1.82, 95% CI: 1.60-2.06) and a decreased odds of discharge (OR 0.85, 95% CI: 0.82-0.89). Anaemia is very common in patients presenting with PE and is independently associated with an increased short-term mortality and length of stay.
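The discrete-time hazard machinery behind the length-of-stay analysis can be illustrated with a minimal sketch: each stay is expanded into one record per hospital day (the "person-period" format), and the day-specific discharge probability is the number of discharges divided by the number of patients still in hospital. The data below are invented, and the paper's covariates and random hospital intercepts are omitted; in the full model, a logistic regression of the event indicator on day dummies plus anaemia and the other adjusters yields the discrete-time logistic hazard.

```python
# Sketch of the person-period expansion behind a discrete-time logistic
# hazard model of time to hospital discharge. Data are invented for
# illustration; the paper's covariates and random hospital intercepts
# are omitted.

def person_period(stays):
    """Expand (patient_id, los_days, discharged) records into one row
    per patient-day, with event=1 on the day of discharge."""
    rows = []
    for pid, los, discharged in stays:
        for day in range(1, los + 1):
            event = 1 if (discharged and day == los) else 0
            rows.append({"id": pid, "day": day, "event": event})
    return rows

def discrete_hazard(rows):
    """Empirical discrete-time hazard: discharges / patients at risk, per day."""
    at_risk, events = {}, {}
    for r in rows:
        at_risk[r["day"]] = at_risk.get(r["day"], 0) + 1
        events[r["day"]] = events.get(r["day"], 0) + r["event"]
    return {d: events[d] / at_risk[d] for d in sorted(at_risk)}

stays = [(1, 3, True), (2, 2, True), (3, 3, False)]  # third patient censored
rows = person_period(stays)
print(discrete_hazard(rows))  # day-specific discharge probabilities
```

Fitting a logistic model to the expanded rows, with anaemia as a covariate, would then give the adjusted odds of discharge reported in the abstract.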

Relevance: 100.00%

Abstract:

OBJECTIVE: Hierarchical modeling has been proposed as a solution to the multiple exposure problem. We estimate associations between metabolic syndrome and different components of antiretroviral therapy using both conventional and hierarchical models. STUDY DESIGN AND SETTING: We use discrete time survival analysis to estimate the association between metabolic syndrome and cumulative exposure to 16 antiretrovirals from four drug classes. We fit a hierarchical model where the drug class provides a prior model of the association between metabolic syndrome and exposure to each antiretroviral. RESULTS: One thousand two hundred and eighteen patients were followed for a median of 27 months, with 242 cases of metabolic syndrome (20%) at a rate of 7.5 cases per 100 patient-years. Metabolic syndrome was more likely to develop in patients exposed to stavudine, but was less likely to develop in those exposed to atazanavir. The estimate for exposure to atazanavir increased from a hazard ratio of 0.06 per 6 months' use in the conventional model to 0.37 in the hierarchical model (or from 0.57 to 0.81 when using spline-based covariate adjustment). CONCLUSION: These results are consistent with trials that show the disadvantage of stavudine and the advantage of atazanavir relative to other drugs in their respective classes. The hierarchical model gave more plausible results than the equivalent conventional model.
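The way a hierarchical model pulls an extreme, noisy drug estimate toward its drug-class mean can be sketched with a simple precision-weighted average. This is only the intuition behind hierarchical shrinkage, not the study's fitted model, and every number below is invented.

```python
# Illustrative precision-weighted shrinkage of per-drug estimates toward
# their drug-class mean -- the intuition behind the hierarchical model.
# Numbers are invented; this is not the study's actual fitted model.

def shrink(estimates, variances, tau2):
    """Shrink each drug's log-hazard-ratio toward the class mean.
    tau2 is the assumed between-drug variance within a class."""
    class_mean = sum(estimates) / len(estimates)
    shrunk = []
    for est, var in zip(estimates, variances):
        w = tau2 / (tau2 + var)  # weight on the drug's own (noisy) data
        shrunk.append(w * est + (1 - w) * class_mean)
    return shrunk

# Three drugs in one class (hypothetical log hazard ratios and variances);
# the outlying -2.8 mimics an extreme estimate with a large variance:
log_hr = [-2.8, 0.1, 0.3]
var = [1.5, 0.1, 0.1]
print(shrink(log_hr, var, tau2=0.25))
```

The imprecise outlier is pulled strongly toward the class mean while well-estimated drugs barely move, mirroring how the atazanavir hazard ratio moved from 0.06 in the conventional model to 0.37 in the hierarchical one.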

Relevance: 100.00%

Abstract:

This paper treats the problem of setting the inventory level and optimizing the buffer allocation of closed-loop flow lines operating under the constant-work-in-process (CONWIP) protocol. We solve a very large but simple linear program that models an entire simulation run of a closed-loop flow line in discrete time to determine a production rate estimate of the system. This approach, introduced in Helber, Schimmelpfeng, Stolletz, and Lagershausen (2011) for open flow lines with limited buffer capacities, is extended to closed-loop CONWIP flow lines. Via this method, both the CONWIP level and the buffer allocation can be optimized simultaneously. The first part of a numerical study deals with the accuracy of the method. In the second part, we focus on the relationship between the CONWIP inventory level and the short-term profit. The accuracy of the method turns out to be best for configurations that maximize production rate and/or short-term profit.
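A rough sense of the system the linear program encodes can be had from a toy discrete-time simulation of a closed-loop CONWIP line with finite buffers. The rates, capacities, and CONWIP level below are assumptions for illustration; the paper's LP formulation itself is not reproduced here.

```python
import random

# Discrete-time simulation of a closed-loop CONWIP flow line with
# finite buffers and Bernoulli service completions -- a toy stand-in
# for the simulation run that the paper's linear program encodes.
# All parameters (rates, buffer sizes, CONWIP level) are made up.

def simulate(p, buf_cap, conwip, periods, seed=1):
    """p[i]: per-period completion prob. of station i; buf_cap[i]: capacity
    of the buffer in front of station i. Returns throughput of station 0."""
    rng = random.Random(seed)
    m = len(p)
    buf = [0] * m
    # distribute the fixed CONWIP inventory over the buffers initially
    for j in range(conwip):
        buf[j % m] += 1
    done = 0
    for _ in range(periods):
        # serve stations from last to first so freed space can cascade
        for i in reversed(range(m)):
            nxt = (i + 1) % m
            if buf[i] > 0 and buf[nxt] < buf_cap[nxt] and rng.random() < p[i]:
                buf[i] -= 1
                buf[nxt] += 1
                if i == 0:
                    done += 1
    return done / periods

rate = simulate(p=[0.8, 0.7, 0.9], buf_cap=[4, 4, 4], conwip=6, periods=20000)
print(round(rate, 3))  # estimated production rate of the closed loop
```

Sweeping `conwip` and `buf_cap` in such a simulation shows the trade-off the paper optimizes: too little inventory starves stations, too much causes blocking.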

Relevance: 100.00%

Abstract:

This dissertation explores phase I dose-finding designs in cancer trials from three perspectives: the alternative Bayesian dose-escalation rules, a design based on a time-to-dose-limiting toxicity (DLT) model, and a design based on a discrete-time multi-state (DTMS) model. We list alternative Bayesian dose-escalation rules and perform a simulation study for the intra-rule and inter-rule comparisons based on two statistical models to identify the most appropriate rule under certain scenarios. We provide evidence that all the Bayesian rules outperform the traditional "3+3" design in the allocation of patients and selection of the maximum tolerated dose. The design based on a time-to-DLT model uses patients' DLT information over multiple treatment cycles in estimating the probability of DLT at the end of treatment cycle 1. Dose-escalation decisions are made whenever a cycle-1 DLT occurs, or two months after the previous checkpoint. Compared to the design based on a logistic regression model, the new design shows more safety benefits for trials in which more late-onset toxicities are expected. As a trade-off, the new design requires more patients on average. The design based on a discrete-time multi-state (DTMS) model has three important attributes: (1) toxicities are categorized over a distribution of severity levels, (2) early toxicity may inform dose escalation, and (3) no suspension is required between accrual cohorts. The proposed model accounts for the difference in the importance of the toxicity severity levels and for transitions between toxicity levels. We compare the operating characteristics of the proposed design with those from a similar design based on a fully-evaluated model that directly models the maximum observed toxicity level within the patients' entire assessment window. We describe settings in which, under comparable power, the proposed design shortens the trial.
The proposed design offers more benefit compared to the alternative design as patient accrual becomes slower.

Relevance: 100.00%

Abstract:

PURPOSE Based on a nation-wide database, this study analysed the influence of methotrexate (MTX), TNF inhibitors and a combination of the two on uveitis occurrence in JIA patients. METHODS Data from the National Paediatric Rheumatological Database in Germany were used in this study. Between 2002 and 2013, data from JIA patients were annually documented at the participating paediatric rheumatological sites. Patients with JIA disease duration of less than 12 months at initial documentation and ≥2 years of follow-up were included in this study. The impact of anti-inflammatory treatment on the occurrence of uveitis was evaluated by discrete-time survival analysis. RESULTS A total of 3,512 JIA patients (mean age 8.3±4.8 years, female 65.7%, ANA-positive 53.2%, mean age at arthritis onset 7.8±4.8 years) fulfilled the inclusion criteria. Mean total follow-up time was 3.6±2.4 years. Uveitis developed in a total of 180 patients (5.1%) within one year after arthritis onset. Uveitis onset after the first year was observed in another 251 patients (7.1%). DMARD treatment in the year before uveitis onset significantly reduced the risk for uveitis: MTX (HR 0.63, p=0.022), TNF inhibitors (HR 0.56, p<0.001) and a combination of the two (HR 0.10, p<0.001). Patients treated with MTX within the first year of JIA had an even lower uveitis risk (HR 0.29, p<0.001). CONCLUSION The use of DMARDs in JIA patients significantly reduced the risk for uveitis onset. Early MTX use within the first year of disease and the combination of MTX with a TNF inhibitor had the highest protective effect.

Relevance: 100.00%

Abstract:

The discrete-time Markov chain is commonly used in describing changes of health states for chronic diseases in a longitudinal study. Statistical inferences on comparing treatment effects or on finding determinants of disease progression usually require estimation of transition probabilities. In many situations, when the outcome data have some missing observations or the variable of interest (called a latent variable) cannot be measured directly, the estimation of transition probabilities becomes more complicated. In the latter case, a surrogate variable that is easier to access and can gauge the characteristics of the latent one is usually used for data analysis. This dissertation research proposes methods to analyze longitudinal data (1) that have categorical outcomes with missing observations or (2) that use complete or incomplete surrogate observations to analyze the categorical latent outcome. For (1), different missing mechanisms were considered for empirical studies using methods that include the EM algorithm, Monte Carlo EM and a procedure that is not a data augmentation method. For (2), the hidden Markov model with the forward-backward procedure was applied for parameter estimation. This method was also extended to cover the computation of standard errors. The proposed methods were demonstrated with a schizophrenia example. The relevance to public health, strengths and limitations, and possible future research were also discussed.
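In the complete-data case, the transition probabilities discussed above have a closed-form maximum-likelihood estimate: the fraction of observed transitions out of each state. The sketch below shows this baseline (the states and sequences are invented); the dissertation's EM and hidden-Markov methods extend it to missing observations and latent outcomes.

```python
from collections import Counter

# Maximum-likelihood estimation of discrete-time Markov chain transition
# probabilities from fully observed health-state sequences -- the
# complete-data baseline that EM-based methods extend when observations
# are missing or latent. States and sequences here are invented.

def transition_mle(sequences, states):
    """Estimate P[i][j] = (# of i->j transitions) / (# of visits to i)."""
    counts = Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] += 1
    P = {}
    for i in states:
        row_total = sum(counts[(i, j)] for j in states)
        P[i] = {j: counts[(i, j)] / row_total if row_total else 0.0
                for j in states}
    return P

seqs = [["well", "well", "ill"], ["well", "ill", "ill"], ["ill", "well"]]
P = transition_mle(seqs, ["well", "ill"])
print(P["well"])  # estimated transition probabilities out of "well"
```

With missing observations, these counts become expected counts computed in the E-step of an EM algorithm instead of direct tallies.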

Relevance: 100.00%

Abstract:

Statistical methods are developed which assess survival data for two attributes: (1) prolongation of life and (2) quality of life. Health state transition probabilities correspond to prolongation of life and are modeled as a discrete-time semi-Markov process. Imbedded within the sojourn time of a particular health state are the quality of life transitions. They reflect events which differentiate perceptions of pain and suffering over a fixed time period. Quality of life transition probabilities are derived from the assumptions of a simple Markov process. These probabilities depend on the health state currently occupied and the next health state to which a transition is made. Utilizing the two attributes, the model has the capability to estimate the distribution of expected quality-adjusted life years (in addition to the distribution of expected survival times). The expected quality of life can also be estimated within the health state sojourn time, making the assessment of utility preferences more flexible. The methods are demonstrated on a subset of follow-up data from the Beta Blocker Heart Attack Trial (BHAT). This model contains the structure necessary to make inferences when assessing a general survival problem with a two-dimensional outcome.
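The quality-adjusted life expectancy idea reduces to a minimal sketch: expected sojourn times in each health state weighted by state utilities. The states, sojourns, and utilities below are invented for illustration and are not BHAT estimates; the full model additionally distributes quality-of-life transitions within each sojourn.

```python
# Sketch of a quality-adjusted life expectancy computation: health-state
# sojourns weighted by state utilities. A toy version of combining the
# prolongation-of-life and quality-of-life attributes; the states,
# sojourn times, and utilities below are invented, not BHAT estimates.

def expected_qaly(sojourns, utilities):
    """sojourns: expected years spent in each health state;
    utilities: quality weight in [0, 1] for each state."""
    return sum(sojourns[s] * utilities[s] for s in sojourns)

sojourns = {"healthy": 8.0, "post_MI": 4.0, "heart_failure": 2.0}
utilities = {"healthy": 1.0, "post_MI": 0.8, "heart_failure": 0.5}
print(expected_qaly(sojourns, utilities))  # 8 + 3.2 + 1.0 = 12.2 QALYs
# versus an unadjusted life expectancy of 8 + 4 + 2 = 14 years
```

In the full model, the sojourn times themselves come from the semi-Markov transition structure rather than being fixed inputs.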

Relevance: 100.00%

Abstract:

Results of 40Ar-39Ar dating constrain the age of the submerged volcanic succession, part of the seaward-dipping reflector sequence of the Southeast Greenland volcanic rifted margin, recovered during Leg 163. At the 63°N drilling transect, the fully normally magnetized volcanic units at Holes 989B (Unit 1) and 990A (Units 1 and 2) are dated at 57.1 ± 1.3 Ma and 55.6 ± 0.6 Ma, respectively. This correlates with a common magnetochron, C25n. The underlying, reversely magnetized lavas at Hole 990A (Units 3-13) yield an average age of 55.8 ± 0.7 Ma and may correlate with C25r. The argon data, however, are also consistent with eruption of the lavas at Site 990 during the very earliest portion of C24. If so, the normally polarized units have to be correlated to a cryptochron (e.g., C24r-11 at ~55.57 Ma). The lavas at Holes 989B and 990A have typical oceanic compositions, implying that final plate separation between Greenland and northwest Europe took place at ~56 Ma. The age for the Hole 989B lava is younger than expected from the seismic interpretations, posing questions about the structural evolution of the margin. An age of 49.6 ± 0.2 Ma for the basaltic lava at Site 988 (~66°N) points to the importance of postbreakup tholeiitic magmatism at the rifted margin. Together with results from Leg 152, a virtually complete time frame for ~12 m.y. of pre-, syn-, and postbreakup volcanism during rifted margin evolution in Southeast Greenland can now be assembled. This time frame includes continental-type volcanism at ~61-60 Ma, synbreakup volcanism beginning at ~57 Ma, and postbreakup volcanism at ~49.6 Ma.
These discrete time windows coincide with distinct periods of tholeiitic magmatism from the onshore East Greenland Tertiary Igneous Province and are consistent with discrete mantle-melting events triggered by plume arrival (~61-60 Ma) under central Greenland, continental breakup (~57-54 Ma), and passage of the plume axis beneath the East Greenland rifted margin after breakup (~50-49 Ma), respectively.

Relevance: 100.00%

Abstract:

A knowledge of rock stress is fundamental for improving our understanding of oceanic crustal mechanisms and lithospheric dynamic processes. However, direct measurements of stress in the deep oceans, and in particular stress magnitudes, have proved to be technically difficult. Anelastic strain recovery measurements were conducted on 15 basalt core samples from Sites 765 and 766 during Leg 123. Three sets of experiments were performed: anelastic strain recovery monitoring, dynamic elastic property measurements, and thermal azimuthal anisotropy observations. In addition, a range of other tests and observations were recorded to characterize each of the samples. One common feature of the experimental results and observations is that apparently no consistent orientation trend exists, either between the different measurements on each core sample or between the same sets of measurements on the various core samples. Some evidence of correspondence between velocity anisotropy and anelastic strain recovery exists, but it is not consistent across all the core samples investigated. Thermal azimuthal anisotropy observations, although showing no conclusive correlations with the other results, were of significant interest in that they clearly exhibited anisotropic behavior. The apparent reproducibility of this behavior may point toward the possibility of rocks that retain a "memory" of their stress history, which could be exploited to derive stress orientations from archived core. Anelastic strain recovery is a relatively new technique. As its use has extended to a wider range of rock types, the literature has begun to include examples of rocks that contract with time. Strong circumstantial evidence suggests that core-sample contractions result from the slow diffusion of pore fluids from a preexisting microcrack structure, which permits the rock to deflate at a greater rate than the expansion caused by anelastic strain recovery.
Both expansions and contractions of the Leg 123 cores were observed. The basalt cores have clearly been intersected by an abundance of preexisting fractures, some of which pass right through the samples, while many others are intercepted by or terminate within the rock matrix. Thus, the behavior of the core samples will be influenced not only by the properties of the rock matrix between the fractures, but also by how these macro- and micro-scale fractures mutually interact. The strain-recovery curves recorded during Leg 123 for each of the 15 basalt core samples may reflect the result of two competing time-dependent processes: anelastic strain recovery and pore pressure recovery. Were these the only two processes influencing the gauge responses, one might expect that, given the additional information required, established theoretical models could be used to determine consistent stress orientations and reliable stress magnitudes. However, superimposed upon these competing processes is their respective interaction with the preexisting fractures that intersect each core. Evidence from our experiments and observations suggests that these fractures have a dominating influence on the characteristics of the recovery curves and that their effects are complex.

Relevance: 100.00%

Abstract:

We present a reconstruction of El Niño Southern Oscillation (ENSO) variability spanning the Medieval Climate Anomaly (MCA, A.D. 800-1300) and the Little Ice Age (LIA, A.D. 1500-1850). Changes in ENSO are estimated by comparing the spread and symmetry of δ18O values of individual specimens of the thermocline-dwelling planktonic foraminifer Pulleniatina obliquiloculata extracted from discrete time horizons of a sediment core collected in the Sulawesi Sea, at the edge of the western tropical Pacific warm pool. The spread of individual δ18O values is interpreted as a measure of the strength of both phases of ENSO, while the symmetry of the δ18O distributions is used to evaluate the relative strength/frequency of El Niño and La Niña events. In contrast to previous studies, we use robust and resistant statistics to quantify the spread and symmetry of the δ18O distributions, an approach motivated by the relatively small sample size and the presence of outliers. Furthermore, we use a pseudo-proxy approach to investigate the effects of the different paleo-environmental factors on the statistics of the δ18O distributions, which could bias the paleo-ENSO reconstruction. We find no systematic difference in the magnitude/strength of ENSO during the Northern Hemisphere MCA or LIA. However, our results suggest that ENSO during the MCA was skewed toward stronger/more frequent La Niña than El Niño, an observation consistent with the medieval megadroughts documented from sites in western North America.
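The robust spread and symmetry statistics can be illustrated with a small sketch: interquartile range for spread and Bowley's quartile-based skewness for symmetry, both resistant to outliers. The per-shell values below are invented, and the paper's exact choice of robust statistics may differ.

```python
# Robust, resistant summaries of the kind used for the individual-shell
# d18O distributions: interquartile range (spread) and Bowley's quartile
# skewness (symmetry). Sample values are invented; the paper's exact
# statistic choices may differ.

def quartiles(xs):
    s = sorted(xs)
    def q(p):
        # simple linear-interpolation quantile
        k = p * (len(s) - 1)
        f = int(k)
        return s[f] + (k - f) * (s[min(f + 1, len(s) - 1)] - s[f])
    return q(0.25), q(0.5), q(0.75)

def iqr_and_skew(xs):
    q1, q2, q3 = quartiles(xs)
    iqr = q3 - q1                         # resistant measure of "spread"
    skew = ((q3 - q2) - (q2 - q1)) / iqr  # Bowley skewness, in [-1, 1]
    return iqr, skew

sample = [-1.9, -1.8, -1.8, -1.7, -1.7, -1.6, -1.4, -1.1]  # per-shell values
print(iqr_and_skew(sample))
```

Unlike the ordinary standard deviation and moment skewness, neither statistic is dragged around by a single outlying shell, which matters at the small sample sizes typical of individual-foraminifer work.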

Relevance: 100.00%

Abstract:

A late Albian-early Cenomanian record (~103.3 to 99.0 Ma), including organic-rich deposits and a δ13C increase associated with oceanic anoxic event 1d (OAE 1d), is described from Ocean Drilling Program sites 1050 and 1052 in the subtropical Atlantic. Foraminifera are well preserved at these sites. Paleotemperatures estimated from benthic δ18O values average ~14°C for middle bathyal Site 1050 and ~17°C for upper bathyal Site 1052, whereas surface temperatures are estimated to have ranged from 26°C to 31°C at both sites. Among planktonic foraminifera, there is a steady balance of speciation and extinction with no discrete time of major faunal turnover. OAE 1d is recognized on the basis of a 1.2 per mil δ13C increase (~100.0-99.6 Ma), which is similar in age and magnitude to δ13C excursions documented in the North Atlantic and western Tethys. Organic-rich "black shales" are present throughout the studied interval at both sites. However, deposition of individual black shale beds was not synchronous between sites, and most of the black shale was deposited before the OAE 1d δ13C increase. A similar pattern is observed at the other sites where OAE 1d has been recognized, indicating that the site(s) of excess organic carbon burial that could have caused the δ13C increase has (have) yet to be found. Our findings add weight to the view that OAEs should be chemostratigraphically (δ13C) rather than lithostratigraphically defined.

Relevance: 100.00%

Abstract:

Understanding phosphorus (P) geochemistry and burial in oceanic sediments is important because of the role of P in modulating oceanic productivity on long timescales. We investigated P geochemistry in seven equatorial Pacific sites over the last 53 Ma, using a sequential extraction technique to elucidate sedimentary P composition and P diagenesis within the sediments. The dominant P-bearing component in these sediments is authigenic P (61-86% of total P), followed in order of relative dominance by iron-bound P (7-17%), organic P (3-12%), adsorbed P (2-9%), and detrital P (0-1%). Clear temporal trends in P component composition exist. Organic P decreases rapidly in younger sediments in the eastern Pacific (the only sites with high sample resolution in the younger intervals), from a mean concentration of 2.3 µmol P/g sediment in the 0-1 Ma interval to 0.4 µmol/g in the 5-6 Ma interval. Over this same time interval, decreases are also observed for iron-bound P (from 2.1 to 1.1 µmol P/g) and adsorbed P (from 1.5 to 0.7 µmol P/g). These decreases are in contrast to increases in authigenic P (from 6.0 to 9.6 µmol P/g) and no significant changes in detrital P (0.1 µmol P/g) and total P (12 µmol P/g). These temporal trends in P geochemistry suggest that (1) organic matter, the principal shuttle of P to the seafloor, is regenerated in sediments and releases associated P to interstitial waters, (2) P associated with iron-rich oxyhydroxides is released to interstitial waters upon microbial iron reduction, (3) the decrease in adsorbed P with age and depth probably indicates a similar decrease in interstitial water P concentrations, and (4) carbonate fluorapatite (CFA), or another authigenic P-bearing phase, precipitates due to the release of P from organic matter and iron oxyhydroxides and becomes an increasingly significant P sink with age and depth.
The reorganization of P between various sedimentary pools, and its eventual incorporation into CFA, has been recognized in a variety of continental margin environments, but this is the first time these processes have been revealed in deep-sea sediments. Phosphorus accumulation rate data from this study and others indicate that the global pre-anthropogenic input rate of P to the ocean (20 × 10^10 mol P/yr) is about four times higher than previously thought, supporting recent suggestions that the residence time of P in the oceans may be as short as 10,000-20,000 years.
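The residence-time claim is a one-line calculation: residence time = oceanic inventory / input rate. The input rate is the study's figure; the oceanic inventory of ~3 × 10^15 mol P used below is an assumed round literature value, included only to show that the arithmetic lands in the quoted range.

```python
# Back-of-envelope check of the residence-time claim:
#   residence time = oceanic P inventory / input rate.
# The input rate (20 x 10^10 mol P/yr) comes from the text; the ocean
# inventory of ~3 x 10^15 mol P is an ASSUMED round value for illustration.

input_rate = 20e10   # mol P / yr, pre-anthropogenic (from the study)
inventory = 3e15     # mol P in the ocean (assumed literature-scale value)

residence_time = inventory / input_rate
print(residence_time)  # years; falls inside the quoted 10,000-20,000 yr range
```

A fourfold increase in the estimated input rate shortens the inferred residence time by the same factor, which is why the revised flux implies such a short residence time.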

Relevance: 100.00%

Abstract:

This paper examines the duration of intermediate goods imports and its determinants for Japanese affiliates in China. Our estimations, using a unique parent-affiliate-transaction matched panel dataset for a discrete-time hazard model over the 2000–2006 period, reveal that products with a higher upstreamness index, differentiated goods, and goods traded under processing trade are less likely to be substituted with local procurement. Firms located in more agglomerated regions with more foreign affiliates tend to shorten the duration of imports from the home country. As for parent-firm characteristics, multinational enterprises that have many foreign affiliates or longer foreign production experience import intermediate goods for a longer duration.

Relevance: 100.00%

Abstract:

The purpose of this paper is to add to the current empirical evidence on the relevance of real options for explaining firm investment decisions in oligopolistic markets. We study an actual investment case in the Spanish mobile telephony industry: the entry into the market of a new operator, Yoigo. We analyze the option to abandon in order to show the relevance of the possibility of selling the company in an oligopolistic market where competitors are not allowed free entry. The NPV (net present value) of the new entrant is calculated as a starting point. Then, based on the general approach proposed by Copeland and Antikarov (2001), a binomial tree is used to model managerial flexibility in discrete time periods and to value the option to abandon. The strike price of the option is calculated based on incremental EBITDA margins from selling customers or merging with a competitor.
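The binomial valuation of an abandonment option follows the standard lattice rollback: project value moves up or down each period, and at every node management either keeps the project or abandons it for a salvage value. The sketch below is generic Copeland-and-Antikarov-style machinery with invented inputs, not Yoigo's actual figures.

```python
import math

# Generic binomial-tree valuation of an option to abandon, in the spirit
# of Copeland and Antikarov (2001). All inputs are illustrative numbers,
# not the actual Yoigo case figures.

def value_with_abandonment(V0, salvage, sigma, r, T, steps):
    """V0: present value of the project; salvage: strike of the abandon
    option (e.g. sale price to a competitor); sigma: volatility of
    project value; r: risk-free rate; T: horizon in years."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1 / u
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = math.exp(-r * dt)
    # terminal nodes: keep the project or abandon for the salvage value
    vals = [max(V0 * u**j * d**(steps - j), salvage) for j in range(steps + 1)]
    for step in range(steps - 1, -1, -1):
        vals = [max(disc * (p * vals[j + 1] + (1 - p) * vals[j]),
                    salvage)               # abandon whenever salvage is worth more
                for j in range(step + 1)]
    return vals[0]

flexible = value_with_abandonment(V0=100.0, salvage=80.0, sigma=0.4,
                                  r=0.03, T=3.0, steps=120)
print(round(flexible - 100.0, 2))  # value added by the option to abandon
```

The difference between the tree value and the static NPV is the value of the flexibility; raising the assumed salvage value (a better exit price) raises it further.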

Relevance: 100.00%

Abstract:

The extraordinary rise of new information technologies, the development of the Internet, electronic commerce, e-government, mobile telephony, and the coming cloud computing and storage have provided great benefits in all areas of society. Alongside these come new challenges for the protection of information, such as the loss of confidentiality and integrity of electronic documents. Cryptography plays a key role by providing the tools needed to ensure the safety of these new media, and it is imperative to intensify research in this area to meet the growing demand for new secure cryptographic techniques. The theory of chaotic nonlinear dynamical systems and the theory of cryptography together give rise to chaotic cryptography, the field of study of this thesis. The link between cryptography and chaotic systems is still the subject of intense study. The combination of apparently stochastic behavior, sensitivity to initial conditions and parameters, ergodicity, mixing, and the density of periodic points suggests that chaotic orbits resemble random sequences. This fact, together with the ability to synchronize multiple chaotic systems, first described by Pecora and Carroll, has generated an avalanche of research papers relating cryptography and chaos. Chaotic cryptography addresses two fundamental design paradigms. In the first paradigm, chaotic cryptosystems are designed in continuous time, mainly based on chaotic synchronization techniques, and are implemented with analog circuits or by computer simulation. In the second paradigm, chaotic cryptosystems are constructed in discrete time and generally do not depend on chaos synchronization techniques. The contributions of this thesis involve three aspects of chaotic cryptography. The first is a theoretical analysis of the geometric properties of some of the chaotic attractors most employed in the design of chaotic cryptosystems.
The second is the cryptanalysis of continuous chaotic cryptosystems, and the third consists of three new designs of cryptographically secure chaotic pseudorandom generators. The main accomplishments of this thesis are:

- Development of a method for determining the parameters of some double-scroll chaotic systems, including the Lorenz system and Chua's circuit. First, some geometric characteristics of the chaotic system are used to reduce the parameter search space. Next, a scheme based on the synchronization of chaotic systems is built, with the geometric properties employed as the matching criterion to determine the parameter values with the desired accuracy. The method is not affected by a moderate amount of noise in the waveform.

- Application of this method to find security flaws in continuous chaotic encryption systems. Based on these results, the chaotic ciphers proposed by Wang and Bu and those proposed by Xu and Li are cryptanalyzed. Some improvements to these cryptosystems are proposed, although of very limited value because the systems are not suitable for use in cryptography.

- Development of a method for determining the parameters of the Lorenz system when it is used in the design of a two-channel cryptosystem. The method uses the geometric properties of the Lorenz system to reduce the parameter search space, after which the parameters are accurately determined from the ciphertext. The method is applied to the cryptanalysis of an encryption scheme proposed by Jiang.

- Cryptanalysis of the chaotic encryption system proposed by Gunay et al. in 2005, based on a cellular neural network implementation of Chua's circuit, identifying several gaps in its security design.

- Design, based on theoretical results on digital chaotic systems and the cryptanalysis of several recently proposed chaotic ciphers, of a family of pseudorandom generators using finite precision.
The design is based on the coupling of several piecewise linear chaotic maps. On this basis, a new family of chaotic pseudorandom generators named Trident has been designed, tailored to the real-time encryption needs of mobile technology. The thesis also proposes another family of pseudorandom generators, called Trifork, based on a combination of perturbed lagged Fibonacci generators. This family of generators is cryptographically secure and suitable for use in real-time encryption; detailed analysis shows that the proposed pseudorandom generators provide fast encryption speed and a high level of security at the same time. The cryptanalytic part of the thesis concludes that the great majority of continuous chaotic encryption algorithms, whether implemented physically or simulated numerically, are insecure and inefficient for protecting the confidentiality of information, and that a large share of the proposed discrete chaotic cryptosystems are either known to be insecure or have not yet been attacked, so further cryptanalytic work is needed. An important issue in the design of discrete chaotic ciphers is the dynamical degradation caused by finite precision, an aspect that most designers have not considered seriously; this thesis emphasizes its importance and contributes some initial considerations toward its clarification. Since the theory of the dynamical degradation of digital chaotic systems is not yet fully resolved, several practical remedies are proposed and evaluated, such as bit-rotation and bit-shift operations, which, combined with dynamic parameter variation and cross perturbation, provide an excellent remedy for the degradation problem. Finally, beyond degradation issues, many cryptosystems are broken because of careless design rather than because of essential defects of digital chaotic systems; taking this into account, the thesis achieves the design of cryptographically secure chaotic pseudorandom generators.
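The piecewise linear maps and degradation countermeasures mentioned above can be illustrated with a minimal sketch: a skew tent map iterated in 32-bit fixed point, with a data-dependent bit rotation perturbing the orbit. This is a generic illustration, not the Trident or Trifork designs; the map parameter and rotation rule are assumptions, and nothing here is claimed to be cryptographically secure.

```python
# Minimal sketch of a pseudorandom generator built on a piecewise linear
# (skew tent) chaotic map iterated in 32-bit fixed point, with a
# bit-rotation perturbation as a dynamical-degradation countermeasure.
# This is NOT the Trident or Trifork design; parameters are assumptions,
# and the construction is illustrative, not cryptographically secure.

MASK = (1 << 32) - 1

def rotl32(x, r):
    """Rotate a 32-bit word left by r bits (the perturbation step)."""
    return ((x << r) | (x >> (32 - r))) & MASK

def skew_tent_prng(seed, n, a=0x6AAAAAAA):
    """Yield n pseudorandom 32-bit words from a fixed-point skew tent map.

    State x and breakpoint a live in [0, 2^32); the map stretches [0, a)
    and [a, 2^32) linearly onto the full range, then a data-dependent
    rotation perturbs the orbit to fight finite-precision degradation."""
    x = seed & MASK
    out = []
    for _ in range(n):
        if x < a:
            x = (x << 32) // a
        else:
            x = ((((1 << 32) - x) << 32) // ((1 << 32) - a)) & MASK
        x = rotl32(x, (x & 31) | 1)  # odd rotation amount, never zero
        out.append(x)
    return out

print(skew_tent_prng(123456789, 4))  # first four 32-bit outputs
```

In the fixed-point setting, short cycles caused by rounding are the main failure mode; the rotation step is one of the simple perturbations of the kind the thesis evaluates alongside dynamic parameter variation and cross perturbation.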