910 results for Discrete time pricing model
Abstract:
Groundwater age is a key aspect of production well vulnerability. Public drinking water supply wells typically have long screens and are expected to produce a mixture of groundwater ages. The groundwater age distributions of seven production wells of the Holten well field (Netherlands) were estimated from tritium-helium (3H/3He), krypton-85 (85Kr), and argon-39 (39Ar), using a new application of a discrete age distribution model and existing mathematical models, by minimizing the uncertainty-weighted squared differences of modeled and measured tracer concentrations. The observed tracer concentrations fitted well to a 4-bin discrete age distribution model or a dispersion model with a fraction of old groundwater. Our results show that more than 75% of the water pumped by four shallow production wells has a groundwater age of less than 20 years, and these wells are very vulnerable to recent surface contamination. More than 50% of the water pumped by three deep production wells is older than 60 years. 3H/3He samples from short-screened monitoring wells surrounding the well field constrained the age stratification in the aquifer. The discrepancy between the age stratification with depth and the groundwater age distribution of the production wells showed that the well field preferentially pumps from the shallow part of the aquifer. The discrete groundwater age distribution model appears to be a suitable approach in settings where the shape of the age distribution cannot be assumed to follow a simple mathematical model, such as a production well field where wells compete for capture area.
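The fitting idea in this abstract, minimizing an uncertainty-weighted squared misfit over a small set of discrete age bins, can be sketched as follows. This is an illustrative toy, not the authors' code: the tracer input curves, measurements, uncertainties and bin ages are all invented, and a brute-force grid search stands in for whatever optimizer was actually used.

```python
# Toy fit of a 4-bin discrete groundwater age distribution to two tracers.
from itertools import product

AGES = [5, 25, 55, 90]  # representative age (years) of each bin (assumed)

def tracer_model(age):
    """Hypothetical tracer concentrations for water of a given age."""
    tritium = 10.0 * 0.5 ** (age / 12.3)   # decay with 12.3-yr tritium half-life
    kr85 = max(0.0, 60.0 - 0.8 * age)      # crude, invented 85Kr input curve
    return [tritium, kr85]

MEASURED = [4.0, 30.0]   # invented well-head measurements for the two tracers
SIGMA = [0.5, 3.0]       # invented measurement uncertainties

def misfit(fractions):
    """Uncertainty-weighted squared difference of modelled vs measured tracers."""
    modelled = [sum(f * tracer_model(a)[t] for f, a in zip(fractions, AGES))
                for t in range(len(MEASURED))]
    return sum(((m, d, s) and ((m - d) / s) ** 2)
               for m, d, s in zip(modelled, MEASURED, SIGMA))

def fit_bins(step=0.05):
    """Brute-force search over bin fractions that sum to 1."""
    n = int(round(1 / step))
    best, best_f = float("inf"), None
    for combo in product(range(n + 1), repeat=len(AGES) - 1):
        if sum(combo) > n:
            continue
        f = [c * step for c in combo]
        f.append(1.0 - sum(f))        # last fraction closes the sum to 1
        m = misfit(f)
        if m < best:
            best, best_f = m, f
    return best_f, best

fractions, m = fit_bins()
```

A real application would replace the grid search with a constrained least-squares solver and use measured tracer input functions.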
Abstract:
PURPOSE Based on a nation-wide database, this study analysed the influence of methotrexate (MTX), TNF inhibitors and a combination of the two on uveitis occurrence in JIA patients. METHODS Data from the National Paediatric Rheumatological Database in Germany were used in this study. Between 2002 and 2013, data from JIA patients were annually documented at the participating paediatric rheumatological sites. Patients with JIA disease duration of less than 12 months at initial documentation and ≥2 years of follow-up were included in this study. The impact of anti-inflammatory treatment on the occurrence of uveitis was evaluated by discrete-time survival analysis. RESULTS A total of 3,512 JIA patients (mean age 8.3±4.8 years, female 65.7%, ANA-positive 53.2%, mean age at arthritis onset 7.8±4.8 years) fulfilled the inclusion criteria. Mean total follow-up time was 3.6±2.4 years. Uveitis developed in a total of 180 patients (5.1%) within one year after arthritis onset. Uveitis onset after the first year was observed in another 251 patients (7.1%). DMARD treatment in the year before uveitis onset significantly reduced the risk for uveitis: MTX (HR 0.63, p=0.022), TNF inhibitors (HR 0.56, p<0.001) and a combination of the two (HR 0.10, p<0.001). Patients treated with MTX within the first year of JIA had an even lower uveitis risk (HR 0.29, p<0.001). CONCLUSION The use of DMARDs in JIA patients significantly reduced the risk for uveitis onset. Early MTX use within the first year of disease and the combination of MTX with a TNF inhibitor had the highest protective effect.
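The discrete-time survival analysis mentioned in METHODS rests on a person-period data layout: each patient contributes one binary record per year at risk, and a logistic model on those records yields the reported hazard ratios. A minimal sketch of that expansion, with invented toy patients:

```python
# Person-period expansion for discrete-time survival analysis (illustrative).
def person_periods(patients):
    """Expand (id, years_followed, event_year_or_None, treated) tuples into
    one row per person-year at risk, with a binary event indicator."""
    rows = []
    for pid, years, event_year, treated in patients:
        for year in range(1, years + 1):
            event = 1 if event_year == year else 0
            rows.append({"id": pid, "year": year,
                         "treated": treated, "uveitis": event})
            if event:
                break  # no longer at risk after uveitis onset
    return rows

# Invented toy patients, not data from the registry:
patients = [
    ("p1", 4, None, 1),   # treated, never developed uveitis: 4 risk-years
    ("p2", 3, 2, 0),      # untreated, uveitis in year 2: 2 risk-years
    ("p3", 5, 5, 1),      # treated, uveitis in year 5: 5 risk-years
]
rows = person_periods(patients)
```

A logistic regression of `uveitis` on `treated` (plus time indicators) over these rows is the discrete-time analogue of the Cox model.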
Abstract:
This paper extends the existing research on real estate investment trust (REIT) operating efficiencies. We estimate stochastic-frontier, panel-data models specifying a translog cost function. The specified model updates the cost frontier with new information as it becomes available over time and can identify frontier cost improvements, returns to scale, and cost inefficiencies over time. The results disagree with most previous research in that we find no evidence of scale economies and some evidence of scale diseconomies. Moreover, we generally find smaller inefficiencies than those shown by other REIT studies. Also contrary to previous research, higher leverage is associated with greater efficiency.
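A translog cost function of the kind estimated here can be illustrated with a small evaluation sketch. The coefficients below are invented (not estimates from the paper) and only a subset of the usual second-order terms is included; the scale elasticity d ln C / d ln y is the quantity whose value above one signals scale diseconomies:

```python
# Illustrative translog cost function with one output y and two input prices.
import math

COEF = {
    "a0": 0.5, "ay": 0.8, "ayy": 0.05,
    "b1": 0.6, "b2": 0.4,          # linear homogeneity in prices: b1 + b2 = 1
    "byw1": 0.02, "byw2": -0.02,   # output-price interactions sum to 0
}

def ln_cost(y, w1, w2, c=COEF):
    """ln C = a0 + ay*ln y + 0.5*ayy*(ln y)^2 + sum_i b_i*ln w_i + interactions."""
    ly, l1, l2 = math.log(y), math.log(w1), math.log(w2)
    return (c["a0"] + c["ay"] * ly + 0.5 * c["ayy"] * ly ** 2
            + c["b1"] * l1 + c["b2"] * l2
            + c["byw1"] * ly * l1 + c["byw2"] * ly * l2)

def scale_elasticity(y, w1, w2, c=COEF):
    """d ln C / d ln y: below 1 means scale economies, above 1 diseconomies."""
    ly, l1, l2 = math.log(y), math.log(w1), math.log(w2)
    return c["ay"] + c["ayy"] * ly + c["byw1"] * l1 + c["byw2"] * l2

e = scale_elasticity(100.0, 1.0, 1.0)
```

With these invented coefficients the elasticity exceeds one at y = 100, i.e. scale diseconomies, mirroring the qualitative finding of the paper.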
Abstract:
Radiotherapy has been a method of choice in cancer treatment for a number of years. Mathematical modeling is an important tool in studying the survival behavior of any cell as well as its radiosensitivity. One particular cell under investigation is the normal T-cell, the radiosensitivity of which may be indicative of the patient's tolerance to radiation doses.

The model derived is a compound branching process with a random initial population of T-cells that is assumed to have a compound distribution. T-cells in any generation are assumed to double or die at random lengths of time. This population is assumed to undergo a random number of generations within a period of time. The model is then used to obtain an estimate for the survival probability of T-cells for the data under investigation. This estimate is derived iteratively by applying the likelihood principle. Further assessment of the validity of the model is performed by simulating a number of subjects under this model.

This study shows that there is a great deal of variation in T-cell survival from one individual to another. These variations can be observed under normal conditions as well as under radiotherapy. The findings are in agreement with a recent study and show that genetic diversity plays a role in determining the survival of T-cells.
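The double-or-die branching mechanism described above can be sketched as a simple Monte Carlo simulation. All parameter values (doubling probability, mean initial pool size, maximum generation count) are invented for illustration and do not come from the study:

```python
# Toy simulation of a branching process with a random initial T-cell population.
import math
import random

def rng_poisson(lam, rng):
    """Knuth's algorithm for a Poisson draw (adequate for small lambda)."""
    L, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= L:
            return k
        k += 1

def simulate_population(p_double, mean_initial, max_generations, rng):
    """One subject: Poisson-distributed initial pool, then a random number of
    generations in which each cell independently doubles (prob p) or dies."""
    n = rng_poisson(mean_initial, rng)
    generations = rng.randint(1, max_generations)
    for _ in range(generations):
        n = sum(2 for _ in range(n) if rng.random() < p_double)
        if n == 0:
            break  # extinction
    return n

rng = random.Random(42)
final_sizes = [simulate_population(0.55, 20, 8, rng) for _ in range(200)]
```

The spread of `final_sizes` across simulated subjects mirrors the individual-to-individual variation in T-cell survival reported in the study.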
Abstract:
Results of 40Ar-39Ar dating constrain the age of the submerged volcanic succession, part of the seaward-dipping reflector sequence of the Southeast Greenland volcanic rifted margin, recovered during Leg 163. At the 63°N drilling transect, the fully normally magnetized volcanic units at Holes 989B (Unit 1) and 990A (Units 1 and 2) are dated at 57.1 ± 1.3 Ma and 55.6 ± 0.6 Ma, respectively. This correlates with a common magnetochron, C25n. The underlying, reversely magnetized lavas at Hole 990A (Units 3-13) yield an average age of 55.8 ± 0.7 Ma and may correlate with C25r. The argon data, however, are also consistent with eruption of the lavas at Site 990 during the very earliest portion of C24. If so, the normally polarized units have to be correlated to a cryptochron (e.g., C24r-11 at ~55.57 Ma). The lavas at Holes 989B and 990A have typical oceanic compositions, implying that final plate separation between Greenland and northwest Europe took place at ~56 Ma. The age for the Hole 989B lava is younger than expected from the seismic interpretations, posing questions about the structural evolution of the margin. An age of 49.6 ± 0.2 Ma for the basaltic lava at Site 988 (~66°N) points to the importance of postbreakup tholeiitic magmatism at the rifted margin. Together with results from Leg 152, a virtually complete time frame for ~12 m.y. of pre-, syn-, and postbreakup volcanism during rifted margin evolution in Southeast Greenland can now be assembled. This time frame includes continental-type volcanism at ~61-60 Ma, synbreakup volcanism beginning at ~57 Ma, and postbreakup volcanism at ~49.6 Ma.
These discrete time windows coincide with distinct periods of tholeiitic magmatism in the onshore East Greenland Tertiary Igneous Province and are consistent with discrete mantle-melting events triggered by plume arrival (~61-60 Ma) under central Greenland, continental breakup (~57-54 Ma), and passage of the plume axis beneath the East Greenland rifted margin after breakup (~50-49 Ma), respectively.
Abstract:
Cassava (Manihot esculenta Crantz) is the world's fourth most important source of calories in the human diet and is also suitable for animal nutrition and for biofuel extraction. The objective of this article is to evaluate the behaviour of a thermal time (TT) model for characterizing the phenological phases (FF) of two cassava cultivars over a 280-day growth cycle, grown under field conditions in Corrientes, Argentina. Observations were made during the 2007/2008 and 2008/2009 seasons. TT was calculated with the residual method, using a base temperature of 16°C. The two cultivars differed in the growing degree days (GDD) accumulated to complete the phenological phases of expansion of the first leaf (00-01) and ninth leaf (00-02 H9), and of root thickening (00-04 ERR), the latter related to the leaf area index (LAI). To complete the growth cycle, the cultivars Palomita and Amarilla required 2027 and 2096 GDD, respectively. The growth and phenological development pattern of the cassava cultivars based on accumulated GDD can be used to characterize crop progress in the Corrientes environment.
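The residual thermal-time method used here reduces to accumulating daily degrees above the 16°C base until a growing-degree-day target is met. A minimal sketch with an invented daily-temperature series (only the base temperature and the ~2027 GDD cycle total for cv. Palomita come from the text):

```python
# Growing-degree-day accumulation by the residual method (illustrative).
BASE_TEMP_C = 16.0  # base temperature from the abstract

def daily_gdd(t_mean_c, base=BASE_TEMP_C):
    """Residual method: degrees above the base temperature, never negative."""
    return max(0.0, t_mean_c - base)

def days_to_phase(t_means, gdd_target):
    """Days until accumulated GDD reaches the target, or None if the
    temperature series ends first."""
    total = 0.0
    for day, t in enumerate(t_means, start=1):
        total += daily_gdd(t)
        if total >= gdd_target:
            return day
    return None

# Invented warm-season daily means cycling between 24 and 28 degC:
temps = [24.0 + (i % 5) for i in range(300)]
d = days_to_phase(temps, 2027.0)   # GDD to complete the cycle (cv. Palomita)
```

With this synthetic series (10 GDD/day on average), the 2027 GDD cycle completes around day 203, close to the observed 280-day cycle under cooler real conditions.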
Abstract:
We present a reconstruction of El Niño Southern Oscillation (ENSO) variability spanning the Medieval Climate Anomaly (MCA, A.D. 800-1300) and the Little Ice Age (LIA, A.D. 1500-1850). Changes in ENSO are estimated by comparing the spread and symmetry of δ18O values of individual specimens of the thermocline-dwelling planktonic foraminifer Pulleniatina obliquiloculata extracted from discrete time horizons of a sediment core collected in the Sulawesi Sea, at the edge of the western tropical Pacific warm pool. The spread of individual δ18O values is interpreted to be a measure of the strength of both phases of ENSO while the symmetry of the δ18O distributions is used to evaluate the relative strength/frequency of El Niño and La Niña events. In contrast to previous studies, we use robust and resistant statistics to quantify the spread and symmetry of the δ18O distributions; an approach motivated by the relatively small sample size and the presence of outliers. Furthermore, we use a pseudo-proxy approach to investigate the effects of the different paleo-environmental factors on the statistics of the δ18O distributions, which could bias the paleo-ENSO reconstruction. We find no systematic difference in the magnitude/strength of ENSO during the Northern Hemisphere MCA or LIA. However, our results suggest that ENSO during the MCA was skewed toward stronger/more frequent La Niña than El Niño, an observation consistent with the medieval megadroughts documented from sites in western North America.
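The robust-and-resistant statistics the authors advocate can be illustrated with quartile-based measures: the interquartile range for spread and Bowley's quartile skewness for symmetry. This is a generic sketch, not the authors' exact estimators, and the sample values are invented:

```python
# Resistant spread and symmetry statistics for a small sample with outliers.
def quantile(data, q):
    """Simple linear-interpolation quantile (0 <= q <= 1)."""
    s = sorted(data)
    pos = q * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    frac = pos - lo
    return s[lo] * (1 - frac) + s[hi] * frac

def iqr(data):
    """Resistant measure of spread: interquartile range."""
    return quantile(data, 0.75) - quantile(data, 0.25)

def quartile_skew(data):
    """Bowley's quartile skewness, in [-1, 1]; 0 for a symmetric sample."""
    q1, q2, q3 = (quantile(data, q) for q in (0.25, 0.5, 0.75))
    return ((q3 - q2) - (q2 - q1)) / (q3 - q1)

# Invented per-specimen d18O values (per mil) for one core horizon:
d18o = [-2.1, -1.9, -2.0, -1.8, -2.3, -2.2, -1.7, -2.6, -1.95, -2.05]
spread, skew = iqr(d18o), quartile_skew(d18o)
```

Unlike the standard deviation and moment skewness, both statistics ignore the exact magnitude of extreme values, which is the point of using them on small, outlier-prone samples.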
Abstract:
Coral reefs are characterized by enormous carbonate production of the organisms. It is known that rapid calcification is linked to photosynthesis under control of the carbonate equilibrium in seawater. We have established a model simulating the coexisting states of photosynthesis and calcification in order to examine the effects of photosynthesis and calcification on the carbonate system in seawater. Supposing that the rates of photosynthesis and calcification are proportional to the concentrations of their inorganic carbon sources, the model calculations indicate that three kinds of unique interactions of the organic and inorganic carbon productions are expected: photosynthetic enhancement of calcification, calcification which benefits photosynthesis, and carbonate dissolution induced by respiration. The first effect appears when the photosynthetic rate is more than approximately 1.2 times larger than that of calcification. This effect is caused by the increase of CO3 content and carbonate saturation degree in seawater. If photosynthesis uses molecular carbon dioxide, the second effect occurs when the calcification rate is more than approximately 1.6 times larger than that of photosynthesis. Time series model experiments indicate that photosynthesis and calcification potentially enhance each other and that organic and inorganic carbon is produced more efficiently in the coexisting system than in the isolated reactions. These coexisting effects on production enhancement of photosynthesis and calcification are expected to appear not only in the internal pool of organisms but also in a reef environment which is isolated from the outer ocean during low tide. According to measurements on the fringing-type Shiraho Reef in the Ryukyu Islands, the diurnal changes of water properties (pH, total alkalinity, total carbon dioxide and carbonate saturation degree) were conspicuous. This environment offers an appropriate condition for the appearance of these coexisting effects.
The photosynthetic enhancement of calcification and the respiratory inducement of decalcification were observed during day-time and night-time slack-water periods, respectively. These coexisting effects, especially the photosynthetic enhancement of calcification, appear to play important roles in the flourishing of coral reef communities.
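The coexisting-system idea, each rate proportional to its inorganic carbon source, with calcification releasing CO2 (2 HCO3- -> CaCO3 + CO2 + H2O) and photosynthetic CO2 drawdown shifting the equilibrium toward carbonate ions, can be caricatured in a few lines of time-stepping. All rate constants and initial concentrations are invented; this is a qualitative toy, not the authors' carbonate-chemistry model:

```python
# Toy coupled photosynthesis/calcification model (qualitative only).
def simulate(kp=0.05, kc=0.03, shift=0.5, steps=200):
    co2, co3 = 10.0, 200.0        # invented concentrations (umol/kg)
    organic = inorganic = 0.0     # cumulative organic / carbonate production
    for _ in range(steps):
        p = kp * co2              # photosynthesis rate, proportional to CO2
        g = kc * co3              # calcification rate, proportional to CO3
        co2 += g - p              # calcification releases CO2, photosynthesis uses it
        co3 += shift * p - g      # CO2 drawdown raises CO3 (equilibrium shift)
        organic += p
        inorganic += g
    return organic, inorganic

org, inorg = simulate()
```

Comparing against the isolated case (`kc=0`) shows the mutual-enhancement effect the abstract describes: with calcification recycling CO2, total organic production exceeds what photosynthesis alone can achieve from the initial CO2 pool.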
Abstract:
The main subject under research in this Thesis is the study of the dynamic behaviour of a structure using models that describe the energy distribution between the components of the structure, and the applicability of these models to incipient damage detection.

Dynamic tests are a way to extract information about the properties of a structure. If we have a model of the structure, it can be updated in order to reproduce the same response as in experimental tests, within a certain degree of accuracy. After damage occurs, the response will change to some extent; model updating to the new test conditions can help to detect changes in the structural model leading to the conclusion that damage has occurred. In this way, incipient damage detection is possible if we are able to detect small variations in the model parameters. It turns out that the high-frequency regime is highly relevant for incipient damage detection, because the response is very sensitive to small structural geometric details. The characteristic length associated with the response is proportional to the propagation speed of acoustic waves inside the solid, which for a given structure is fixed, but inversely proportional to the excitation frequency. At the same time, this fact makes the application of a Finite Element Method impractical due to the high computational cost.

A widely used model in engineering when dealing with the high-frequency response is SEA (Statistical Energy Analysis). SEA applies the energy balance to each structural component, relating their vibrational energy with the dissipated power and the power transmitted between the different components; their sum must be equal to the input power to each of them. This relationship is linear and is characterized by loss factors. The magnitudes considered in the response are averaged in geometry, frequency and time.

SEA model updating to test data is equivalent to calculating the loss factors that provide the best fit to the experimental response. This is formulated as an ill-conditioned inverse problem. In this Thesis a new updating algorithm is proposed for the study of the high-frequency response regime in terms of parameters with physical meaning, such as the internal dissipation factors, modal densities and characteristic coupling stiffnesses. The loss factors are then calculated from these parameters. The approach is developed entirely in this Thesis and is mainly based on a high modal density assumption, that is to say, that a large number of modes contributes to the response.

General SEA theory establishes the validity of the model under very restrictive assumptions on the external excitations: these should behave as local white noise. This kind of excitation is difficult to reproduce in an experimental environment. In this Thesis we show with practical cases that this assumption can be relaxed and, in particular, that results are good enough when the structure is excited with a harmonic step function. Under these assumptions, an optimization algorithm is developed for SEA model updating to a transient test when the external loads are harmonic step functions. This algorithm considers the response not only in a single frequency band, but in several frequency bands simultaneously, in order to pose a better-conditioned problem.

Finally, a damage index is defined that measures the change in the loss factor matrix when damage has occurred at a certain location in the structure. The response of a structure built of beams is simulated numerically, with damage introduced in the cross-section of one of the beams; as this is a high-frequency calculation, the simulation is implemented with a Spectral Element Method, for which it has been necessary to develop within the Thesis a spectral beam element damaged at a given section. The reported results show that damage detection is possible with this algorithm; moreover, locating the damaged component and the damaged section is also possible within a certain degree of confidence.
Abstract:
The extraordinary growth of new information technologies, the development of the Internet, electronic commerce, e-government, mobile telephony and future cloud computing and storage have provided great benefits in all areas of society. Alongside these come new challenges for the protection of information, such as the loss of confidentiality and integrity of electronic documents. Cryptography plays a key role by providing the necessary tools to ensure the safety of these new media. It is imperative to intensify research in this area to meet the growing demand for new secure cryptographic techniques. The theory of chaotic nonlinear dynamical systems and the theory of cryptography give rise to chaotic cryptography, which is the field of study of this thesis. The link between cryptography and chaotic systems is still the subject of intense study. The combination of apparently stochastic behavior, sensitivity to initial conditions and parameters, ergodicity, mixing, and the fact that periodic points are dense suggests that chaotic orbits resemble random sequences. This fact, and the ability to synchronize multiple chaotic systems, initially described by Pecora and Carroll, has generated an avalanche of research papers relating cryptography and chaos. Chaotic cryptography addresses two fundamental design paradigms. In the first paradigm, chaotic cryptosystems are designed using continuous time, mainly based on chaotic synchronization techniques; they are implemented with analog circuits or by computer simulation. In the second paradigm, chaotic cryptosystems are constructed using discrete time and generally do not depend on chaos synchronization techniques. The contributions in this thesis involve three aspects of chaotic cryptography. The first is a theoretical analysis of the geometric properties of some of the chaotic attractors most commonly employed in the design of chaotic cryptosystems.
The second is the cryptanalysis of continuous chaotic cryptosystems, and the third comprises three new designs of cryptographically secure chaotic pseudorandom generators. The main accomplishments contained in this thesis are the following. Development of a method for determining the parameters of some double-scroll chaotic systems, including the Lorenz system and Chua's circuit: first, some geometric characteristics of the chaotic system are used to reduce the search space of parameters; next, a scheme based on the synchronization of chaotic systems is built, with the geometric properties employed as the matching criterion to determine the values of the parameters with the desired accuracy. The method is not affected by a moderate amount of noise in the waveform, and it has been applied to find security flaws in continuous chaotic encryption systems. Based on these results, the chaotic ciphers proposed by Wang and Bu and those proposed by Xu and Li are cryptanalyzed. We propose some solutions to improve these cryptosystems, although of very limited scope, because such systems are not suitable for use in cryptography. Development of a method for determining the parameters of the Lorenz system when it is used in the design of a two-channel cryptosystem: the method uses the geometric properties of the Lorenz system to reduce the search space of parameters, after which the parameters are accurately determined from the ciphertext. The method has been applied to the cryptanalysis of an encryption scheme proposed by Jiang. In 2005, Gunay et al. proposed a chaotic encryption system based on a cellular neural network implementation of Chua's circuit; this scheme has been cryptanalyzed and some gaps in its security design have been identified. Finally, based on the theoretical results on digital chaotic systems and the cryptanalysis of several recently proposed chaotic ciphers, a family of pseudorandom generators has been designed using finite precision.
The design is based on the coupling of several piecewise-linear chaotic maps. Building on these results, a new family of chaotic pseudorandom generators named Trident has been designed. These generators have been specially designed to meet the needs of real-time encryption on mobile technology. This thesis then proposes another family of pseudorandom generators called Trifork, based on a combination of perturbed lagged Fibonacci generators. This family of generators is cryptographically secure and suitable for use in real-time encryption. Detailed analysis shows that the proposed pseudorandom generators can provide fast encryption speed and a high level of security at the same time.

The first part of this work presents a critical analysis of the security of chaotic cryptosystems, concluding that the great majority of continuous chaotic encryption algorithms, whether physically implemented or numerically programmed, have serious shortcomings for protecting the confidentiality of information, since they are insecure and inefficient. Likewise, a large portion of the proposed discrete chaotic cryptosystems are considered insecure, and others have not yet been attacked, so further cryptanalysis work is considered necessary. This part concludes by pointing out the main weaknesses found in the analysed cryptosystems and some recommendations for their improvement. In the second part, a cryptanalysis method is designed that allows the identification of the parameters, which in general form part of the key, of encryption algorithms based on Lorenz-like chaotic systems that use drive-response synchronization schemes. This method is based on some geometric characteristics of the Lorenz attractor and has been employed to efficiently cryptanalyse three encryption algorithms; finally, the cryptanalysis of two further recently proposed encryption schemes is carried out. The third part of the thesis covers the design of cryptographically secure pseudorandom sequence generators based on chaotic maps, together with the statistical tests that corroborate their randomness properties. These generators can be used in the development of stream ciphers and to cover the needs of real-time encryption. An important issue in the design of discrete chaotic encryption systems is the dynamical degradation due to finite precision; however, most designers of discrete chaotic ciphers have not seriously considered this aspect. This thesis emphasizes the importance of this issue and contributes to its clarification with some initial considerations. Since the theoretical questions about the dynamical degradation of digital chaotic systems have not been fully resolved, this work uses some practical solutions to avoid this theoretical difficulty. Among the possible techniques, several solutions are proposed and evaluated, such as bit-rotation and bit-shift operations which, combined with dynamic parameter variation and cross-perturbation, provide an excellent remedy for the problem of dynamical degradation. Beyond the security problems arising from dynamical degradation, many cryptosystems are broken because of careless design, not because of essential flaws of digital chaotic systems. This fact has been taken into account in this thesis, and the design of cryptographically secure chaotic pseudorandom generators has been achieved.
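The flavour of a finite-precision chaotic keystream generator with bit-rotation cross-perturbation can be sketched as follows. This toy is not Trident or Trifork and is not cryptographically secure; it only illustrates coupling two piecewise-linear maps and perturbing their states to mitigate finite-precision degradation:

```python
# Toy fixed-point chaotic keystream generator (NOT cryptographically secure).
WIDTH = 32
MASK = (1 << WIDTH) - 1

def rotl(x, r):
    """Rotate a WIDTH-bit word left by r bits."""
    r %= WIDTH
    return ((x << r) | (x >> (WIDTH - r))) & MASK

def skew_tent(x):
    """Piecewise-linear (tent-like) map on WIDTH-bit fixed-point values."""
    half = 1 << (WIDTH - 1)
    if x < half:
        return (2 * x) & MASK
    return (2 * (MASK - x)) & MASK

def keystream(seed1, seed2, nbytes):
    """Couple two maps and cross-perturb with bit rotations, in the spirit of
    the remedies for finite-precision degradation discussed above."""
    x, y = seed1 & MASK, seed2 & MASK
    out = bytearray()
    for _ in range(nbytes):
        x = skew_tent(x) ^ rotl(y, 5)    # cross-perturbation between the maps
        y = skew_tent(y) ^ rotl(x, 11)
        out.append((x ^ y) & 0xFF)       # emit one keystream byte
    return bytes(out)

ks = keystream(0x12345678, 0x9ABCDEF0, 16)
```

A real design would add rigorous statistical testing (as the thesis does) before any cryptographic use.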
Abstract:
Mobile activity recognition focuses on inferring the current activities of a mobile user by leveraging the sensory data available on today's smart phones. The state of the art in mobile activity recognition uses traditional classification learning techniques. Thus, the learning process typically involves: i) collection of labelled sensory data that is transferred to and collated in a centralised repository; ii) model building, where the classification model is trained and tested using the collected data; iii) a model deployment stage, where the learnt model is deployed on-board a mobile device for identifying activities based on new sensory data. In this paper, we demonstrate the Mobile Activity Recognition System (MARS), where for the first time the model is built and continuously updated on-board the mobile device itself using data stream mining. The advantages of the on-board approach are that it allows model personalisation and increased privacy, as the data is not sent to any external site. Furthermore, when the user or their activity profile changes, MARS adapts promptly. MARS has been implemented on the Android platform to demonstrate that it can achieve accurate mobile activity recognition. Moreover, we show in practice that MARS quickly adapts to user profile changes while at the same time being scalable and efficient in terms of consumption of device resources.
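On-board incremental learning in the spirit of MARS (though not its actual algorithm) can be sketched with a nearest-centroid classifier whose centroids are updated from the labelled sensor stream, so nothing ever leaves the device. The features and labels below are invented:

```python
# Streaming nearest-centroid activity classifier (illustrative sketch only).
class StreamingCentroids:
    def __init__(self):
        self.sums = {}    # label -> per-feature running sums
        self.counts = {}  # label -> number of samples seen

    def learn(self, features, label):
        """Fold one labelled sample into the running centroid for its label."""
        if label not in self.sums:
            self.sums[label] = [0.0] * len(features)
            self.counts[label] = 0
        self.counts[label] += 1
        for i, x in enumerate(features):
            self.sums[label][i] += x

    def predict(self, features):
        """Return the label of the nearest centroid (squared distance)."""
        best, best_d = None, float("inf")
        for label, s in self.sums.items():
            centroid = [v / self.counts[label] for v in s]
            d = sum((x - y) ** 2 for x, y in zip(features, centroid))
            if d < best_d:
                best, best_d = label, d
        return best

clf = StreamingCentroids()
for f, lab in [([0.1, 0.2], "sitting"), ([0.9, 1.1], "walking"),
               ([0.0, 0.3], "sitting"), ([1.0, 0.9], "walking")]:
    clf.learn(f, lab)       # incremental update: no batch retraining needed
pred = clf.predict([0.95, 1.0])
```

Because `learn` only updates running sums, the model adapts to a changing user profile at constant memory cost, the property the on-board approach relies on.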
Abstract:
We report conditions on a switching signal that guarantee that solutions of a switched linear system converge asymptotically to zero. These conditions apply to continuous-time, discrete-time and hybrid switched linear systems, both those having only stable subsystems and those with mixtures of stable and unstable subsystems.
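A discrete-time switched linear system x[k+1] = A_σ(k) x[k] with a dwell-time switching signal can be simulated in a few lines; with two Schur-stable subsystems, the state converges to zero here. The matrices and dwell time below are invented illustrations, not conditions from the paper:

```python
# Simulating a discrete-time switched linear system with dwell-time switching.
def matvec(A, x):
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

def norm(x):
    return (x[0] ** 2 + x[1] ** 2) ** 0.5

# Two Schur-stable subsystems (all eigenvalues inside the unit circle):
A1 = [[0.5, 0.1], [0.0, 0.4]]
A2 = [[0.3, 0.0], [0.2, 0.6]]

def simulate(x0, steps, dwell=3):
    """Alternate between A1 and A2, holding each for `dwell` steps."""
    x = list(x0)
    for k in range(steps):
        A = A1 if (k // dwell) % 2 == 0 else A2
        x = matvec(A, x)
    return x

x_final = simulate([1.0, 1.0], 60)
```

With unstable subsystems in the mix, convergence instead hinges on the kind of switching-signal conditions the paper reports, e.g. spending enough time in the stable modes.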
Abstract:
Accuracy in the custody transfer of liquid hydrocarbons is mandatory because it has a great economic impact. By far the most accurate meter is the positive displacement (PD) meter. Increasing that accuracy may adversely affect the cost of the custody transfer, unless simple models are developed to lower the cost, which is the purpose of this work. A PD meter consists of a rotating chamber of fixed volume. For each turn a pulse is counted; hence the measured volume is the number of pulses times the volume of the chamber. This does not coincide with the real volume, so corrections have to be made, all of which are grouped into a meter factor. Chief among these corrections is the slippage flow. By solving the Navier-Stokes equations one can find an analytical expression for this flow. It is neither easy nor cheap to apply the slippage correction straightforwardly; we have therefore built a simple model in which slippage is treated as a single parameter with the dimension of time. The model has been tested on several PD meters. In our careful experiments, the meter factor grows with temperature at a constant rate of 8×10⁻⁵ °C⁻¹.
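The meter-factor correction can be sketched as follows. Only the temperature slope 8×10⁻⁵ °C⁻¹ comes from the abstract; the chamber volume, reference meter factor and reference temperature are invented for illustration:

```python
# PD-meter volume correction with a temperature-dependent meter factor.
CHAMBER_VOL_L = 0.5   # litres per pulse (assumed chamber volume)
MF_REF = 1.0020       # meter factor at the reference temperature (assumed)
T_REF_C = 20.0        # reference temperature, degC (assumed)
MF_SLOPE = 8e-5       # 1/degC, the growth rate reported in the abstract

def meter_factor(temp_c):
    """Linear temperature model for the meter factor."""
    return MF_REF + MF_SLOPE * (temp_c - T_REF_C)

def corrected_volume(pulses, temp_c):
    """Indicated volume (pulses x chamber volume) times the meter factor."""
    indicated = pulses * CHAMBER_VOL_L
    return indicated * meter_factor(temp_c)

v = corrected_volume(10_000, 25.0)   # corrected litres at 25 degC
```

In practice the meter factor is established by proving against a reference volume; the linear temperature term then keeps it valid as operating conditions drift.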