17 results for Lacan, Jacques, 1901-1981 - Contributions in psychoanalysis
at Universidad Politécnica de Madrid
Abstract:
In the last two decades, there has been a significant increase in speech technology research in Spain, mainly due to higher levels of funding from European, Spanish and local institutions and to a growing interest in these technologies for developing new services and applications. This paper reviews the main areas of speech technology addressed by research groups in Spain, their main contributions in recent years and their current focus of interest. The review is organized into five main areas: audio processing including speech, speaker characterization, speech and language processing, text-to-speech conversion and spoken language applications. The paper also introduces the Spanish Network of Speech Technologies (RTTH, Red Temática en Tecnologías del Habla), the research network that brings together almost all the researchers working in this area, presenting some figures, its objectives and its main activities in recent years.
Abstract:
In this paper, a system for applying precision agriculture techniques is described. The application is based on the deployment of a team of unmanned aerial vehicles that take georeferenced pictures, from which a full map is created by mosaicking during postprocessing. The main contribution of this work is the practical experimentation with an integrated tool. Contributions in different fields are also reported. Among them is a new one-phase automatic task-partitioning manager based on negotiation among the aerial vehicles, which takes into account their state and capabilities. Once the individual tasks are assigned, an optimal path-planning algorithm determines the best path for each vehicle to follow. A robust flight control based on a control law that improves the maneuverability of the quadrotors has also been designed. A set of field tests was performed in order to analyze all the capabilities of the system, from task negotiation to final performance. These experiments also allowed testing control robustness under different weather conditions.
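As an illustration of the mosaicking step described above, the sketch below stitches two overlapping aerial images with a homography estimated from ORB feature matches (OpenCV). This is a generic, minimal example, not the pipeline actually used by the authors; the function name and all parameters are illustrative assumptions.

```python
# Minimal mosaicking sketch (not the authors' pipeline): warp one aerial image
# onto another using a homography estimated from ORB feature matches.
import cv2
import numpy as np

def stitch_pair(img_a, img_b):
    orb = cv2.ORB_create(4000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:500]

    # Points in img_b mapped onto the frame of img_a.
    src = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = img_a.shape[:2]
    mosaic = cv2.warpPerspective(img_b, H, (2 * w, 2 * h))
    mosaic[0:h, 0:w] = img_a  # keep the reference image on top
    return mosaic
```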
Abstract:
The popularity of the MapReduce programming model has increased interest in the research community in its improvement. Among the possible directions, fault tolerance, and concretely the failure detection issue, appears to be a crucial one that has not yet reached a satisfactory level. Motivated by this, I devoted my main research during this period to a prototype system architecture for a MapReduce framework with a new failure detection service, covering both the analytical (theoretical) and the implementation part. I am confident that this work can lead the way to further contributions on failure detection in NoSQL application frameworks and in cloud storage systems in general.
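The abstract does not detail the failure detection service itself; as a rough, hypothetical sketch of the general idea, a timeout-based heartbeat detector such as the one below is one common way to flag suspected workers in a MapReduce-like cluster. The class name, method names and timeout value are all assumptions.

```python
# Hypothetical heartbeat-based failure detector (illustrative only, not the
# service described in the abstract).
import time

class HeartbeatFailureDetector:
    def __init__(self, timeout_s=10.0):
        self.timeout_s = timeout_s
        self.last_seen = {}                 # worker_id -> time of last heartbeat

    def heartbeat(self, worker_id):
        """Record a heartbeat received from a worker."""
        self.last_seen[worker_id] = time.monotonic()

    def suspected(self):
        """Return the workers whose heartbeats are overdue."""
        now = time.monotonic()
        return [w for w, t in self.last_seen.items() if now - t > self.timeout_s]
```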
Abstract:
Multi-view microscopy techniques such as Light-Sheet Fluorescence Microscopy (LSFM) are powerful tools for 3D + time studies of live embryos in developmental biology. The sample is imaged from several points of view, acquiring a set of 3D views that are then combined, or fused, to overcome their individual limitations. View fusion is still an open problem despite recent contributions in the field. We developed a wavelet-based multi-view fusion method that, thanks to the properties of the wavelet decomposition, is able to combine the complementary directional information from all available views into a single volume. Our method is demonstrated on LSFM acquisitions from live sea urchin and zebrafish embryos. The fusion results show improved overall contrast and detail when compared with any of the acquired volumes. The proposed method does not require knowledge of the system's point spread function (PSF) and performs better than other existing PSF-independent fusion methods.
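A minimal sketch of a max-modulus wavelet fusion rule on two registered views (shown in 2D for brevity, using PyWavelets). The published method operates on 3D LSFM volumes and uses its own, more elaborate combination of directional information, so the code below only illustrates the general principle.

```python
# Sketch of wavelet-domain fusion of two registered views: keep, at each
# position and subband, the coefficient with the largest magnitude.
import numpy as np
import pywt

def fuse_views(view_a, view_b, wavelet="db4", levels=3):
    ca = pywt.wavedec2(view_a, wavelet, level=levels)
    cb = pywt.wavedec2(view_b, wavelet, level=levels)

    # Approximation band, then the detail bands of every decomposition level.
    fused = [np.where(np.abs(ca[0]) >= np.abs(cb[0]), ca[0], cb[0])]
    for details_a, details_b in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(details_a, details_b)))
    return pywt.waverec2(fused, wavelet)
```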
Abstract:
The extreme runup is a key parameter for shore risk analysis, in which an accurate, quantitative estimation of the upper limit reached by waves is essential. Runup can be better approximated by splitting it into the setup and swash semi-amplitude contributions. In experimental studies, recording the setup is difficult because of infragravity motions within the surf zone; hence, it would be desirable to measure the setup with available methodologies and devices. This research evaluates the convenience of estimating the setup directly as the mean level in the swash zone for experimental runup analysis through a physical model. A mobile-bed physical model was set up in a wave flume at the Laboratory for Maritime Experimentation of CEDEX. The wave flume is 36 metres long, 6.5 metres wide and 1.3 metres high. The physical model was designed to cover a reasonable range of parameters: three different slopes (1/50, 1/30 and 1/20), two sand grain sizes (D50 = 0.12 mm and 0.70 mm) and a deep-water Iribarren number (ξ0) ranging from 0.1 to 0.6. The best available formulations were chosen to estimate a theoretical setup for the physical model. Once the theoretical setup had been obtained, it was compared with the setup estimated directly as the mean level of the swash oscillation, as usually considered in extreme runup analyses. A good correlation was found between the theoretical and the time-averaged setup, and a relation between them is proposed. Extreme runup is then analysed as the sum of the setup and the swash semi-amplitude, and an equation is proposed that could be applied on reflective beaches with a strong dependence on the foreshore slope.
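For reference, the decomposition used in the abstract and the deep-water Iribarren number can be written as below (standard definitions; β is the foreshore slope, H0 and L0 the deep-water wave height and length). The coefficients of the relation actually proposed in the study are not reproduced here.

```latex
% Runup as set-up plus swash semi-amplitude, and the deep-water Iribarren number.
R_u = \langle \eta \rangle + \frac{S}{2},
\qquad
\xi_0 = \frac{\tan \beta}{\sqrt{H_0 / L_0}}
```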
Abstract:
Reproducibility of published results is a cornerstone of scientific publishing and progress. Therefore, the scientific community has been encouraging authors and editors to publish their contributions in a verifiable and understandable way. Efforts such as the Reproducibility Initiative [1], or the Reproducibility Projects in the Biology [2] and Psychology [3] domains, have been defining standards and patterns to assess whether an experimental result is reproducible.
Abstract:
This article presents a probabilistic method for vehicle detection and tracking through the analysis of monocular images obtained from a vehicle-mounted camera. The method is designed to address the main shortcomings of traditional particle filtering approaches, namely Bayesian methods based on importance sampling, for use in traffic environments. These methods do not scale well when the dimensionality of the feature space grows, which creates significant limitations when tracking multiple objects. Alternatively, the proposed method is based on a Markov chain Monte Carlo (MCMC) approach, which allows efficient sampling of the feature space. The method involves important contributions in both the motion and the observation models of the tracker. Indeed, as opposed to particle filter-based tracking methods in the literature, which typically resort to observation models based on appearance or template matching, in this study a likelihood model that combines appearance analysis with information from motion parallax is introduced. Regarding the motion model, a new interaction treatment is defined based on Markov random fields (MRF) that allows for the handling of possible inter-dependencies in vehicle trajectories. As for vehicle detection, the method relies on a supervised classification stage using support vector machines (SVM). The contribution in this field is twofold. First, a new descriptor based on the analysis of gradient orientations in concentric rectangles is defined. This descriptor involves a much smaller feature space compared to traditional descriptors, which are too costly for real-time applications. Second, a new vehicle image database is generated to train the SVM and made public. The proposed vehicle detection and tracking method is proven to outperform existing methods and to successfully handle challenging situations in the test sequences.
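A hypothetical sketch of a descriptor built from gradient orientations accumulated over concentric rectangles, in the spirit of the one described above; the actual number of rings, orientation bins and normalization used by the authors is not specified here, so every parameter below is an assumption.

```python
# Illustrative descriptor: gradient-orientation histograms accumulated over
# concentric rectangular rings of a grayscale patch.
import numpy as np
import cv2

def concentric_rect_descriptor(patch, n_rings=4, n_bins=8):
    gx = cv2.Sobel(patch, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(patch, cv2.CV_32F, 0, 1)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientation in [0, pi)

    h, w = patch.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    # Chebyshev-like distance normalized to [0, 1) selects the rectangular ring.
    norm_dist = np.maximum(np.abs(yy - cy) / (h / 2.0), np.abs(xx - cx) / (w / 2.0))
    ring = np.minimum((norm_dist * n_rings).astype(int), n_rings - 1)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)

    desc = np.zeros((n_rings, n_bins), dtype=np.float32)
    np.add.at(desc, (ring, bins), mag)                # magnitude-weighted voting
    return (desc / (np.linalg.norm(desc) + 1e-9)).ravel()
```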
Abstract:
The extraordinary growth of new information technologies, the development of the Internet, electronic commerce, e-government, mobile telephony and future cloud computing and storage have provided great benefits in all areas of society. Alongside these benefits, there are new challenges for the protection of information, such as the loss of confidentiality and integrity of electronic documents. Cryptography plays a key role by providing the tools necessary to ensure the security of these new media, and it is imperative to intensify research in this area to meet the growing demand for new, secure cryptographic techniques. The theory of chaotic nonlinear dynamical systems and the theory of cryptography give rise to chaotic cryptography, which is the field of study of this thesis. The link between cryptography and chaotic systems is still the subject of intense study. The combination of apparently stochastic behavior, the properties of sensitivity to initial conditions and parameters, ergodicity, mixing, and the fact that periodic points are dense suggests that chaotic orbits resemble random sequences. This fact, and the ability to synchronize multiple chaotic systems, initially described by Pecora and Carroll, has generated an avalanche of research papers relating cryptography and chaos. Chaotic cryptography addresses two fundamental design paradigms. In the first paradigm, chaotic cryptosystems are designed using continuous time, mainly based on chaotic synchronization techniques; they are implemented with analog circuits or by computer simulation. In the second paradigm, chaotic cryptosystems are constructed using discrete time and generally do not depend on chaos synchronization techniques. The contributions of this thesis involve three aspects of chaotic cryptography. The first is a theoretical analysis of the geometric properties of some of the chaotic attractors most frequently employed in the design of chaotic cryptosystems. The second is the cryptanalysis of continuous chaotic cryptosystems, and the third comprises three new designs of cryptographically secure chaotic pseudorandom generators. The main accomplishments contained in this thesis are the following. Development of a method for determining the parameters of some double-scroll chaotic systems, including the Lorenz system and Chua's circuit. First, some geometric characteristics of the chaotic system are used to reduce the parameter search space. Next, a scheme based on the synchronization of chaotic systems is built, and the geometric properties are employed as a matching criterion to determine the parameter values with the desired accuracy. The method is not affected by a moderate amount of noise in the waveform and has been applied to find security flaws in continuous chaotic encryption systems. Based on these results, the chaotic ciphers proposed by Wang and Bu and those proposed by Xu and Li are cryptanalyzed. We propose some solutions to improve these cryptosystems, although of very limited scope, because such systems are not suitable for use in cryptography. Development of a method for determining the parameters of the Lorenz system when it is used in the design of a two-channel cryptosystem. The method uses the geometric properties of the Lorenz system to reduce the parameter search space, after which the parameters are accurately determined from the ciphertext. The method has been applied to the cryptanalysis of an encryption scheme proposed by Jiang. In 2005, Gunay et al.
proposed a chaotic encryption system based on a cellular neural network implementation of Chua's circuit. This scheme has been cryptanalyzed and some gaps in its security design have been identified. Based on the theoretical results on digital chaotic systems and the cryptanalysis of several recently proposed chaotic ciphers, a family of pseudorandom generators has been designed using finite precision. The design is based on the coupling of several piecewise linear chaotic maps. Building on the above results, a new family of chaotic pseudorandom generators named Trident has been designed. These generators have been specially designed to meet the real-time encryption needs of mobile technology. Following the same results, this thesis proposes another family of pseudorandom generators called Trifork. These generators are based on a combination of perturbed lagged Fibonacci generators. This family of generators is cryptographically secure and suitable for use in real-time encryption. Detailed analysis shows that the proposed pseudorandom generators provide fast encryption speed and a high level of security at the same time. The extraordinary rise of new information technologies, the development of the Internet, electronic commerce, e-government, mobile telephony and future cloud computing and storage have brought great benefits to all areas of society. Alongside them come new challenges for the protection of information, such as identity theft and the loss of confidentiality and integrity of electronic documents. Cryptography plays a fundamental role by providing the tools needed to guarantee the security of these new media, but it is imperative to intensify research in this field in order to meet the growing demand for new, secure cryptographic techniques. The theory of nonlinear dynamical systems together with cryptography gives rise to chaotic cryptography, which is the field of study of this thesis. The link between cryptography and chaotic systems is still the subject of intense study. The combination of apparently stochastic behaviour, sensitivity to initial conditions and parameters, ergodicity, mixing, and the density of periodic points makes chaotic orbits resemble random sequences, which suggests their potential use for masking messages. This fact, together with the possibility of synchronizing several chaotic systems, first described in the works of Pecora and Carroll, has generated an avalanche of research proposing many ideas on how to build secure communication systems, thus relating cryptography and chaos. Chaotic cryptography addresses two fundamental design paradigms. In the first, chaotic cryptosystems are designed using analog circuits, mainly based on chaotic synchronization techniques; in the second, chaotic cryptosystems are built on discrete circuits or computers and generally do not depend on chaos synchronization techniques. Our contribution in this thesis involves three aspects of chaotic encryption.
First, a theoretical analysis is carried out of the geometric properties of some of the chaotic systems most frequently employed in the design of continuous chaotic cryptosystems; second, continuous chaotic ciphers are cryptanalyzed on the basis of that analysis; and, finally, three new designs of fast, cryptographically secure pseudorandom sequence generators are proposed. The first part of this dissertation presents a critical analysis of the security of chaotic cryptosystems, concluding that the vast majority of continuous chaotic encryption algorithms, whether implemented physically or programmed numerically, have serious shortcomings when it comes to protecting the confidentiality of information, since they are insecure and inefficient. Likewise, a large proportion of the proposed discrete chaotic cryptosystems are considered insecure, and others have not yet been attacked, so further cryptanalysis work is considered necessary. This part concludes by pointing out the main weaknesses found in the analyzed cryptosystems and some recommendations for their improvement. In the second part, a cryptanalysis method is designed that allows the identification of the parameters, which in general form part of the key, of encryption algorithms based on Lorenz-like chaotic systems that use driver-response synchronization schemes. This method is based on some geometric characteristics of the Lorenz attractor and has been used to efficiently cryptanalyze three encryption algorithms. Finally, the cryptanalysis of two other recently proposed encryption schemes is carried out. The third part of the thesis covers the design of cryptographically secure pseudorandom sequence generators based on chaotic maps, together with the statistical tests that corroborate their randomness properties. These generators can be used in the development of stream ciphers and to cover the needs of real-time encryption. An important issue in the design of discrete chaotic ciphers is the dynamical degradation due to finite precision; however, most designers of discrete chaotic ciphers have not considered this aspect seriously. This thesis emphasizes the importance of this issue and contributes to its clarification with some initial considerations. Since the theoretical questions about the dynamical degradation of digital chaotic systems have not been fully resolved, in this work we use some practical solutions to avoid this theoretical difficulty. Among the possible techniques, several solutions are proposed and evaluated, such as bit rotation and bit shift operations, which, combined with dynamic parameter variation and cross perturbation, provide an excellent remedy for the problem of dynamical degradation. Besides the security problems related to dynamical degradation, many cryptosystems are broken because of their careless design, not because of essential defects of digital chaotic systems. This fact has been taken into account in this thesis, and the design of cryptographically secure chaotic pseudorandom generators has been achieved.
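As a purely illustrative sketch related to the generators mentioned above, the class below implements a single additive lagged Fibonacci generator with an optional perturbation input. The actual Trident and Trifork designs combine several such components with specific lags, word sizes and perturbation rules that are not reproduced here; all parameter values below are assumptions.

```python
# Illustrative additive lagged Fibonacci generator with an optional
# perturbation word folded into each step (not the Trident/Trifork design).
class LaggedFibonacci:
    def __init__(self, seed_words, short_lag=24, long_lag=55, word_bits=32):
        assert len(seed_words) >= long_lag, "need at least long_lag seed words"
        self.state = list(seed_words[:long_lag])
        self.j, self.k = short_lag, long_lag
        self.mask = (1 << word_bits) - 1
        self.i = 0

    def next_word(self, perturbation=0):
        # S_i = (S_{i-j} + S_{i-k} + perturbation) mod 2^word_bits
        n = len(self.state)
        new = (self.state[(self.i - self.j) % n]
               + self.state[(self.i - self.k) % n]
               + perturbation) & self.mask
        self.state[self.i % n] = new
        self.i += 1
        return new
```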
Abstract:
Kinetic Monte Carlo (KMC) is a widely used technique to simulate the evolution of radiation damage inside solids. Despite the fact that this technique was developed several decades ago, there is no established, easily accessible simulation tool for researchers interested in this field, unlike the case of molecular dynamics or density functional theory calculations. In fact, scientists must develop their own tools or use unmaintained ones in order to perform these types of simulations. To fulfil this need, we have developed MMonCa, the Modular Monte Carlo simulator. MMonCa has been developed using professional C++ programming techniques and has been built on top of an interpreted language in order to obtain a powerful yet flexible, robust yet customizable, and easily accessible modern simulator. Both non-lattice and lattice KMC modules have been developed. We present in this conference, for the first time, the MMonCa simulator. Along with other (more detailed) contributions in this meeting, the versatility of MMonCa for studying a number of problems in different materials (particularly Fe and W) subject to a wide range of conditions will be shown. Regarding KMC simulations, we have studied neutron-generated cascade evolution in Fe (as a model material). Starting with a Frenkel pair distribution, we have followed the defect evolution up to 450 K. Comparison with previous simulations and experiments shows excellent agreement. Furthermore, we have studied a more complex system (He-irradiated W:C) using a previous parametrization [1]. He irradiation at 4 K followed by isochronal annealing steps up to 500 K has been simulated with MMonCa. The He energy was 400 eV or 3 keV. In the first case, no damage is associated with the He implantation, whereas in the second one a significant Frenkel pair concentration (evolving into complex clusters) is associated with the He ions. We have been able to explain He desorption both in the absence and in the presence of Frenkel pairs, and we have also applied MMonCa to high He doses and fluxes at elevated temperatures. He migration and trapping dominate the kinetics of He desorption. These processes will be discussed and compared to experimental results. [1] C.S. Becquart et al., J. Nucl. Mater. 403 (2010) 75
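As a generic illustration of the kinetic Monte Carlo technique underlying MMonCa (not MMonCa's actual event catalogue or API), one residence-time (BKL) step can be sketched as follows; `events` is a hypothetical list of (rate, callback) pairs.

```python
# One residence-time (BKL) kinetic Monte Carlo step: pick an event with
# probability proportional to its rate, execute it, and advance the clock.
import math
import random

def kmc_step(events):
    """events: list of (rate, apply_event) pairs. Returns the time increment."""
    total = sum(rate for rate, _ in events)
    r = random.random() * total
    acc = 0.0
    for rate, apply_event in events:
        acc += rate
        if r < acc:
            apply_event()
            break
    # Exponentially distributed waiting time for a Poisson process of rate `total`.
    return -math.log(1.0 - random.random()) / total
```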
Abstract:
A recent study by the authors points to Charged Particle Drag (CPD) as an effect worth revisiting in the LAGEOS non-gravitational perturbations problem. Such perturbations must account for dynamical contributions on the order of pm s−2. The simulated effect takes into account: (i) spatial and temporal variations of the plasma parameters (temperature and concentration of the species), (ii) spacecraft potential variations caused both by eclipse passages and by variations in the parameters mentioned above, and (iii) solar and geomagnetic conditions. Furthermore, recent theoretical improvements concerning scattering drag overcome previous limitations, allowing for a complete formulation of this effect. For each satellite, the lifetime CPD instantaneous acceleration is computed. The plasma parameters have been obtained from the Sheffield Coupled Thermosphere-Ionosphere-Plasmasphere (SCTIP) semi-empirical model (up to the polar region), as well as from analytical/empirical approximations based on spacecraft measurements for the auroral and polar regions. Results show that maximum amplitudes for LAGEOS-I are larger than those for LAGEOS-II: −85 pm s−2 and −70 pm s−2, respectively. This is due to the almost (magnetically) polar orbit configuration of the former, which produces larger combinations of plasma parameter values. High solar activity has a strong impact on the resulting LAGEOS accelerations: it yields a clear modulation of the resulting acceleration, with maximum amplitudes up to a factor of 10 when comparing low and high activity periods. On the other hand, the impact of geomagnetic activity results in a reduction of the effect itself, probably due to a decrease in the hydrogen concentration during high energy input periods. The acceleration results will be used in a refined orbit computation in a subsequent investigation.
Abstract:
There is currently extensive knowledge on the characterization of hydraulic fills, both static and dynamic. However, more general and global studies of these materials, closely related to their uses and main problems in harbour and mining works, are scarce in the literature. The semi-empirical procedures for evaluating the silo effect in the cells of harbour caissons, as well as the liquefaction potential of these soils under sudden loads and earthquakes, are based on studies in which the influence of the governing parameters is largely unknown, leading to results with considerable scatter. This is the case, for example, of the damage reported by the Port of Barcelona research group: the failure of harbour caissons in the Port of Barcelona in 2007. For these and other reasons, it was decided to develop an analysis for the evaluation of these problems by proposing a combined theoretical-numerical and experimental methodology. The theoretical-numerical approach developed in this study focuses on establishing the theoretical framework and the numerical tools capable of meeting the challenges posed by these problems. The complexity of the problem stems from several fundamental aspects: the nonlinear behaviour of loose or poorly confined soils during self-weight consolidation; their high liquefaction potential; the hydromechanical characterization of the contacts between structures and soil (preferential paths for water flow and lateral consolidation); and the fact that the problems start from a practically null effective stress state. Regarding the experimental approach, a very simple laboratory methodology has been proposed for the hydromechanical characterization of the soil and the interfaces, without the need for complex laboratory devices or excessively complicated procedures. This work therefore includes a brief review of the aspects related to the execution of hydraulic fills, their main uses and the associated phenomena, in order to establish a starting point for the present study. This review spans from the evolution of the traditional consolidation equations (Terzaghi, 1943) (Gibson, English & Hussey, 1967) and solution methodologies (Townsend & McVay, 1990) (Fredlund, Donaldson & Gitirana, 2009) to the contributions on the silo effect (Janssen, 1895) (Ravenet, 1977) and on the liquefaction phenomenon (Casagrande, 1936) (Castro, 1969) (Been & Jefferies, 1985) (Pastor & Zienkiewicz, 1986). For this study, a code based on the finite element method (FEM) has been developed exclusively, using MATLAB. To this end, a theoretical framework (Biot, 1941) (Zienkiewicz & Shiomi, 1984) (Segura & Carol, 2004) and a numerical one (Zienkiewicz & Taylor, 1989) (Huerta & Rodríguez, 1992) (Segura & Carol, 2008) have been established to solve multidimensional consolidation problems with frictional boundary conditions, together with the corresponding constitutive models (Pastor & Zienkiewicz, 1986) (Fu & Liu, 2011). Likewise, an experimental methodology has been developed through a series of laboratory tests for the calibration of the constitutive models and the characterization of index and flow parameters (Castro, 1969) (Bahda, 1997) (Been & Jefferies, 2006).
Hostun sand has been used as the reference material (hydraulic fill). As a main contribution, a series of new direct shear tests is included for the hydromechanical characterization of the soil-concrete structure interface, for different types of formwork and roughness. Finally, a series of specific algorithms has been designed to solve the set of governing differential equations that define this problem. These algorithms are of great importance for handling the transient processing of the consolidation of hydraulic fills, and of other effects related to their placement in caisson cells, such as the silo effect and self-induced liquefaction. To this end, a 2D axisymmetric model has been set up, with a coupled u-p formulation for continuum elements and zero-thickness interface elements, which aims to simulate the conditions of these hydraulic fills when they are placed in harbour cells. This case study clearly refers to granular materials in a very loose initial state with very low effective stresses, that is, with practically all the excess pore pressures generated by the self-weight consolidation process. All of this requires specific numerical algorithms, as well as particular constitutive models, for both the continuum and the interface elements. For the simulation of the different fill placement procedures, the algorithms had to be modified so that the placement of these materials could be represented numerically, and so that the results for the different procedures could be compared. The continuous updating of the soil parameters also makes this algorithm a powerful tool that yields an interesting set of variable profiles, such as density, void ratio, solid fraction, excess pore pressure, and stresses and strains. In short, the model provides a better understanding of the silo effect, a term commonly used to describe the transient phenomenon of the lateral pressure gradient in silo-shaped retaining structures. Finally, a series of comparisons between the model results and different studies from the technical literature is included, both for the self-weight consolidation phenomenon (Fredlund, Donaldson & Gitirana, 2009) and for the silo effect (Puertos del Estado, 2006; Eurocode, 2006; Japan Tech. Stands., 2009; etc.). To conclude, the design of a settling column prototype with frictional walls is proposed as the main future line of research. Wide research is nowadays available on the characterization of hydraulic fills in terms of either static or dynamic behavior. However, reported comprehensive analyses of these soils when meant for port or mining works are scarce. Moreover, the semi-empirical procedures for assessing the silo effect on cells in floating caissons, and the liquefaction potential of these soils under sudden loads or earthquakes, are based on studies where the underlying influence parameters are not well known, yielding results with significant scatter. This is the case, for instance, of hazards reported by the Barcelona Liquefaction working group, with the failure of harbor walls in 2007.
By virtue of this, a comprehensive approach has been undertaken to evaluate the problem through a proposed numerical and laboratory methodology. Within the theoretical and numerical scope, the study focuses on the numerical tools capable of facing the different challenges of this problem. The complexity is manifold: the highly non-linear behavior of consolidating soft soils; their potentially liquefiable nature; the significance of the hydromechanics of the soil-structure contact, with discontinuities acting as preferential paths for water flow; and “negligible” effective stresses as initial conditions. Within the experimental scope, a straightforward laboratory methodology is introduced for the hydromechanical characterization of the soil and the interface without the need for complex laboratory devices or cumbersome procedures. Therefore, this study includes a brief overview of hydraulic fill execution, its main uses (land reclamation, filled cells, tailing dams, etc.) and the underlying phenomena (self-weight consolidation, silo effect, liquefaction, etc.). It spans from the evolution of the traditional consolidation equations (Terzaghi, 1943) (Gibson, English & Hussey, 1967) and solving methodologies (Townsend & McVay, 1990) (Fredlund, Donaldson & Gitirana, 2009) to the contributions in terms of silo effect (Janssen, 1895) (Ravenet, 1977) and liquefaction phenomena (Casagrande, 1936) (Castro, 1969) (Been & Jefferies, 1985) (Pastor & Zienkiewicz, 1986). The novelty of the study lies in the development of a Finite Element Method (FEM) code formulated exclusively for this problem. Subsequently, a theoretical (Biot, 1941) (Zienkiewicz & Shiomi, 1984) (Segura & Carol, 2004) and numerical approach (Zienkiewicz & Taylor, 1989) (Huerta & Rodríguez, 1992) (Segura & Carol, 2008) is introduced for multidimensional consolidation problems with frictional contacts, together with the corresponding constitutive models (Pastor & Zienkiewicz, 1986) (Fu & Liu, 2011). An experimental methodology is presented for the laboratory tests and material characterization (Castro, 1969) (Bahda, 1997) (Been & Jefferies, 2006), using Hostun sand as the reference hydraulic fill. A series of singular interaction shear tests for the interface calibration is included. Finally, a specific model algorithm for the solution of the set of differential equations governing the problem is presented. The consolidation and settlement analysis involves a comprehensive simulation of the transient decantation process, the build-up of the silo effect in the cells, and certain phenomena related to self-compaction and liquefaction. To this end, a 2D axisymmetric coupled model with continuum and interface elements has been implemented, aimed at simulating the conditions and self-weight consolidation of hydraulic fills once placed into floating caisson cells or close to retaining structures. This basically concerns a loose granular soil with a negligible initial effective stress level at the onset of the process. The implementation requires a specific numerical algorithm as well as specific constitutive models for both the continuum and the interface elements. The simulation of the fill placement procedures required modifying the algorithm so that these procedures could be represented numerically; the comparison of results for the different procedures is of interest for the global analysis.
Furthermore, the continuous updating of the model provides an insightful logging of variable profiles such as density, void ratio and solid fraction, total and excess pore pressure, and stresses and strains. This leads to a better understanding of complex phenomena such as the transient gradient in lateral pressures due to the silo effect in saturated soils. Comparisons between the model and the literature are presented for self-weight consolidation (Fredlund, Donaldson & Gitirana, 2009) and for the silo effect results (Puertos del Estado, 2006; Eurocode, 2006; Japan Tech. Stands., 2009). The study closes with the design of a decantation column prototype with frictional walls as the main future line of research.
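For reference, the coupled u-p formulation mentioned in both versions of the abstract is the standard quasi-static Biot system (Biot, 1941; Zienkiewicz & Shiomi, 1984), written below only to fix notation; the thesis combines it with its own constitutive models and zero-thickness interface elements.

```latex
% Quasi-static u-p form of Biot's consolidation equations (standard notation):
% sigma' effective stress, p pore pressure, alpha Biot coefficient, n porosity,
% K_w water bulk modulus, k permeability tensor, gamma_w unit weight of water.
\nabla \cdot \left( \boldsymbol{\sigma}' - \alpha\, p\, \mathbf{I} \right) + \rho\, \mathbf{b} = \mathbf{0},
\qquad
\alpha\, \frac{\partial \varepsilon_v}{\partial t}
  + \frac{n}{K_w}\, \frac{\partial p}{\partial t}
  - \nabla \cdot \left( \frac{\mathbf{k}}{\gamma_w}\, \nabla p \right) = 0
```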
Abstract:
Ternary molybdates and tungstates ABO4 (A = Ca, Pb and B = Mo, W) are a group of materials that could be used for a variety of optoelectronic applications. We present a study of their optoelectronic properties based on first principles, using several orbital-dependent one-electron potentials applied to several orbital subspaces. The optical properties are split into chemical-species contributions in order to quantify the microscopic contributions. Furthermore, the effect of using several one-electron potentials and orbital subspaces is analyzed. From the results, the largest contribution to the optical absorption comes from the B-O transitions. The possible use of these materials as multi-gap solar cell absorbers is analyzed.
Abstract:
Ternary MCrO4 (M = Ba, Sr) semiconductors are materials with a variety of photocatalytic and optoelectronic applications. We present detailed microscopic analyses, based on first principles, of the structure, the electronic properties and the optical absorption, in which the difference between symmetrically non-equivalent atoms has been considered. The high absorption coefficients of these materials are split into chemical-species contributions in accordance with the symmetry. The high optical absorption in these materials is mainly due to the Cr–O inter-species transitions.
Abstract:
The extraordinary rise of new information technologies, the development of the Internet of Things, electronic commerce, social networks, mobile telephony, and cloud computing and storage have brought great benefits to all areas of society. Alongside them come new challenges for the protection and privacy of information and its content, such as identity theft and the loss of confidentiality and integrity of electronic documents and communications. This situation can be aggravated by the lack of a clear boundary separating the personal world from the working world with regard to access to information. In all these areas of personal and working activity, Cryptography has played a fundamental role by providing the tools needed to guarantee the confidentiality, integrity and availability of both personal data privacy and information. On the other hand, Biometrics has proposed and offered different techniques aimed at guaranteeing the authentication of individuals through the use of certain personal traits such as fingerprints, iris, hand geometry, voice, gait, etc. Each of these two sciences, Cryptography and Biometrics, provides solutions to specific fields of data protection and user authentication, which would be greatly strengthened if certain characteristics of both sciences were combined towards common objectives. It is therefore imperative to intensify research in these fields, combining the mathematical algorithms and primitives of Cryptography with Biometrics, in order to meet the growing demand for new, more technical, secure and easy-to-use solutions that simultaneously enhance data protection and user identification. In this combination, the concept of cancelable biometrics has become a cornerstone of the user authentication and identification process, since it provides revocation and cancellation properties to biometric traits. The contribution of this thesis is based on the main aspect of Biometrics, that is, the secure and efficient authentication of users through their biometric traits, using three different approaches: 1. Design of a fuzzy crypto-biometric scheme that implements the principles of cancelable biometrics to identify users while dealing with the problems arising from intra- and inter-user variability. 2. Design of a new Similarity Preserving Hash Function (SPHF). These functions are currently used in the field of digital forensics to search for similarities in the content of different but similar files, so as to determine to what extent such files could be considered equal. The function defined in this research work, besides improving the results of the main functions developed to date, seeks to extend their use to the comparison of iris templates. 3. Development of a new iris template comparison mechanism that treats such templates as signals and then compares them using the Walsh-Hadamard transform. The results obtained are excellent taking into account the security and privacy requirements mentioned above.
Each of the three designed schemes has been implemented so that experiments could be carried out and their operational efficacy tested in scenarios simulating real situations: the fuzzy crypto-biometric scheme and the SPHF have been implemented in Java, while the process based on the Walsh-Hadamard transform has been implemented in Matlab. In the experiments, a database of iris images (CASIA) was used to simulate a population of system users. In the particular case of the SPHF, additional experiments were carried out to verify its usefulness in the digital forensics field by comparing files and images with similar and different content. Accordingly, the false negative and false positive rates were computed for each of the schemes. ABSTRACT The extraordinary increase of new information technologies, the development of the Internet of Things, electronic commerce, social networks, mobile or smart telephony and cloud computing and storage, have provided great benefits in all areas of society. Besides this fact, there are new challenges for the protection and privacy of information and its content, such as the loss of confidentiality and integrity of electronic documents and communications. This is exacerbated by the lack of a clear boundary between the personal world and the business world, as their differences are becoming narrower. In both worlds, i.e. the personal and the business one, Cryptography has played a key role by providing the necessary tools to ensure the confidentiality, integrity and availability of both personal data privacy and information. On the other hand, Biometrics has offered and proposed different techniques with the aim of assuring the authentication of individuals through their biometric traits, such as fingerprints, iris, hand geometry, voice, gait, etc. Each of these sciences, Cryptography and Biometrics, provides tools for specific problems of data protection and user authentication, which would be greatly strengthened if certain characteristics of both sciences were combined in order to achieve common objectives. Therefore, it is imperative to intensify the research in this area by combining the basic mathematical algorithms and primitives of Cryptography with Biometrics to meet the growing demand for more secure and usable techniques that improve data protection and user authentication. In this combination, the use of cancelable biometrics is a cornerstone of the user authentication and identification process, since it provides revocation and cancellation properties to the biometric traits. The contributions in this thesis involve the main aspect of Biometrics, i.e. the secure and efficient authentication of users through their biometric templates, considered from three different approaches. The first one is designing a fuzzy crypto-biometric scheme using the cancelable biometric principles to take advantage of the fuzziness of the biometric templates while dealing with the intra- and inter-user variability, without compromising the biometric templates extracted from the legitimate users. The second one is designing a new Similarity Preserving Hash Function (SPHF), a kind of function currently widely used in the Digital Forensics field to find similarities among different files and calculate their similarity level.
The function designed in this research work, besides improving the results of the two main functions currently used in this field, seeks to extend its use to iris template comparison. Finally, the last approach of this thesis is the development of a new mechanism for handling iris templates, considering them as signals, and using the Walsh-Hadamard transform (complemented with three other algorithms) to compare them. The results obtained are excellent taking into account the security and privacy requirements mentioned previously. Each of the three schemes designed has been implemented to test its operational efficacy in situations that simulate real scenarios: the fuzzy crypto-biometric scheme and the SPHF have been implemented in Java, while the process based on the Walsh-Hadamard transform has been implemented in Matlab. The experiments have been performed using a database of iris templates (CASIA-IrisV2) to simulate a user population. The case of the new SPHF is special since, before being applied to the Biometrics field, it was also tested to determine its applicability in the Digital Forensics field by comparing similar and dissimilar files and images. The ratios of efficiency and effectiveness regarding user authentication, i.e. the False Non-Match and False Match Rates, have been calculated for the designed schemes with different parameters and cases to analyse their behaviour.
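A minimal sketch of the signal-style comparison of iris templates via the Walsh-Hadamard transform; the thesis complements the transform with three further algorithms that are not reproduced here, and the normalized-correlation similarity used below is an assumption, not the actual matching rule.

```python
# Illustrative iris-template comparison in the Walsh-Hadamard domain.
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform; len(x) must be a power of two."""
    a = np.asarray(x, dtype=float).copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

def iris_similarity(template_a, template_b):
    # Map the binary templates (same power-of-two length) to +/-1 signals,
    # transform them, and compute a normalized correlation of the spectra.
    sa = fwht(2.0 * np.asarray(template_a) - 1.0)
    sb = fwht(2.0 * np.asarray(template_b) - 1.0)
    return float(np.dot(sa, sb) / (np.linalg.norm(sa) * np.linalg.norm(sb) + 1e-12))
```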
Abstract:
Mexico is one of the few countries in the world that has carried out two major highway construction programmes in collaboration with the private sector. The first was carried out between 1989 and 1994, with adverse results due to the poor design of the concession scheme; the second, with better results, has been in operation since 2003 through new public-private partnership (PPP) models. The objective of this research is to study the public-private partnership models used in Mexico for the provision of road infrastructure, analysing and evaluating the distribution of risks between the public and private sectors in each model, in order to put forward a proposal for risk reallocation that reduces the overall cost and uncertainty of the projects. The first part describes the current state of knowledge on public-private partnerships for developing infrastructure projects, including the background, definition and typologies of PPP schemes, as well as international practice in programmes such as the British Private Finance Initiative (PFI) model, project results in the European Union and PPP programmes in other countries. The participation of the private sector in financing Mexico's transport infrastructure in the 1990s is also highlighted. The central chapters address the study of the PPP models that have been used in the country to build the high-capacity road network. The characteristics and results of the 1989-94 highway programme are presented, as well as the financial bailout and the restructuring measures of the concessioned projects, aspects that forced the Mexican authorities to change the regulations for approving projects according to their profitability, to amend road legislation and to design new schemes of collaboration between the government and the private sector. The new PPP models in force since 2003 are: a new concession model to develop toll highways, a service provision project model (shadow toll) to modernize existing roads, and an asset utilization model to concession operating toll highways in exchange for a payment. Case studies of these models were carried out in which measures of operational performance (traffic levels, construction costs and schedules) and financial profitability (internal rate of return and net present value) are determined. The final part identifies, analyses and evaluates the risks that affected the costs, execution time and profitability of the projects in both programmes. Among the risk factors analysed, the most important were found to be: the country's macroeconomic conditions (inflation, gross domestic product, exchange rate and interest rate), deficiencies in project planning (design, right of way, tolls, permits and traffic estimation) and public contributions in the form of construction works. Mexico is one of the few countries in the world that has developed two major programs for highway construction in collaboration with the private sector. The first one was carried out between 1989 and 1994 with adverse outcomes due to the poor design of the concession schemes; the second one, in operation since 2003, relies on new public-private partnership (PPP) models.
The objective of this research is to study the public-private partnership models used in Mexico for road infrastructure provision, analysing and evaluating the distribution of risks between the public and the private sector in each model in order to draw up a proposal for risk allocation that reduces the total cost and uncertainty of the projects. The first part describes the current state of knowledge on public-private partnerships for developing infrastructure projects, including the history, definition and types of PPP models, as well as international practice in programs such as the British Private Finance Initiative (PFI) model, project results in the European Union and PPP programs in other countries. Private sector participation in the financing of Mexico's transport infrastructure in the 1990s is also highlighted. The next chapters present the study of the public-private partnership models that have been used in the country in the construction of the high-capacity road network. The characteristics and outcomes of the 1989-94 highway program are presented, as well as the financial bailout and restructuring measures of the concession projects, aspects that forced the Mexican authorities to change project approval regulations according to profitability, amend road legislation and design new schemes of cooperation between the Government and the private sector. The new PPP models since 2003 are: a concession model to develop toll highways, a private service contracts model (shadow toll) to modernize existing roads, and a highway assets model for the concession of toll roads in operation in exchange for a payment. These models were analyzed using case studies in which measures of operational performance (levels of traffic, costs and construction schedules) and financial profitability (internal rate of return and net present value) are determined. In the last part, the analysis and assessment of the risks that affect costs, execution time and profitability of the projects are carried out for both programs. Among the risk factors analyzed, the following were found to be the most important: country macroeconomic conditions (inflation, gross domestic product, exchange rate and interest rate), deficiencies in project planning (design, right of way, tolls, permits and traffic estimation) and public contributions in the form of construction works.
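For reference, the two profitability measures used in the case studies, net present value and internal rate of return, can be computed as in the sketch below; the cash-flow figures are made up for illustration only and are not taken from the Mexican programmes discussed above.

```python
# Illustrative NPV and IRR computation for a conventional cash flow
# (one initial outlay followed by positive net revenues).
def npv(rate, cash_flows):
    """Net present value of cash flows indexed by year (year 0 first)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=1.0, tol=1e-8):
    """Bisection on the discount rate at which NPV crosses zero.
    Assumes a single sign change in the cash flows, so the root is unique."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

flows = [-100.0] + [12.0] * 20   # hypothetical investment, then 20 years of net toll revenue
print(npv(0.08, flows), irr(flows))
```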