949 results for Freezing of gait
Abstract:
Vast portions of Arctic and sub-Arctic Siberia, Alaska and the Yukon Territory are covered by ice-rich silty to sandy deposits that contain large ice wedges, resulting from syngenetic sedimentation and freezing. Accompanied by wedge-ice growth in polygonal landscapes, the sedimentation process was driven by cold continental climatic and environmental conditions in unglaciated regions during the late Pleistocene, producing the unique Yedoma deposits, up to >50 meters thick. Because organic material was rapidly incorporated into syngenetic permafrost during its formation, Yedoma deposits contain well-preserved organic matter. Ice-rich deposits like Yedoma are especially prone to degradation triggered by climate change or human activity. When Yedoma deposits degrade, large amounts of sequestered organic carbon and other nutrients are released and re-enter active biogeochemical cycling. This could be of global significance for future climate warming, as increased permafrost thaw is likely to produce a positive feedback through enhanced greenhouse gas fluxes. A detailed assessment of current Yedoma coverage and volume is therefore important for estimating its potential response to future climate change. We synthesized a map of Yedoma coverage and thickness estimates, providing critical data for further research. In particular, this preliminary Yedoma map is a substantial step toward understanding the spatial heterogeneity of Yedoma deposits and their regional coverage. Further applications include reconstructing paleo-environmental dynamics and past ecosystems such as the mammoth-steppe-tundra, and mapping ground-ice distribution, including future thermokarst vulnerability.
Moreover, the map will substantially improve the data basis needed to refine the present-day Yedoma permafrost organic carbon inventory, which is estimated at between 83±12 (Strauss et al., 2013, doi:10.1002/2013GL058088) and 129±30 (Walter Anthony et al., 2014, doi:10.1038/nature13560) gigatonnes (Gt) of organic carbon in perennially-frozen archives. Hence, here we synthesize data on the circum-Arctic and sub-Arctic distribution and thickness of Yedoma to compile a preliminary circum-polar Yedoma map. For this map, we used (1) maps of previous Yedoma coverage estimates, (2) the digitized areas from Grosse et al. (2013), and (3) areas of potential Yedoma distribution extracted from additional surface geological and Quaternary geological maps (1.: 1:500,000: Q-51-V,G; P-51-A,B; P-52-A,B; Q-52-V,G; P-52-V,G; Q-51-A,B; R-51-V,G; R-52-V,G; R-52-A,B; 2.: 1:1,000,000: P-50-51; P-52-53; P-58-59; Q-42-43; Q-44-45; Q-50-51; Q-52-53; Q-54-55; Q-56-57; Q-58-59; Q-60-1; R-(40)-42; R-43-(45); R-(45)-47; R-48-(50); R-51; R-53-(55); R-(55)-57; R-58-(60); S-44-46; S-47-49; S-50-52; S-53-55; 3.: 1:2,500,000: Quaternary map of the territory of the Russian Federation; 4.: Alaska Permafrost Map). Digitization was done using GIS techniques (ArcGIS) and vectorization of raster images (Adobe Photoshop and Illustrator). Data on Yedoma thickness were obtained from boreholes and exposures reported in the scientific literature. The map and database are still preliminary and will undergo technical and scientific vetting and review. In its current form, the database includes a range of attributes for Yedoma area polygons based on lithological and stratigraphical information from the original source maps, as well as a confidence level for our classification of an area as Yedoma (three stages: confirmed, likely, or uncertain). The current version includes more than 365 boreholes and exposures and more than 2,000 digitized Yedoma areas.
We expect the database to continue to grow. At this preliminary stage, we estimate the Northern Hemisphere Yedoma deposit area to cover approximately 625,000 km². We estimate that 53% of the total Yedoma area today is located in the tundra zone and 47% in the taiga zone. From west to east, 29% of the Yedoma area is found in North America and 71% in North Asia; the latter comprises 9% in West Siberia, 11% in Central Siberia, 44% in East Siberia and 7% in Far East Russia. Adding the recent maximum Yedoma region (including all Yedoma uplands, thermokarst lakes and basins, and river valleys) of 1.4 million km² (Strauss et al., 2013, doi:10.1002/2013GL058088) and postulating that Yedoma occupied up to 80% of the adjacent formerly exposed and now flooded Beringia shelves (1.9 million km², down to 125 m below modern sea level, between 105°E-128°W and >68°N), we assume that the Last Glacial Maximum Yedoma region likely covered more than 3 million km² of Beringia. Acknowledgements: This project is part of the Action Group "The Yedoma Region: A Synthesis of Circum-Arctic Distribution and Thickness" (funded by the International Permafrost Association (IPA) to J. Strauss) and is embedded in the Permafrost Carbon Network (working group Yedoma Carbon Stocks). We acknowledge the support of the European Research Council (Starting Grant #338335), the German Federal Ministry of Education and Research (Grant 01DM12011 and "CarboPerm" (03G0836A)), the Initiative and Networking Fund of the Helmholtz Association (#ERC-0013) and the German Federal Environment Agency (UBA, project UFOPLAN FKZ 3712 41 106).
Abstract:
Introduction: The chemical composition of water determines its physical properties and the character of the processes occurring in it: freezing temperature, volume of evaporation, density, color, transparency, filtration capacity, etc. The presence of dissolved chemical elements confers on waters special physical properties that exert a significant influence on their circulation, creates the conditions necessary for the development and habitation of flora and fauna, and imparts to ocean waters chemical features that radically distinguish them from land waters (Alekin & Liakhin, 1984). Hydrochemical information helps to determine elements of water circulation and convection depth, makes it easier to distinguish water masses, and gives additional insight into the climatic variability of ocean conditions. It is also a necessary part of biological research. Water chemical composition can be the governing characteristic determining the possibility and limits of use of marine objects, both stationary and moving in sea water. The subject of hydrochemistry is the dynamics of chemical composition, i.e., the processes of its formation and the hydrochemical conditions of water bodies (Alekin & Liakhin, 1984). The hydrochemical processes of the Arctic Ocean are the least known, with some information scattered across isolated publications. A generalizing study of hydrochemical conditions in the Arctic Ocean, based on expeditions conducted between 1948 and 1975, was carried out by Rusanov et al. (1979). The "Atlas of the World Ocean: the Arctic Ocean" contains a special section, "Hydrochemistry" (Gorshkov, 1980). That section gives typical vertical profiles, transects and maps for depths of 0, 100, 300, 500, 1000, 2000 and 3000 m for the following parameters: dissolved oxygen, phosphate, silicate, pH and the alkaline-chlorine coefficient. The maps were constructed using data from expeditions conducted between 1948 and 1975.
The illustrations reflect the main features of the distribution of the hydrochemical elements over a multi-year period and represent a static image of hydrochemical conditions. Distribution of the hydrochemical elements at the ocean surface is given for two seasons, winter and summer; mean annual fields are given for the other depths. The aim of the present Atlas is to describe the hydrochemical conditions of the Arctic Ocean on the basis of a larger body of hydrochemical information, covering the years 1948-2000, using up-to-date methods of analysis and electronic forms of presenting hydrochemical information. The most widespread characteristics determined in water samples were used as hydrochemical indices: dissolved oxygen, phosphate, silicate, pH, total alkalinity, nitrite and nitrate. An important characteristic of the water salt composition, salinity, has been treated in the Oceanographic Atlas of the Arctic Ocean (1997, 1998). The presentation of hydrochemical characteristics in this Hydrochemical Atlas is broader than in the former Atlas (Gorshkov, 1980). Maps of the climatic distribution of the hydrochemical elements were constructed for all standard depths, and the seasonal variability of the hydrochemical parameters is given not only for the surface but also for the underlying standard depths down to and including 400 m. Statistical characteristics of the hydrochemical elements are given for the first time. Detailed accuracy estimates of the initial data and of the map construction are also given in the Atlas. Calculated root-mean-square deviations and maximum and minimum values of the parameters demonstrate the limits of their variability over the analyzed period of observations. Thus the Atlas summarizes not only chemical statics but also demonstrates some elements of chemical dynamics.
Digital arrays of the hydrochemical elements computed at the nodes of a regular grid are a new form of presentation in the Atlas. The same grid and the same boxes were used as in the creation of the US-Russian climatic Oceanographic Atlas, which allows the hydrochemical and oceanographic information of the two Atlases to be combined. The first block of digital arrays contains climatic characteristics calculated from direct observational data. These were not calculated for regions without observations, so the arrays have gaps there. The other block of climatic information in gridded form was obtained by objective analysis of the observational data. The objective-analysis procedure allowed us to obtain climatic estimates of the hydrochemical characteristics for the whole Arctic Ocean, including regions not covered by observations. The objective-analysis data can be widely used, in particular in hydrobiological investigations and in modeling the hydrochemical conditions of the Arctic Ocean. The array of initial measurements forms a separate block. It includes all available hydrochemical observations in the form in which they were presented in the different sources. While keeping in mind that this array contains some corrupted information, the authors of the Atlas considered it necessary to store the information in its primary form: methods of data quality control can be developed in the future as hydrochemical information accumulates, and the treatment of the data rejected under the procedure adopted in the Atlas may change.
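The first block of gridded climatology described above, box means computed from direct observations with gaps where no data fall, can be sketched as follows. This is a minimal illustration; the grid edges, data and variable are placeholders, not the Atlas's actual grid or procedure.

```python
import numpy as np

def grid_mean(lat, lon, values, lat_edges, lon_edges):
    """Mean of the observations falling in each grid box; NaN where none fall."""
    nlat, nlon = len(lat_edges) - 1, len(lon_edges) - 1
    total = np.zeros((nlat, nlon))
    count = np.zeros((nlat, nlon))
    for la, lo, v in zip(lat, lon, values):
        i = np.searchsorted(lat_edges, la, side="right") - 1  # latitude box index
        j = np.searchsorted(lon_edges, lo, side="right") - 1  # longitude box index
        if 0 <= i < nlat and 0 <= j < nlon:
            total[i, j] += v
            count[i, j] += 1
    # Boxes with no observations keep a gap (NaN), as in the first data block.
    return np.where(count > 0, total / np.maximum(count, 1), np.nan)
```

Filling the remaining NaN boxes would be the role of the objective analysis described for the second block.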
The Hydrochemical Atlas of the Arctic Ocean is the first specialized electronic generalization of hydrochemical observations in the Arctic Ocean and completes a program of joint efforts by Russian and US specialists to prepare a series of atlases for the Arctic. The published Oceanographic Atlas (1997, 1998), the Atlas of Arctic Meteorology and Climate (2000), the Ice Atlas of the Arctic Ocean prepared for publication, and the Hydrochemical Atlas of the Arctic Ocean represent a unified series of fundamental generalizations of empirical knowledge of the nature of the Arctic Ocean at the climatic level. The Hydrochemical Atlas of the Arctic Ocean resulted from the joint efforts of the SRC of the RF AARI and IARC. Dr. Ye. Nikiforov was scientific supervisor of the Atlas; Dr. R. Colony was manager on behalf of the USA and Dr. L. Timokhov on behalf of Russia.
Abstract:
In the Arctic, currently rising air temperatures result in more frequent calving of icebergs from tidewater glaciers. Arctic macrozoobenthic soft-sediment communities are considerably disturbed by direct hits and sediment reallocation caused by iceberg scouring. To describe the primary succession of macrozoobenthic communities following such events, scientific divers installed 28 terracotta containers in the soft sediment off Brandal (Kongsfjorden, Svalbard, Norway) at 20 m water depth in 2002. The containers were filled with a bentonite-sand mixture resembling the natural sediment. Samples were taken annually between 2003 and 2007. A shift from pioneering species (e.g. Cumacea: Lamprops fuscatus) towards more specialized taxa, and from surface detritivores towards subsurface detritivores, was observed; this is typical of an ecological succession following the facilitation and inhibition succession models. Similarity between experimental and non-manipulated communities from 2003 was significantly highest after three years of succession. In the following years similarity decreased, probably due to elevated temperatures, which prevented the fjord system from freezing. Some organisms numerically important in the non-manipulated community (e.g., the polychaete Dipolydora quadrilobata) did not colonize the substrate during the experiment, suggesting that the community had not fully matured within the first three years; later settlement was probably impeded by the consequences of warming. This demonstrates the long-lasting effects of severe disturbance on Arctic macrozoobenthic communities. Furthermore, environmental changes such as rising temperatures, coupled with enhanced food availability due to an increasing number of ice-free days per year, may have a stronger effect on succession than exposure time.
Abstract:
Experiments were performed to investigate the cyclic freeze-thaw deterioration of concrete using traditional and non-traditional techniques. Two concrete mixes with different pore structures were tested in order to compare the behavior of a freeze-thaw-resistant concrete with that of one that is not. One concrete was air-entrained, with a high cement content and a low w/c ratio; the other had a lower cement content and a higher w/c ratio, without an air-entraining agent. Concrete specimens were studied under cyclic freeze-thaw conditions according to the UNE-CEN/TS 12390-9 test, using a 3% NaCl solution as the freezing medium (CDF test: Capillary Suction, De-icing agent and Freeze-thaw test). Temperature and relative humidity were measured inside the specimens during the cycles, using embedded sensors placed at different heights from the surface in contact with the de-icing solution. Strain gauges were used to measure strain variations at the surface of the specimens, and ultrasonic pulse velocity through the specimens was measured before, during, and after the freeze-thaw cycles. In the CDF test, failure of the concrete without air-entraining agent was observed before 28 freeze-thaw cycles; by contrast, the scaling of the air-entrained concrete was only 0.10 kg/m² after 28 cycles, versus 3.23 kg/m² for the deteriorated concrete. The strain measurements showed similar behavior: the residual strain after 28 cycles was 1150 μm/m in the deteriorated concrete versus 65 μm/m in the air-entrained concrete. By monitoring the changes in ultrasonic pulse velocity during the freeze-thaw cycles, the deterioration of the tested specimens was assessed.
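Ultrasonic pulse velocity readings are commonly converted into a damage indicator through the relative dynamic modulus of elasticity, which in one standard estimate scales with the square of the velocity ratio. A minimal sketch follows; this formula is the common literature estimate, not necessarily the exact metric used in this study:

```python
def relative_dynamic_modulus(v0, vn):
    """Relative dynamic modulus (%) estimated from ultrasonic pulse velocity.

    v0: pulse velocity (e.g. m/s) before freeze-thaw cycling.
    vn: pulse velocity after n cycles.
    Uses the common estimate RDM = (vn / v0)**2 * 100.
    """
    return (vn / v0) ** 2 * 100.0
```

A sound specimen stays near 100%, while internal cracking lowers the velocity and hence the RDM.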
Abstract:
By analysing the dynamic principles of the human gait, an economic gait-control analysis is performed, and passive elements are included to increase the energy efficiency in the motion control of active orthoses. Traditional orthoses use position patterns from the clinical gait analyses (CGAs) of healthy people, which are then de-normalized and adjusted to each user. These orthoses enforce a very rigid gait, and their energy cost is very high, reducing the autonomy of the user. First, to take advantage of the inherent dynamics of the legs, a state-machine pattern with different gains in each state is applied to reduce actuator energy consumption. Next, different passive elements, such as springs and brakes in the joints, are analysed to further reduce energy consumption. After an off-line parameter optimization and a heuristic improvement with genetic algorithms, a reduction in energy consumption of 16.8% is obtained by applying the state-machine control pattern, and a reduction of 18.9% is obtained by using passive elements. Finally, by combining both strategies, a more natural gait is obtained, and energy consumption is reduced by 24.6% compared with a pure CGA pattern.
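The state-machine idea, different controller gains in each phase of the gait cycle, can be sketched for a single joint as below. State names, gain values, and the transition rule are purely illustrative assumptions, not the paper's tuned parameters or optimization result:

```python
# Hypothetical per-state PD controller for one joint of an active orthosis.
GAINS = {                    # (Kp, Kd) per gait-cycle state
    "stance": (80.0, 4.0),   # stiff tracking while the leg bears weight
    "swing":  (15.0, 1.0),   # low gains let the leg swing with its own dynamics
}

def torque(state, theta_ref, theta, dtheta_ref, dtheta):
    """PD torque command using the gains of the current gait state."""
    kp, kd = GAINS[state]
    return kp * (theta_ref - theta) + kd * (dtheta_ref - dtheta)

def next_state(state, heel_contact):
    """Event-driven transition: heel strike starts stance, toe-off starts swing."""
    return "stance" if heel_contact else "swing"
```

Lowering the swing-phase gains is what lets the actuators do less work than rigid tracking of a full CGA trajectory; springs and brakes would then absorb part of the remaining effort.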
Abstract:
Developing products with high nutritional value and good storage stability during freezing is a challenge. Inulin (I) and extra virgin olive oil (EVOO) have interesting functional properties. The effect of adding I and EVOO blends at different I:EVOO ratios (0:0, 0:60, 15:45, 30:30, 45:15, 60:0, 30:45 and 45:30) on the rheological, physical, sensory and structural properties of fresh and frozen/thawed mashed potatoes, formulated with and without added cryoprotectants, was analysed and compared. Addition of I and EVOO (either alone or blended) reduced apparent viscosity and pseudoplasticity, producing softer systems and indicating that both ingredients behave as soft fillers. Samples with I added at the higher concentrations (≥45 g kg⁻¹) showed a lower flow index and consistency, which is related to the formation of smaller I particles; microphotographs indicated that the gelling properties of I depended mostly on processing. Frozen/thawed samples were judged more acceptable and creamier than their fresh counterparts.
Abstract:
This thesis studies the monitoring and management of Quality of Experience (QoE) in video delivery services over IP. It addresses the problem of how to prevent, detect, measure, and react to QoE degradations from the perspective of a service provider: the solution must scale to a large IP network delivering individual streams to thousands of simultaneous users. The proposed monitoring solution is called QuEM (Qualitative Experience Monitoring). It is based on detecting degradations in the network quality of service (packet losses, abrupt bandwidth drops...) and inferring from each one a qualitative description of its effect on the perceived Quality of Experience (audio mutes, video artifacts...). This analysis relies on the transport and network-abstraction-layer information of the coded streams, and characterizes the most relevant defects observed in this kind of service: freezes, macroblocking, audio mutes, video quality drops, delays, and service outages. The results have been validated through subjective quality tests. The methodology used in those tests was in turn developed to mimic as closely as possible the viewing conditions of a user of such services: the defects under evaluation are introduced at random points within a continuous video sequence. Several applications based on the monitoring solution have also been proposed: an unequal error protection system that gives more protection to the parts of the video most sensitive to losses, a solution to minimize the impact of interrupted segment downloads in HTTP Adaptive Streaming, and a selective encryption system that ciphers only the most sensitive parts of the video.
A fast channel change solution is also presented, as well as an analysis of the applicability of the above results to a 3D video scenario. ABSTRACT This thesis proposes a comprehensive approach to the monitoring and management of Quality of Experience (QoE) in multimedia delivery services over IP. It addresses the problem of preventing, detecting, measuring, and reacting to QoE degradations under the constraints of a service provider: the solution must scale to a wide IP network delivering individual media streams to thousands of users. The solution proposed for the monitoring is called QuEM (Qualitative Experience Monitoring). It is based on the detection of degradations in the network Quality of Service (packet losses, bandwidth drops...) and the mapping of each degradation event to a qualitative description of its effect on the perceived Quality of Experience (audio mutes, video artifacts...). This mapping is based on the analysis of the transport and Network Abstraction Layer information of the coded stream, and allows a good characterization of the most relevant defects that occur in this kind of service: screen freezing, macroblocking, audio mutes, video quality drops, delay issues, and service outages. The results have been validated by subjective quality assessment tests. The methodology used for those tests was also designed to mimic as much as possible the conditions of a real user of those services: the impairments to evaluate are introduced randomly in the middle of a continuous video stream.
Based on the monitoring solution, several applications have been proposed as well: an unequal error protection system that provides higher protection to the parts of the stream that are more critical for the QoE, a solution that applies the same principles to minimize the impact of incomplete segment downloads in HTTP Adaptive Streaming, and a selective scrambling algorithm that ciphers only the most sensitive parts of the media stream. A fast channel change application is also presented, as well as a discussion of how to apply the previous results and concepts to a 3D video scenario.
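The QuEM mapping from network-level degradation events to qualitative defect labels can be sketched as a simple classifier. The event fields, labels, and rules below are illustrative assumptions for exposition, not the thesis's actual decision logic:

```python
# Hypothetical sketch: map a network-level degradation event, plus minimal
# stream context, to a qualitative QoE defect label.
def classify_event(event):
    kind = event["kind"]
    if kind == "packet_loss":
        # Which defect appears depends on what the lost packets carried;
        # here we branch on the (assumed) frame type that was hit.
        return "video_freeze" if event.get("frame") == "I" else "macroblocking"
    if kind == "audio_loss":
        return "audio_mute"
    if kind == "bandwidth_drop":
        # An adaptive stream downshifts quality; a fixed-rate one may stall.
        return "quality_drop" if event.get("adaptive") else "service_outage"
    return "unknown"
```

The point of such a qualitative mapping is that it needs only transport-level inspection, which is what makes it scalable to thousands of concurrent streams.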
Abstract:
The use of biofuels in the aviation sector has economic and environmental benefits. Among the options for the production of renewable jet fuels, hydroprocessed esters and fatty acids (HEFA) have received predominant attention in comparison with fatty acid methyl esters (FAME), which are not approved as additives for jet fuels. However, the presence of oxygen in methyl esters tends to reduce soot emissions and therefore particulate matter emissions. This sooting tendency is quantified in this work with an oxygen-extended sooting index based on smoke point measurements. Results show a considerable reduction in sooting tendency for all biokerosenes (produced by transesterification and, in some cases, distillation) with respect to fossil kerosenes. Among the tested biokerosenes, the one made from palm kernel oil was the most effective, and non-distilled methyl esters (from camelina and linseed oils) were less effective than distilled biokerosenes at reducing the sooting tendency. These results may constitute an additional argument for the use of FAMEs as blend components of jet fuels. Other arguments were pointed out in previous publications, but some controversy has arisen over the use of these components. Some of the criticism was based on the fact that the methods used in our previous work are not among those approved for jet fuels in the standards, concluding that the use of FAME in any amount is therefore inappropriate. However, some of the standard methods have not been updated to consider oxygenated components (such as the method for obtaining the lower heating value), and others are not precise enough (such as the methods for measuring the freezing point), whereas some alternative methods may provide better reproducibility for oxygenated fuels.
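For reference, one published form of the oxygen-extended sooting index replaces the molecular weight used in the classical threshold sooting index with the fuel's stoichiometric oxygen demand, so that oxygenated fuels are compared fairly. A hedged sketch follows; the calibration constants a and b are placeholders, and the exact formulation used in this work may differ:

```python
def oesi(n_c, n_h, n_o, smoke_point_mm, a=1.0, b=0.0):
    """Oxygen-extended sooting index for a fuel CnHmOp (assumed form).

    Uses the stoichiometric oxygen demand (n + m/4 - p/2) divided by the
    measured smoke point; a and b are apparatus-specific calibration
    constants (placeholders here). Lower values mean lower sooting tendency.
    """
    return a * (n_c + n_h / 4.0 - n_o / 2.0) / smoke_point_mm + b
```

On this scale, an oxygenated ester with a high smoke point (e.g. a palm-kernel biokerosene) scores well below a fossil kerosene of similar carbon number.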
Abstract:
Due to the intensive use of mobile phones for different purposes, these devices usually contain confidential information that must not be accessed by anyone other than the owner of the device. Furthermore, new-generation phones commonly incorporate an accelerometer that can capture the acceleration signals produced by the owner's gait. Gait identification based on acceleration signals is now being considered as a new biometric technique that allows the device to be blocked when another person is carrying it. Although distance-based approaches such as Euclidean distance or dynamic time warping have been applied to this identification problem, they show difficulties when dealing with gaits at different speeds. For this reason, in this paper, a method to extract an average template from instances of the gait at different velocities is presented. This method has been tested on the gait signals of 34 subjects walking at different speeds (slow, normal and fast) and has been shown to improve the performance of Euclidean distance and classical dynamic time warping.
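The baseline the paper improves on can be made concrete. Below is a minimal sketch of classical dynamic time warping between two 1-D acceleration traces; the paper's averaged-template construction is not reproduced here, and the sample signals in the usage note are illustrative:

```python
import numpy as np

def dtw_distance(a, b):
    """Classical dynamic time warping distance between two 1-D signals."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])  # local mismatch
            # Extend the cheapest of the three admissible warping steps.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Because DTW stretches the time axis, a template recorded at normal speed can still match a slow-walk probe (e.g. `dtw_distance([1, 2, 3], [1, 2, 2, 3])` is 0), which is exactly the speed-variability problem the averaged template targets more robustly.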
Abstract:
The extraordinary rise of new information technologies, the development of the Internet of Things, electronic commerce, social networks, mobile telephony, and cloud computing and storage have provided great benefits in all areas of society. Alongside these come new challenges for the protection and privacy of information and its content, such as identity theft and the loss of confidentiality and integrity of documents and electronic communications. This can be aggravated by the lack of a clear boundary separating the personal world from the professional world in terms of access to information. In all these areas of personal and professional activity, Cryptography has played a fundamental role, providing the tools needed to guarantee the confidentiality, integrity and availability both of personal data and of information in general. Biometrics, in turn, has proposed and offered different techniques to guarantee the authentication of individuals through the use of personal characteristics such as fingerprints, the iris, hand geometry, voice, gait, etc. Each of these two sciences, Cryptography and Biometrics, provides solutions to specific problems of data protection and user authentication, which would be greatly strengthened if certain characteristics of both were combined toward common objectives. It is therefore imperative to intensify research in these areas, combining the algorithms and mathematical primitives of Cryptography with Biometrics, to meet the growing demand for new solutions that are more technical, secure and easy to use, and that simultaneously strengthen data protection and user identification.
In this combination, the concept of cancelable biometrics has been a cornerstone of the user authentication and identification process, providing revocation and cancelation properties to biometric traits. The contribution of this thesis addresses the main concern of Biometrics, namely the secure and efficient authentication of users through their biometric traits, using three different approaches: 1. The design of a fuzzy crypto-biometric scheme that implements the principles of cancelable biometrics to identify users while dealing with the problems arising from intra- and inter-user variability. 2. The design of a new similarity-preserving hash function (SPHF). Such functions are currently used in digital forensics to search for similarities in the content of distinct but similar files, in order to determine to what extent the files can be considered equal. The function defined in this research, besides improving on the results of the main functions developed to date, aims to extend their use to the comparison of iris templates. 3. The development of a new mechanism for comparing iris templates that treats the templates as signals and compares them using the Walsh-Hadamard transform. The results obtained are excellent in view of the security and privacy requirements mentioned above. Each of the three schemes has been implemented in order to run experiments and test its operational effectiveness in scenarios simulating real situations: the fuzzy crypto-biometric scheme and the SPHF were implemented in Java, and the Walsh-Hadamard-based process in Matlab.
The experiments used a database of iris images (CASIA) to simulate a population of system users. In the particular case of the SPHF, experiments were also carried out to verify its usefulness in digital forensics, comparing files and images with similar and different content. For each of the schemes, the false negative and false positive rates were computed. ABSTRACT The extraordinary increase of new information technologies, the development of the Internet of Things, electronic commerce, social networks, mobile or smart telephony, and cloud computing and storage have provided great benefits in all areas of society. Alongside this, there are new challenges for the protection and privacy of information and its content, such as the loss of confidentiality and integrity of electronic documents and communications. This is exacerbated by the lack of a clear boundary between the personal world and the business world, as their differences become ever narrower. In both worlds, i.e. the personal and the business one, Cryptography has played a key role by providing the tools needed to ensure the confidentiality, integrity and availability of both personal data and information. On the other hand, Biometrics has offered and proposed different techniques aimed at assuring the authentication of individuals through their biometric traits, such as fingerprints, iris, hand geometry, voice, gait, etc. Each of these sciences, Cryptography and Biometrics, provides tools for specific problems of data protection and user authentication, which would be greatly strengthened if certain characteristics of both sciences were combined toward common objectives.
Therefore, it is imperative to intensify research in this area by combining the basic mathematical algorithms and primitives of Cryptography with Biometrics to meet the growing demand for more secure and usable techniques that improve data protection and user authentication. In this combination, the use of cancelable biometrics is a cornerstone of the user authentication and identification process, since it provides revocation and cancelation properties to the biometric traits. The contributions of this thesis concern the main aspect of Biometrics, i.e. the secure and efficient authentication of users through their biometric templates, considered from three different approaches. The first is the design of a fuzzy crypto-biometric scheme that uses the cancelable-biometrics principles to take advantage of the fuzziness of the biometric templates while dealing with the intra- and inter-user variability, without compromising the templates extracted from legitimate users. The second is the design of a new Similarity Preserving Hash Function (SPHF); such functions are currently widely used in the digital forensics field to find similarities among different files and quantify their similarity level. The function designed in this research work, besides improving on the results of the two main functions currently in use in that field, aims to extend their use to iris template comparison. Finally, the last approach of this thesis is the development of a new mechanism for handling iris templates that treats them as signals and compares them using the Walsh-Hadamard transform (complemented with three other algorithms). The results obtained are excellent, taking into account the security and privacy requirements mentioned previously.
Each of the three schemes has been implemented to test its operational efficacy in situations that simulate real scenarios: the fuzzy crypto-biometric scheme and the SPHF were implemented in Java, while the process based on the Walsh-Hadamard transform was implemented in Matlab. The experiments were performed using a database of iris templates (CASIA-IrisV2) to simulate a user population. The new SPHF is a special case: before being applied to the Biometrics field, it was also tested in the Digital Forensics field by comparing similar and dissimilar files and images. The efficiency and effectiveness ratios for user authentication, i.e. the False Non-Match Rate and the False Match Rate, were calculated for the designed schemes with different parameters and cases to analyse their behaviour.
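As a rough illustration of transform-domain template comparison and of how the two error rates are tallied, the sketch below pairs a textbook fast Walsh-Hadamard transform with a toy distance measure and an FMR/FNMR count. It is a simplified stand-in under assumed details (±1 template encoding, the `spectral_distance` measure, power-of-two template length), not the thesis's actual Matlab pipeline:

```python
import numpy as np

def fwht(x: np.ndarray) -> np.ndarray:
    """Fast Walsh-Hadamard transform; len(x) must be a power of two."""
    x = x.astype(float).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return x

def spectral_distance(t1: np.ndarray, t2: np.ndarray) -> float:
    """Euclidean distance between the WHT spectra, normalised by length."""
    return float(np.linalg.norm(fwht(t1) - fwht(t2))) / len(t1)

def fmr_fnmr(genuine_scores, impostor_scores, thr):
    """FNMR = fraction of genuine comparisons rejected (distance > thr);
    FMR = fraction of impostor comparisons accepted (distance <= thr)."""
    fnmr = float(np.mean(np.asarray(genuine_scores) > thr))
    fmr = float(np.mean(np.asarray(impostor_scores) <= thr))
    return fmr, fnmr

rng = np.random.default_rng(0)
base = rng.choice([-1.0, 1.0], size=256)    # enrolled template
noisy = base.copy()
noisy[:10] *= -1                            # same eye, 10 bits flipped by noise
other = rng.choice([-1.0, 1.0], size=256)   # template from a different eye
```

A genuine comparison (`base` vs `noisy`) scores a smaller distance than an impostor one (`base` vs `other`), and sweeping `thr` over many such scores trades FMR against FNMR, which is what the parameter sweeps in the experiments measure.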
Resumo:
Members of the Eph family of tyrosine kinase receptors have been implicated in the regulation of developmental processes and, in particular, axon guidance in the developing nervous system. The function of the EphA4 (Sek1) receptor was explored through creation of a null mutant mouse. Mice with a null mutation in the EphA4 gene are viable and fertile but have a gross motor dysfunction, which is evidenced by a loss of coordination of limb movement and a resultant hopping, kangaroo-like gait. Consistent with the observed phenotype, anatomical studies and anterograde tracing experiments reveal major disruptions of the corticospinal tract within the medulla and spinal cord in the null mutant animals. These results demonstrate a critical role for EphA4 in establishing the corticospinal projection.
Resumo:
We have used Mössbauer and electron paramagnetic resonance (EPR) spectroscopy to study a heme-N-alkylated derivative of chloroperoxidase (CPO) prepared by mechanism-based inactivation with allylbenzene and hydrogen peroxide. The freshly prepared inactivated enzyme (“green CPO”) displayed a nearly pure low-spin ferric EPR signal with g = 1.94, 2.15, 2.31. The Mössbauer spectrum of the same species recorded at 4.2 K showed magnetic hyperfine splittings, which could be simulated in terms of a spin Hamiltonian with a complete set of hyperfine parameters in the slow spin fluctuation limit. The EPR spectrum of green CPO was simulated using a three-term crystal field model including g-strain. The best-fit parameters implied a very strong octahedral field in which the three ²T₂ levels of the (3d)⁵ configuration in green CPO were lowest in energy, followed by a quartet. In native CPO, the ⁶A₁ states follow the ²T₂ ground state doublet. The alkene-mediated inactivation of CPO is spontaneously reversible. Warming of a sample of green CPO to 22°C for increasing times before freezing revealed slow conversion of the novel EPR species to two further spin S = ½ ferric species. One of these species displayed g = 1.82, 2.25, 2.60, indistinguishable from native CPO. By subtracting spectral components due to native and green CPO, a third species with g = 1.86, 2.24, 2.50 could be generated. The EPR spectrum of this “quasi-native CPO,” which appears at intermediate times during the reactivation, was simulated using best-fit parameters similar to those used for native CPO.
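For reference, magnetic hyperfine simulations of this kind are conventionally carried out with an electronic–nuclear spin Hamiltonian of the standard generic form (the specific fitted parameter values of the study are not reproduced here):

```latex
\mathcal{H} \;=\; \beta_e\,\mathbf{B}\cdot\mathbf{g}\cdot\mathbf{S}
\;+\; \mathbf{S}\cdot\mathbf{A}\cdot\mathbf{I}
\;-\; g_n\beta_n\,\mathbf{B}\cdot\mathbf{I}
\;+\; \frac{eQV_{zz}}{12}\Bigl[\,3I_z^{2} - I(I+1) + \eta\,\bigl(I_x^{2}-I_y^{2}\bigr)\Bigr]
```

with S = ½ for the low-spin ferric ion. The terms are, in order, the electronic Zeeman interaction, the ⁵⁷Fe magnetic hyperfine coupling **A**, the nuclear Zeeman interaction, and the quadrupole interaction of the I = 3/2 excited nuclear state; in the slow spin fluctuation limit the electronic spin is treated as static on the Mössbauer timescale.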
Resumo:
Evidence that lesions of the basolateral amygdala complex (BLC) impair memory for fear conditioning in rats, measured by lack of “freezing” behavior in the presence of cues previously paired with footshocks, has suggested that the BLC may be a critical locus for the memory of fear conditioning. However, evidence that BLC lesions may impair unlearned as well as conditioned freezing makes it difficult to interpret the findings of studies assessing conditioned fear with freezing. The present study investigated whether such lesions prevent the expression of several measures of memory for contextual fear conditioning in addition to freezing. On day 1, rats with sham lesions or BLC lesions explored a Y maze. The BLC-lesioned rats (BLC rats) displayed a greater exploratory activity. On day 2, each of the rats was placed in the “shock” arm of the maze, and all of the sham and half of the BLC rats received footshocks. A 24-hr retention test assessed the freezing, time spent per arm, entries per arm, and initial entry into the shock arm. As previously reported, shocked BLC rats displayed little freezing. However, the other measures indicated that the shocked BLC rats remembered the fear conditioning. They entered less readily and less often and spent less time in the shock arm than did the control nonshocked BLC rats. Compared with the sham rats, the shocked BLC rats entered more quickly and more often and spent more time in the shock arm. These findings indicate that an intact BLC is not essential for the formation and expression of long-term cognitive/explicit memory of contextual fear conditioning.
Resumo:
In an attempt to improve behavioral memory, we devised a strategy to amplify the signal-to-noise ratio of the cAMP pathway, which plays a central role in hippocampal synaptic plasticity and behavioral memory. Multiple high-frequency trains of electrical stimulation induce long-lasting long-term potentiation, a form of synaptic strengthening in hippocampus that is greater in both magnitude and persistence than the short-lasting long-term potentiation generated by a single tetanic train. Studies using pharmacological inhibitors and genetic manipulations have shown that this difference in response depends on the activity of cAMP-dependent protein kinase A. Genetic studies have also indicated that protein kinase A and one of its target transcription factors, cAMP response element binding protein, are important in memory in vivo. These findings suggested that amplification of signals through the cAMP pathway might lower the threshold for generating long-lasting long-term potentiation and increase behavioral memory. We therefore examined the biochemical, physiological, and behavioral effects in mice of partial inhibition of a hippocampal cAMP phosphodiesterase. Concentrations of a type IV-specific phosphodiesterase inhibitor, rolipram, which had no significant effect on basal cAMP concentration, increased the cAMP response of hippocampal slices to stimulation with forskolin and induced persistent long-term potentiation in CA1 after a single tetanic train. In both young and aged mice, rolipram treatment before training increased long- but not short-term retention in freezing to context, a hippocampus-dependent memory task.
Resumo:
Mutation of the reeler gene (Reln) disrupts neuronal migration in several brain regions and gives rise to functional deficits such as ataxic gait and trembling in the reeler mutant mouse. Thus, the Reln product, reelin, is thought to control cell–cell interactions critical for cell positioning in the brain. Although an abundance of reelin transcript is found in the embryonic spinal cord [Ikeda, Y. & Terashima, T. (1997) Dev. Dyn. 210, 157–172; Schiffmann, S. N., Bernier, B. & Goffinet, A. M. (1997) Eur. J. Neurosci. 9, 1055–1071], it is generally thought that neuronal migration in the spinal cord is not affected by reelin. Here, however, we show that migration of sympathetic preganglionic neurons in the spinal cord is affected by reelin. This study thus indicates that reelin affects neuronal migration outside of the brain. Moreover, the relationship between reelin and migrating preganglionic neurons suggests that reelin acts as a barrier to neuronal migration.