918 results for Dynamic security analysis
Abstract:
The extensive impact and consequences of the 2010 Deepwater Horizon oil drilling rig failure in the Gulf of Mexico, together with expanding drilling activities in the Cuban Exclusive Economic Zone, have cast a spotlight on Cuban oil development. The threat of a drilling rig failure has evolved from a purely hypothetical concern to a potential reality with the commencement of active drilling in Cuban waters. The disastrous consequences of a drilling rig failure in Cuban waters would affect a number of vital interests of the US and of Caribbean nations in the general environs of Cuba. The US fishing and tourist industries would take major blows from a significant oil spill in Cuban waters, and substantial ecological damage and damage to beaches could occur for the US, Mexico, Haiti, and other countries as well. The US therefore needs the ability to independently monitor Cuban oceanic oil development. The advantages of an independent US early warning system providing essential real-time data on the possible failure of a drilling rig in Cuban waters are numerous: an ideal early warning system would inform the US, in essentially real time, that an event has occurred or is likely to occur. Currently operating monitoring systems that could provide early warning information are satellite-based; such systems can indicate the locations of both drilling rigs and operational drilling platforms. The system proposed in this paper relies upon low-frequency underwater sound. It can complement existing monitoring systems, which offer ocean-surface information, by providing sub-surface, near-real-time information. This "integrated system" combines many different forms of information, some gathered through sub-surface systems, some through electromagnetic remote sensing (satellites, aircraft, unmanned aerial vehicles), and some through other methods as well.
Although the proposed integrated system is in the developmental stage, it is based upon well-established technologies.
Abstract:
Indigenous movements have become increasingly powerful in the last couple of decades, and they are now important political actors in some South American countries, such as Bolivia, Ecuador, and, to a lesser extent, Peru and Chile. The rise of indigenous movements has provoked concern among U.S. policymakers and other observers who fear that these movements will exacerbate ethnic polarization, undermine democracy, and jeopardize U.S. interests in the region. This paper argues that concern over the rise of indigenous movements is greatly exaggerated. It maintains that the rise of indigenous movements has not brought about a marked increase in ethnic polarization in the region because most indigenous organizations have been ethnically inclusive and have eschewed violence. Although the indigenous movements have at times demonstrated a lack of regard for democratic institutions and procedures, they have also helped deepen democracy in the Andean region by promoting greater political inclusion and participation and by aggressively combating ethnic discrimination and inequality. Finally, this study suggests that the indigenous population has opposed some U.S.-sponsored initiatives, such as coca eradication and market reform, for legitimate reasons. Such policies have had some negative environmental, cultural, and economic consequences for indigenous people, which U.S. policymakers should try to address. The conclusion provides specific policy recommendations on how to go about this.
Abstract:
In 2009, South American military spending reached a total of $51.8 billion, a fifty percent increase over 2000 expenditures. The five-year moving average of arms transfers to South America was 150 percent higher for 2005 to 2009 than for 2000 to 2004.[1] These figures and others have led some observers to conclude that Latin America is engaged in an arms race. Other factors, however, account for Latin America's large military expenditures. Among them: Several countries have undertaken prolonged modernization efforts, recently made possible by six years of consistent regional growth.[2] A generational shift is at hand: armed forces are beginning to shed the stigma of association with past dictatorial regimes.[3] Countries are pursuing specific individual strategies rather than reacting to purchases made by neighbors. For example, Brazil wants to attain greater control of its Amazon rainforests and offshore territories, Colombia's spending is a response to internal threats, and Chile is continuing a modernization process begun in the 1990s.[4] Concerns remain, however: Venezuela continues to demonstrate poor democratic governance and a lack of transparency; relations between Colombia and Venezuela, Peru and Chile, and Bolivia and Paraguay must all continue to be monitored; and Brazil's military purchases, although legitimate, will likely result in a large accumulation of equipment.[5] These concerns can best be addressed by strengthening transparent procurement mechanisms and garnering greater participation in them.[6] The United States can do its part by supporting Latin American efforts to embrace the transparency process.
_________________
[1] Mark Bromley, "An Arms Race in Our Hemisphere? Discussing the Trends and Implications of Military Expenditures in South America," Brookings Institution Conference, Washington, D.C., June 3, 2010, transcript, pp. 12, 13, and 16.
[2] Marcos Robledo, "The Rearmament Debate: A Chilean Perspective," PowerPoint presentation, slide 18, 2010 Western Hemisphere Security Colloquium, Miami, Florida, May 25-26, 2010.
[3] Boris Yopo, "¿Carrera Armamentista en la Región?," La Tercera, November 2, 2009, http://www.latercera.com/contenido/895_197084_9.shtml, accessed October 8, 2010.
[4] Ray Walser, "An Arms Race in Our Hemisphere? Discussing the Trends and Implications of Military Expenditures in South America," Brookings Institution Conference, Washington, D.C., June 3, 2010, transcript, pp. 49, 50, 53, and 54.
[5] Ibid., Iñigo Guevara, p. 22.
[6] Ibid., Mark Bromley, pp. 18 and 19.
Abstract:
The goal of modern radiotherapy is to precisely deliver a prescribed radiation dose to delineated target volumes that contain a significant number of tumor cells while sparing the surrounding healthy tissues and organs. Precise delineation of treatment and avoidance volumes is key to precision radiation therapy. In recent years, considerable clinical and research effort has been devoted to integrating MRI into the radiotherapy workflow, motivated by MRI's superior soft-tissue contrast and functional imaging capability. Dynamic contrast-enhanced MRI (DCE-MRI) is a noninvasive technique that measures properties of the tissue microvasculature, and its sensitivity to radiation-induced vascular pharmacokinetic (PK) changes has been preliminarily demonstrated. In spite of its great potential, two major challenges have limited DCE-MRI's clinical application in radiotherapy assessment: the technical limitations of accurate DCE-MRI implementation and the need for novel DCE-MRI data analysis methods that extract richer functional heterogeneity information.
This study aims at improving current DCE-MRI techniques and developing new DCE-MRI analysis methods specifically for radiotherapy assessment. The study is accordingly divided into two parts. The first part focuses on DCE-MRI temporal resolution, one of the key DCE-MRI technical factors, and proposes several improvements to it; the second part explores the potential value of image heterogeneity analysis and the combination of multiple PK models for therapeutic response assessment, developing several novel DCE-MRI data analysis methods.
I. Improvement of DCE-MRI temporal resolution. First, the feasibility of improving DCE-MRI temporal resolution via image undersampling was studied. Specifically, a novel MR image iterative reconstruction algorithm was studied for DCE-MRI reconstruction, built on the recently developed compressed sensing (CS) theory. By using a limited k-space acquisition with shorter imaging time, images can be reconstructed in an iterative fashion under the regularization of a newly proposed total generalized variation (TGV) penalty term. In an IRB-approved retrospective study of DCE-MRI scans from brain radiosurgery patients, the clinically obtained image data were selected as reference data, and accelerated k-space acquisition was simulated by undersampling the full k-space of the reference images with designed sampling grids. Two undersampling strategies were proposed: 1) a radial multi-ray grid with a special angular distribution was adopted to sample each slice of the full k-space; 2) a series of Cartesian random sampling grids with spatiotemporal constraints from adjacent frames was adopted to sample the dynamic k-space series at a slice location. Two sets of PK parameter maps were generated, one from the undersampled data and one from the fully sampled data. Multiple quantitative measurements and statistical studies were performed to evaluate the accuracy of the PK maps generated from the undersampled data against the PK maps generated from the fully sampled data. Results showed that at a simulated acceleration factor of four, PK maps could be faithfully calculated from DCE images reconstructed from undersampled data, and no statistically significant differences were found between the regional PK mean values from the undersampled and fully sampled data sets. DCE-MRI acceleration using the investigated image reconstruction method was thus shown to be feasible and promising.
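As a rough illustration of the reconstruction idea (not the study's actual TGV algorithm), the sketch below alternates a k-space data-consistency projection with a gradient step on a plain total-variation penalty, applied to a synthetic phantom with simulated Cartesian line undersampling; all parameter values and the phantom are hypothetical:

```python
import numpy as np

def tv_grad(x):
    # Gradient of a smoothed total-variation penalty via finite differences.
    dx = np.roll(x, -1, axis=0) - x
    dy = np.roll(x, -1, axis=1) - x
    eps = 1e-8
    nx = dx / np.sqrt(dx ** 2 + eps)
    ny = dy / np.sqrt(dy ** 2 + eps)
    return -(nx - np.roll(nx, 1, axis=0)) - (ny - np.roll(ny, 1, axis=1))

def cs_recon(kspace, mask, lam=0.005, step=0.5, iters=100):
    """Iterative reconstruction: enforce consistency with the sampled
    k-space entries, then take a descent step on the TV penalty
    (standing in for the TGV penalty of the study)."""
    x = np.real(np.fft.ifft2(kspace * mask))
    for _ in range(iters):
        k = np.fft.fft2(x)
        k[mask] = kspace[mask]          # data-consistency projection
        x = np.real(np.fft.ifft2(k))
        x = x - step * lam * tv_grad(x)  # regularization step
    return x

# Toy piecewise-constant phantom, ~4x Cartesian line undersampling.
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
img[24:40, 24:40] = 2.0
full_k = np.fft.fft2(img)
mask = np.zeros((64, 64), dtype=bool)
mask[rng.choice(64, size=16, replace=False), :] = True  # 16 of 64 lines
mask[0, :] = True                                       # always keep DC line
recon = cs_recon(full_k, mask)
```

In the study's setting the undersampling grids are radial or spatiotemporally constrained and the penalty is TGV rather than TV; this sketch only shows the alternating projection/regularization structure common to such reconstructions.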
Second, for high temporal resolution DCE-MRI, a new PK model fitting method was developed to solve for PK parameters with better calculation accuracy and efficiency. This method is based on a derivative-based reformulation of the commonly used Tofts PK model, which is normally presented as an integral expression. The method also incorporates a Kolmogorov-Zurbenko (KZ) filter to suppress noise in the data and solves for the PK parameters as a linear problem in matrix form. In a computer simulation study, PK parameters representing typical intracranial values were selected as references to simulate DCE-MRI data at different temporal resolutions and data noise levels. Results showed that at both high temporal resolution (<1 s) and clinically feasible temporal resolution (~5 s), the new method calculated PK parameters more accurately than current calculation methods at clinically relevant noise levels; at high temporal resolution, the calculation efficiency of the new method was superior to current methods by about two orders of magnitude. In a retrospective study of clinical brain DCE-MRI scans, the PK maps derived from the proposed method were comparable with the results from current methods. Based on these results, it can be concluded that this new method enables accurate and efficient PK model fitting for high temporal resolution DCE-MRI.
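The thesis's derivative-based, KZ-filtered formulation is its own contribution, but the flavor of linear (non-iterative) PK fitting can be illustrated with the standard integral linearization of the Tofts model, Ct(t) = Ktrans ∫Cp − kep ∫Ct, solved by least squares; the arterial input function and parameter values below are hypothetical:

```python
import numpy as np

def cumtrapz(y, dt):
    # Cumulative trapezoidal integral of a uniformly sampled signal.
    out = np.zeros_like(y)
    out[1:] = np.cumsum((y[1:] + y[:-1]) * 0.5) * dt
    return out

def tofts_concentration(t, cp, ktrans, kep):
    # Forward Tofts model: Ct(t) = Ktrans * int_0^t Cp(u) exp(-kep (t-u)) du
    dt = t[1] - t[0]
    ct = np.zeros_like(t)
    for i in range(len(t)):
        kernel = np.exp(-kep * (t[i] - t[: i + 1]))
        ct[i] = ktrans * np.trapz(cp[: i + 1] * kernel, dx=dt)
    return ct

def fit_tofts_linear(t, cp, ct):
    """Linear least-squares fit of Ct = Ktrans*int(Cp) - kep*int(Ct);
    avoids nonlinear iteration, one motivation for linearized fitting."""
    dt = t[1] - t[0]
    A = np.column_stack([cumtrapz(cp, dt), -cumtrapz(ct, dt)])
    ktrans, kep = np.linalg.lstsq(A, ct, rcond=None)[0]
    return ktrans, kep

t = np.arange(0, 5, 1 / 60)           # 5 min sampled at 1 s (hypothetical)
cp = 5.0 * t * np.exp(-t)             # toy arterial input function
ct = tofts_concentration(t, cp, ktrans=0.2, kep=0.5)   # units: 1/min
k_est = fit_tofts_linear(t, cp, ct)
```

On noiseless data this recovers the reference Ktrans and kep closely; the KZ filtering step of the proposed method addresses exactly the noise amplification that such linear formulations suffer on real high-temporal-resolution data.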
II. Development of DCE-MRI analysis methods for therapeutic response assessment. This part aims at methodology development along two approaches. The first is to develop model-free analysis methods for evaluating DCE-MRI functional heterogeneity. This approach is inspired by the rationale that radiotherapy-induced functional change could be heterogeneous across the treatment area. The first effort was a translational investigation of classic fractal dimension theory for DCE-MRI therapeutic response assessment. In a small-animal anti-angiogenesis drug therapy experiment, randomly assigned treatment/control groups received multiple-fraction treatments, with one pre-treatment and multiple post-treatment high-spatiotemporal-resolution DCE-MRI scans. In the post-treatment scan two weeks after the start of treatment, the investigated Rényi dimensions of the classic PK rate constant map demonstrated significant differences between the treatment and control groups; when the Rényi dimensions were adopted for treatment/control group classification, the achieved accuracy was higher than that obtained using conventional PK parameter statistics. Following this pilot work, two novel texture analysis methods were proposed. First, a new technique called the Gray Level Local Power Matrix (GLLPM) was developed to address the lack of temporal information and the poor computational efficiency of the commonly used Gray Level Co-Occurrence Matrix (GLCOM) techniques. In the same small-animal experiment, the dynamic curves of Haralick texture features derived from the GLLPM performed better overall in treatment/control separation and classification than the corresponding curves derived from current GLCOM techniques. The second developed method is dynamic Fractal Signature Dissimilarity (FSD) analysis. Inspired by classic fractal dimension theory, this method quantitatively measures the dynamics of tumor heterogeneity on DCE images during contrast agent uptake.
In the small-animal experiment mentioned above, selected parameters from the dynamic FSD analysis showed significant differences between treatment and control groups as early as after one treatment fraction; in contrast, metrics from conventional PK analysis showed significant differences only after three treatment fractions. When dynamic FSD parameters were used, treatment/control group classification after the first treatment fraction improved over that obtained using conventional PK statistics. These results suggest that this novel method is promising for capturing early therapeutic response.
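For background, the classic fractal dimension that inspired both the Rényi-dimension and FSD analyses can be estimated with a simple box-counting procedure; a minimal sketch on a synthetic binary map (not the study's DCE data):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting fractal dimension of a binary 2-D pattern:
    the slope of log N(s) versus log(1/s), where N(s) is the number of
    occupied s-by-s boxes at box size s."""
    counts = []
    n = mask.shape[0]
    for s in sizes:
        blocks = mask[: n - n % s, : n - n % s].reshape(n // s, s, -1, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square is 2-dimensional under box counting.
mask = np.ones((64, 64), dtype=bool)
dim = box_counting_dimension(mask)   # close to 2.0
```

The Rényi dimensions used in the pilot study generalize this quantity (box counting corresponds to the order-0 case); the dissertation applies such measures to thresholded PK parameter maps rather than to synthetic patterns.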
The second approach to developing novel DCE-MRI methods is to combine PK information from multiple PK models. Currently, the classic Tofts model or its alternative version is widely adopted for DCE-MRI analysis as a gold-standard approach to therapeutic response assessment. Previously, a shutter-speed (SS) model was proposed to incorporate the transcytolemmal water exchange effect into contrast agent concentration quantification. In spite of its richer biological assumptions, its application in therapeutic response assessment has been limited. It is therefore intriguing to combine information from the SS model and the classic Tofts model to explore potential new biological information for treatment assessment. The feasibility of this idea was investigated in the same small-animal experiment. The SS model was compared against the Tofts model for therapeutic response assessment using regional mean values of the PK parameters. Based on the modeled transcytolemmal water exchange rate, a biological subvolume was proposed and automatically identified using histogram analysis. Within the biological subvolume, the PK rate constant derived from the SS model proved superior to that from the Tofts model in treatment/control separation and classification. Furthermore, novel biomarkers were designed to integrate the PK rate constants from these two models. When evaluated within the biological subvolume, this biomarker reflected significant treatment/control differences in both post-treatment evaluations. These results confirm the potential value of the SS model, as well as its combination with the Tofts model, for therapeutic response assessment.
In summary, this study addressed two problems of DCE-MRI application in radiotherapy assessment. In the first part, a method of accelerating DCE-MRI acquisition for better temporal resolution was investigated, and a novel PK model fitting algorithm was proposed for high temporal resolution DCE-MRI. In the second part, two model-free texture analysis methods and a multiple-model analysis method were developed for DCE-MRI therapeutic response assessment. The presented work could benefit the future routine clinical application of DCE-MRI in radiotherapy assessment.
Abstract:
Bayesian nonparametric models, such as the Gaussian process and the Dirichlet process, have been extensively applied to target kinematics modeling in applications including environmental monitoring, traffic planning, endangered species tracking, dynamic scene analysis, autonomous robot navigation, and human motion modeling. As these successful applications show, Bayesian nonparametric models can adjust their complexity adaptively from data as necessary and are resistant to overfitting and underfitting. However, most existing works assume that the sensor measurements used to learn the Bayesian nonparametric target kinematics models are obtained a priori, or that the target kinematics can be measured by the sensor at any given time throughout the task. Little work has been done on controlling a sensor with a bounded field of view to obtain the measurements of mobile targets that are most informative for reducing the uncertainty of the Bayesian nonparametric models. To present the systematic sensor planning approach to learning Bayesian nonparametric models, the Gaussian process target kinematics model is introduced first; it is capable of describing time-invariant spatial phenomena, such as ocean currents, temperature distributions, and wind velocity fields. The Dirichlet process-Gaussian process target kinematics model is subsequently discussed for modeling mixtures of mobile targets, such as pedestrian motion patterns.
Novel information theoretic functions are developed for these Bayesian nonparametric target kinematics models to represent the expected utility of measurements as a function of sensor control inputs and random environmental variables. A Gaussian process expected Kullback-Leibler (KL) divergence is developed as the expectation, taken with respect to the future measurements, of the KL divergence between the current (prior) and posterior Gaussian process target kinematics models. This approach is then extended to a new information value function for estimating target kinematics described by a Dirichlet process-Gaussian process mixture model. A theorem is proposed showing that the novel information theoretic functions are bounded. Based on this theorem, efficient estimators of the new information theoretic functions are designed and proved to be unbiased, with the variance of the resulting approximation error decreasing linearly as the number of samples increases. The computational complexity of optimizing the novel information theoretic functions under sensor dynamics constraints is studied and proved to be NP-hard. A cumulative lower bound is then proposed to reduce the computational complexity to polynomial time.
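At a finite set of test points, the prior and posterior Gaussian process models reduce to multivariate normals, so the expected-KL information value builds on the closed-form Gaussian KL divergence; a minimal sketch of that formula with made-up means and covariances:

```python
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """Closed-form KL( N(mu0, cov0) || N(mu1, cov1) ) for multivariate
    normals: 0.5 * (tr(S1^-1 S0) + d' S1^-1 d - k + ln det S1/det S0)."""
    k = mu0.shape[0]
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(cov1_inv @ cov0)
                  + diff @ cov1_inv @ diff
                  - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

# Hypothetical example: posterior (after a measurement) versus prior
# evaluated at 3 GP test points.
prior_mu = np.zeros(3)
prior_cov = np.array([[1.0, 0.5, 0.2],
                      [0.5, 1.0, 0.5],
                      [0.2, 0.5, 1.0]])
post_mu = np.array([0.3, 0.1, 0.0])
post_cov = 0.5 * prior_cov           # a measurement shrinks uncertainty
kl = gaussian_kl(post_mu, post_cov, prior_mu, prior_cov)
```

The dissertation's information function takes the expectation of this divergence over the not-yet-collected measurements as a function of the sensor control input; the expression above is only its per-outcome building block.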
Three sensor planning algorithms are developed according to the assumptions on the target kinematics and the sensor dynamics. For problems where the control space of the sensor is discrete, a greedy algorithm is proposed. The efficiency of the greedy algorithm is demonstrated by a numerical experiment with ocean current data obtained from moored buoys. A sweep line algorithm is developed for applications where the sensor control space is continuous and unconstrained. Synthetic simulations as well as physical experiments with ground robots and a surveillance camera are conducted to evaluate the performance of the sweep line algorithm. Moreover, a lexicographic algorithm is designed, based on the cumulative lower bound of the novel information theoretic functions, for the scenario where the sensor dynamics are constrained. Numerical experiments with real data collected from indoor pedestrians by a commercial pan-tilt camera are performed to examine the lexicographic algorithm. Results from both the numerical simulations and the physical experiments show that the three sensor planning algorithms proposed in this dissertation based on the novel information theoretic functions are superior at learning the target kinematics with little or no prior knowledge.
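For a discrete control space, the greedy step of such a planner amounts to picking the control input with the highest per-step gain and updating the belief; in the toy sketch below a set-coverage objective stands in for the expected-KL information value, and all names and data are hypothetical:

```python
def greedy_sensor_plan(view_sets, budget):
    """Greedy planner over a discrete control space: at each step choose
    the sensor configuration covering the most not-yet-observed targets.
    (Coverage gain stands in for the information value of the study.)"""
    covered = set()
    plan = []
    for _ in range(budget):
        gain, best = max(
            (len(targets - covered), name) for name, targets in view_sets.items()
        )
        if gain == 0:
            break                      # nothing new to observe
        plan.append(best)
        covered |= view_sets[best]     # belief update: mark targets seen
    return plan, covered

# Hypothetical pan-tilt configurations and the target IDs each can observe.
views = {"pan_left": {1, 2, 3}, "pan_right": {3, 4}, "tilt_up": {4, 5, 6}}
plan, covered = greedy_sensor_plan(views, budget=2)
```

In the dissertation the per-step score is the bounded information theoretic function rather than raw coverage, and the cumulative lower bound plays the analogous role for the constrained (lexicographic) case.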
Abstract:
This study addresses the production and characterization of composites of polylactic acid (PLA) and natural fibers (flax, wood flour). The foaming of PLA and of its composites was also studied in order to evaluate the effects of the injection-molding conditions and of the reinforcement on the final properties of these materials. In the first part, composites of PLA and flax fibers were produced by extrusion followed by injection molding. The effect of varying the filler content (15, 25, and 40 wt.%) on the morphological, mechanical, thermal, and rheological characteristics of the composites was evaluated. In the second step, wood flour (WF) was chosen to reinforce the PLA. The PLA/WF composites were prepared as in the first part, and a complete series of morphological, mechanical, and thermal characterizations, together with dynamic mechanical analysis, was carried out to obtain a full assessment of the effect of filler content (15, 25, and 40 wt.%) on the properties of PLA. Finally, the third part of this study concerns composites of PLA and natural reinforcement used to produce foamed composites. These foams were made with an exothermic blowing agent (azodicarbonamide) via injection molding, after compounding the PLA with the natural fibers. In this case, the shot size (the amount of material injected into the mold: 31, 33, 36, 38, and 43% of the capacity of the injection press) and the wood-flour content (15, 25, and 40 wt.%) were varied. Mechanical and thermal characterization was carried out, and the results showed that the natural reinforcements studied (flax and wood flour) improved the mechanical properties of the composites, notably the flexural modulus and the impact strength of the polymer (PLA).
In addition, foaming was effective for both the neat PLA and its composites, as the densities were significantly reduced.
Abstract:
Integrated water resource management requires distinguishing the water pathways that are accessible to societies from those that are not. Water follows many pathways, which vary strongly from one place to another. The question can be simplified by focusing instead on water's two destinations. Blue water forms the stores and fluxes of the hydrosystem: rivers, aquifers, and subsurface flows. Green water is the invisible flux of water vapor returning to the atmosphere; it includes the water consumed by plants and the water held in soils. Yet many studies consider only one type of blue water, generally addressing only streamflow or, more rarely, groundwater recharge; the overall picture is then missing. At the same time, climate change is affecting these water pathways by altering the various components of the hydrological cycle in distinct ways. The present study uses the SWAT modeling tool to track all components of the hydrological cycle and to quantify the impact of climate change on the hydrosystem of the Garonne river basin. The first part of the work refined the model setup to best address the question at hand. Particular care was taken in the use of gridded meteorological data (SAFRAN) and in accounting for snow in the mountainous areas. Model parameter calibration was tested in a differential split-sampling framework, calibrating and then validating on climatically contrasting years in order to assess the robustness of the simulation under climate change. This step yielded a substantial improvement in performance over the calibration period (2000-2010) and demonstrated the stability of the model under climate change.
Subsequently, simulations spanning a century (1960-2050) were produced and analyzed in two phases: i) The past period (1960-2000), based on climate observations, served as a long-term validation period for the simulated discharge, with very good performance. Analysis of the different hydrological components reveals a strong impact on green-water fluxes and stores, with a decrease in soil water content and a large increase in evapotranspiration. The blue-water components are mainly disturbed in the snowpack and in discharge, both of which show a substantial decline. ii) Hydrological projections were produced (2010-2050) using a range of scenarios and climate models from dynamical downscaling. The analysis of these simulations largely confirms the conclusions drawn from the past period: a strong impact on green water, again with decreasing soil water content and increasing potential evapotranspiration. The simulations show that summer soil water content becomes low enough to reduce actual evapotranspiration fluxes, pointing to a possible future deficit of green-water stores. Moreover, while the analysis of the blue-water components still shows a significant decline in snowpack, discharge now appears to increase in autumn and winter. These results signal an "acceleration" of the surface blue-water components, probably related to the increase in extreme precipitation events.
This work provided an analysis of the variations of most components of the hydrological cycle at the basin scale, confirming the importance of accounting for all these components when assessing the impact of climate change, and more broadly of environmental change, on water resources.
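Calibration and validation skill for simulated discharge of the kind described here is commonly scored with the Nash-Sutcliffe efficiency (the abstract does not name its metric, so this is illustrative background); a minimal sketch with made-up discharge values:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
    1 is a perfect fit; 0 means no better than predicting the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

obs = np.array([10.0, 12.0, 30.0, 25.0, 14.0])   # toy daily discharges (m3/s)
sim = np.array([11.0, 13.0, 27.0, 26.0, 13.0])
nse = nash_sutcliffe(obs, sim)
```

In a differential split-sampling test, this score is computed separately on the climatically contrasting calibration and validation periods, and a small drop between the two is taken as evidence of model robustness.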
Abstract:
This article examines the evolution of city sizes in the states of northeastern Brazil for the years 1990, 2000, and 2010 through the empirical regularity known as Zipf's law, which can be represented by the Pareto distribution. Analyzing the dynamics of the population distribution over time, urban growth revealed a persistent hierarchy led by Salvador, Fortaleza, and Recife, while São Luís held fourth place in the ranking of the largest cities, a position that persisted over the last two decades. Zipf's law did not hold when the cities of the Northeast were considered jointly, which may be due to the lower degree of urban development of the cities in this region. When the states were analyzed separately, Zipf's law was likewise not observed, although Gibrat's law, which postulates that city growth is independent of city size, was verified. Finally, the installation of the mining-metallurgical complex in Maranhão is believed to have contributed to development and to the reduction of intra-city urban inequality in that area.
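The rank-size test described above is usually run as an OLS regression of log rank on log population, with an estimated Pareto/Zipf exponent near 1 supporting Zipf's law; a minimal sketch on a synthetic, exactly Zipfian series (real census populations would replace the toy data):

```python
import numpy as np

def zipf_exponent(populations):
    """Estimate the Pareto/Zipf exponent by OLS of log(rank) on log(size).
    Under Zipf's law, rank * size is roughly constant, so the exponent is ~1."""
    sizes = np.sort(np.asarray(populations, dtype=float))[::-1]
    ranks = np.arange(1, len(sizes) + 1)
    slope, _ = np.polyfit(np.log(sizes), np.log(ranks), 1)
    return -slope   # the fitted slope is negative; report the positive exponent

# Toy example: populations exactly proportional to 1/rank.
pops = 1_000_000 / np.arange(1, 51)
alpha = zipf_exponent(pops)
```

For small city samples, a common refinement is the Gabaix-Ibragimov correction, which regresses log(rank - 1/2) instead of log(rank) to reduce small-sample bias in the exponent.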
Abstract:
The need to produce more efficient electronic devices and to miniaturize them has been one of the main goals of the electronics industry. This has created the need to improve the performance of printed circuit boards, making them more flexible, less noisy, more stable against abrupt temperature variations, and able to operate over a wide range of frequencies and powers. One strategy under study is the possibility of embedding the passive components, namely capacitors, in film form directly inside the board. To maintain a high dielectric constant and low losses while preserving the flexibility associated with the polymer, so-called polymer-matrix composites have been developed. This dissertation studied the dielectric and electrical behavior of mixtures of the ceramic CaCu3Ti4O12 (CCTO) with a styrene-isoprene-styrene copolymer. Films with different CCTO concentrations were prepared by the blade-casting (arrastamento) method, together with the Polymer Centre of Slovakia. Films were also prepared by spin coating at the same concentrations. Two distinct methods were used to prepare the CCTO powder: solid-state reaction and sol-gel. The films produced were characterized structurally (X-ray diffraction, Raman spectroscopy), morphologically (scanning electron microscopy), and dielectrically. The dielectric characterization determined the dielectric constant and losses of all films, both at room temperature and over the temperature range from 200 K to 400 K, which made it possible to identify glass and sub-glass relaxations and thus to calculate the glass transition temperatures and activation energies, respectively.
Adhesion tests were performed, and dynamic mechanical analysis was applied to determine the glass transition temperatures of the films prepared by the blade-casting method. The mixing law that best describes the dielectric behavior of the composite was also investigated: the generalized Looyenga law was found to give the best fit to the dielectric response of the composites produced.
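For reference, the classic two-phase Looyenga law referred to above averages the cube roots of the component permittivities (the generalized form used in the dissertation may differ); a minimal sketch with hypothetical permittivity values:

```python
def looyenga_permittivity(eps_matrix, eps_filler, vol_fraction):
    """Classic Looyenga mixing law for a two-phase composite:
    eps_eff^(1/3) = f * eps_filler^(1/3) + (1 - f) * eps_matrix^(1/3)."""
    return (vol_fraction * eps_filler ** (1 / 3)
            + (1 - vol_fraction) * eps_matrix ** (1 / 3)) ** 3

# Hypothetical values: a flexible polymer (~2.5) filled with CCTO (~1000);
# neither number is taken from the dissertation.
eps = looyenga_permittivity(2.5, 1000.0, 0.3)
```

The effective permittivity interpolates between the matrix and filler values and rises steeply with filler fraction, which is why such laws are fitted against the measured dielectric response of the CCTO-loaded films.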
Abstract:
Abstract: Natural materials have received considerable attention in many applications because they are degradable and derived directly from the earth. In addition, natural materials can be obtained from renewable resources such as plants (e.g., cellulosic fibers such as flax, hemp, and jute). Being cheap and lightweight, cellulosic natural fibers are good candidates for reinforcing bio-based polymer composites. However, the hydrophilic nature of these fibers, resulting from the hydroxyl groups in their structure, restricts their application in polymeric matrices: it leads to weak interfacial adhesion and to difficulties in mixing due to the poor wettability of the fibers within the matrices. Many attempts have been made to modify the surface properties of natural fibers, including physical, chemical, and physico-chemical treatments, but these treatments can neither cure the intrinsic defects of the fiber surface nor improve the moisture and alkali resistance of the fibers. The creation of a thin film on the fibers, however, could achieve these objectives. This study aims firstly to functionalize flax fibers by selective oxidation of the hydroxyl groups in the cellulose structure, paving the way for better adhesion of subsequent amphiphilic TiO2 thin films created by the sol-gel technique, a method capable of creating a very thin metallic oxide layer on a substrate. In the next step, the effect of oxidation on the interfacial adhesion between the TiO2 film and the fiber, and thus on the physical and mechanical properties of the fiber, was characterized. Eventually, the TiO2-grafted fibers, with and without oxidation, were used to reinforce polylactic acid (PLA).
Tensile, impact, and short-beam shear tests were performed to characterize the mechanical properties, while thermogravimetric analysis (TGA), differential scanning calorimetry (DSC), dynamic mechanical analysis (DMA), and moisture absorption measurements were used to probe the physical properties of the composites. Results showed a significant increase in the physical and mechanical properties of the flax fibers when they were oxidized prior to TiO2 grafting. Moreover, the TiO2-grafted oxidized fibers produced significant changes when used as reinforcement in PLA: higher interfacial strength and lower water absorption were obtained in comparison with the reference samples.
Abstract:
We report the suitability of an Einstein-Podolsky-Rosen entanglement source for Gaussian continuous-variable quantum key distribution at 1550 nm. Our source is based on a single continuous-wave squeezed vacuum mode combined with a vacuum mode at a balanced beam splitter. Extending a recent security proof, we characterize the source by quantifying the extractable length of a composable secure key from a finite number of samples under the assumption of collective attacks. We show that distances on the order of 10 km are achievable with this source for a reasonable sample size, despite the fact that the entanglement was generated by including a vacuum mode. Our security analysis applies to all states having an asymmetry in the field quadrature variances, including those generated by superposing two squeezed modes with different squeezing strengths.
Abstract:
The use of raw materials from renewable sources for the production of materials has been the subject of several studies because of their potential to substitute for petrochemical-based materials. The addition of natural fibers to polymers represents an alternative for the partial or total replacement of glass fibers in composites. In this work, carnauba leaf fibers were used in the production of biodegradable composites with a polyhydroxybutyrate (PHB) matrix. To improve the fiber/matrix interfacial properties, four chemical treatments of the fibers were studied. The effects of the different chemical treatments on the morphological, physical, chemical, and mechanical properties of the fibers and composites were investigated by scanning electron microscopy (SEM), infrared spectroscopy, X-ray diffraction, tensile and flexural tests, dynamic mechanical analysis (DMA), thermogravimetry (TGA), and differential scanning calorimetry (DSC). The results of the tensile tests indicated an increase in the tensile strength of the composites after chemical treatment of the fibers, with the best results for the hydrogen peroxide treated fibers, even though the tensile strength of the fibers themselves was slightly reduced. This suggests a better fiber/matrix interaction, which was also observed in SEM fractographs. The glass transition temperature (Tg) was reduced for all composites compared to the pure polymer, which can be attributed to the absorption of solvents, moisture, and other low-molecular-weight molecules by the fibers.
Abstract:
Directed research project presented to the Faculté des études supérieures toward the degree of Master of Science (M.Sc.) in criminology, internal security option.