952 results for Advanced characterization methods
Abstract:
This thesis reviews and analyzes some key aspects of the behavior of sensors based on TSM (Thickness Shear Mode) piezoelectric resonators, and their application to the study and characterization of two viscoelastic media of great interest: magnetorheological fluids and microbial biofilms. The operation of these sensors is based on the measurement of their resonant properties, which vary upon contact with the material to be analyzed. We have performed a multi-frequency analysis, working in several resonance modes of the transducer, in some applications even simultaneously (by pulsed excitation). We reviewed phenomena such as the presence of micro-contacts on the sensor surface and the resonance of viscoelastic layers of finite thickness, which can affect quartz sensors contrary to the predictions of conventional theory (Sauerbrey and Kanazawa), leading to positive resonant frequency shifts. In addition, we studied the effect of non-uniform deposition on the piezoelectric resonator. Polyurethane depositions were measured, and the resonator response to these depositions was modeled by FEM. The numerical model allows studying the behavior of the resonator as different geometric variables of the deposited layer (thickness, surface, non-uniformity and deposition zone) are modified. It has been shown that, for thicknesses between approximately a quarter and a half wavelength, a non-uniform viscoelastic layer on the sensor surface amplifies the positive resonance frequency shift relative to a uniform layer. The geometric pattern of the sensor's sensitivity, which is also non-uniform over its surface, was analyzed as well.
TSM sensors have been applied to study the viscoelastic changes occurring in various magneto-rheological fluids (MRFs) subjected to different controlled shear stresses driven by a rheometer. A direct relationship was found between various rheological parameters obtained with the rheometer (normal force, G', G'', shear stress, shear rate, etc.) and the acoustic parameters, the MRFs being characterized both in the absence of a magnetic field and under applied fields of different intensities. We have studied the advantages of this technique over characterization methods based on commercial rheometers, noting that TSM sensors are more sensitive to some relevant aspects of the fluid, such as the deposition of particles (fluid stability), the breaking of the structures formed in the MRFs both in the presence and absence of a magnetic field, and the rigidity of the micro-contacts appearing between particles and surfaces. TSM sensors have also been used to monitor in real time the formation of Staphylococcus epidermidis and Escherichia coli biofilms on uncoated quartz crystal resonators, performing tests with strains of differing biofilm-forming ability. It was shown that, once an initial homogeneous adhesion of bacteria to the substrate has occurred, the biofilm can be considered a semi-infinite layer: the quartz sensor reflects only the viscoelastic properties of the region immediately adjacent to the resonator and is not sensitive to what happens in the upper layers of the biofilm. The experiments allowed the evaluation of the complex stiffness modulus of the biofilms at various frequencies, showing that the characteristic parameter indicating biofilm adhesion, for both S. epidermidis and E. coli, is an increase in G' (related to the elasticity or stiffness of the layer), which is linked to an increase in the resonance frequency of the sensor.
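The conventional theory that the abstract contrasts against can be written down compactly. The sketch below is a hedged illustration, not anything from the thesis itself: it evaluates the standard Sauerbrey (thin rigid film) and Kanazawa-Gordon (semi-infinite Newtonian liquid) frequency-shift relations. The quartz constants are the standard AT-cut values; the 5 MHz sensor, mass load and liquid are illustrative assumptions.

```python
import math

# Standard AT-cut quartz constants (textbook values, not from the thesis).
RHO_Q = 2648.0       # quartz density, kg/m^3
MU_Q = 2.947e10      # quartz shear modulus, Pa

def sauerbrey_shift(f0, delta_m, area):
    """Frequency shift (Hz) for a thin rigid layer of mass delta_m (kg)
    on electrode area `area` (m^2): df = -2 f0^2 dm / (A sqrt(rho_q mu_q))."""
    return -2.0 * f0**2 * delta_m / (area * math.sqrt(RHO_Q * MU_Q))

def kanazawa_shift(f0, rho_l, eta_l):
    """Frequency shift (Hz) for a semi-infinite Newtonian liquid of density
    rho_l (kg/m^3) and viscosity eta_l (Pa*s):
    df = -f0^(3/2) sqrt(rho_l eta_l / (pi rho_q mu_q))."""
    return -f0**1.5 * math.sqrt(rho_l * eta_l / (math.pi * RHO_Q * MU_Q))

# Illustrative 5 MHz fundamental-mode sensor:
f0 = 5e6
df_mass = sauerbrey_shift(f0, delta_m=1e-9, area=0.2e-4)   # 1 ug on 0.2 cm^2
df_water = kanazawa_shift(f0, rho_l=998.0, eta_l=1.0e-3)   # water at ~20 C
```

Both relations always predict negative shifts for added mass or liquid loading; the positive shifts discussed above (micro-contacts, finite-thickness layer resonance) fall outside the scope of these two equations.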
Abstract:
New experiments using scanning probe microscopies and advanced optical methods allow us to study molecules as individuals, not just as populations. The findings of these studies not only include the confirmation of results expected from studies of bulk matter, but also give substantially new information concerning the complexity of biomolecules or molecules in a structured environment. The technique lays the groundwork for achieving the control of an individual molecule's motion. Ultimately, this work may lead to such practical applications as miniaturized sensors.
Abstract:
Paper presented at CYTEF 2016 / VIII Iberian Congress | VI Ibero-American Congress of Refrigeration Sciences and Techniques, 3-6 May 2016, Coimbra, Portugal
Abstract:
The international perspectives on these issues are especially valuable in an increasingly connected, but still institutionally and administratively diverse world. The research addressed in several chapters in this volume includes issues around technical standards bodies like EpiDoc and the TEI, engaging with ways these standards are implemented, documented, taught, used in the process of transcribing and annotating texts, and used to generate publications and as the basis for advanced textual or corpus research. Other chapters focus on various aspects of philological research and content creation, including collaborative or community driven efforts, and the issues surrounding editorial oversight, curation, maintenance and sustainability of these resources. Research into the ancient languages and linguistics, in particular Greek, and the language teaching that is a staple of our discipline, are also discussed in several chapters, in particular for ways in which advanced research methods can lead into language technologies and vice versa and ways in which the skills around teaching can be used for public engagement, and vice versa. A common thread through much of the volume is the importance of open access publication or open source development and distribution of texts, materials, tools and standards, both because of the public good provided by such models (circulating materials often already paid for out of the public purse), and the ability to reach non-standard audiences, those who cannot access rich university libraries or afford expensive print volumes. Linked Open Data is another technology that results in wide and free distribution of structured information both within and outside academic circles, and several chapters present academic work that includes ontologies and RDF, either as a direct research output or as essential part of the communication and knowledge representation. 
Several chapters focus not on the literary and philological side of classics, but on the study of cultural heritage, archaeology, and the material supports on which original textual and artistic material are engraved or otherwise inscribed, addressing both the capture and analysis of artefacts in both 2D and 3D, the representation of data through archaeological standards, and the importance of sharing information and expertise between the several domains, both within and without academia, that study, record and conserve ancient objects. Almost without exception, the authors reflect on the issues of interdisciplinarity and collaboration, the relationship between their research practice and teaching and/or communication with a wider public, and the importance of the role of the academic researcher in contemporary society and in the context of cutting-edge technologies. How research is communicated in a world of instant-access blogging and 140-character micromessaging, and how our expectations of the media affect not only how we publish but how we conduct our research, are questions about which all scholars need to be aware and self-critical.
Abstract:
Ligand-directed signal bias offers opportunities for sculpting molecular events, with the promise of better, safer therapeutics. Critical to the exploitation of signal bias is an understanding of the molecular events coupling ligand binding to intracellular signaling. Activation of class B G protein-coupled receptors is driven by interaction of the peptide N terminus with the receptor core. To understand how this drives signaling, we have used advanced analytical methods that enable separation of effects on pathway-specific signaling from those that modify agonist affinity, and mapped the functional consequence of receptor modification onto three-dimensional models of a receptor-ligand complex. This yields molecular insights into the initiation of receptor activation and the mechanistic basis for biased agonism. Our data reveal that peptide agonists can engage different elements of the receptor extracellular face to achieve effector coupling and biased signaling, providing a foundation for rational design of biased agonists.
Abstract:
This dissertation studies the political economy of trade policy in a developing country, namely Turkey, under different economic and political regimes. The research analyzes the effects of these different regimes on the import structure, the trade policy and the industrialization process in Turkey and derives implications for aggregate welfare. In the second chapter, the effects of trade liberalization policies on import demand are examined. Using disaggregated industry-level data, import demand elasticities for various sectors have been computed, analyzed under different economic regimes, and compared with those of developed countries. The results are statistically significant and reliable, and conform to the predictions of economic theory. Estimation of these elasticities is also a necessary ingredient for the third chapter of the dissertation. The third chapter examines the predictions of the state-of-the-art "Protection for Sale" model of Grossman and Helpman (1994). Employing advanced econometric methods and a unique data set, strong support is found for the fundamental predictions of the model in the context of Turkey. Specifically, the government is found to attach a much higher weight to social welfare than to political contributions. This weight is higher under the democratic regime than under the dictatorship, a result potentially of interest to all researchers in the area of political economy. The fourth chapter looks at the effects of industry concentration and import price shocks on protection, promotion and the choice of policy instruments in Turkey. In this context, it examines and finds support for the predictions of some well-known models in the literature.
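The Grossman-Helpman equilibrium condition that underlies the "Protection for Sale" test can be sketched directly. The numbers below are invented for illustration, not the dissertation's estimates: `gh_protection` inverts the model's cross-sector relation t/(1+t) = ((I - alpha)/(a + alpha)) * (z/e).

```python
def gh_protection(I, alpha, a, z, e):
    """Grossman-Helpman (1994) equilibrium protection:
    t/(1+t) = ((I - alpha)/(a + alpha)) * (z/e), solved for the tariff t.
    I: 1 if the sector is politically organized, else 0
    alpha: fraction of the population represented by organized lobbies
    a: government's weight on aggregate welfare relative to contributions
    z: inverse import-penetration ratio (domestic output / imports)
    e: absolute import demand elasticity
    """
    x = ((I - alpha) / (a + alpha)) * (z / e)
    return x / (1.0 - x)   # invert t/(1+t) = x

# Illustrative numbers: an organized and an unorganized sector with
# z = 2, e = 2, alpha = 0.5, and a high welfare weight a = 50.
t_organized = gh_protection(I=1, alpha=0.5, a=50.0, z=2.0, e=2.0)
t_unorganized = gh_protection(I=0, alpha=0.5, a=50.0, z=2.0, e=2.0)
```

With a large welfare weight `a`, as the abstract reports finding for Turkey, predicted tariffs are small even in organized sectors, while unorganized sectors receive a slight implicit import subsidy (negative t).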
Abstract:
Archaeologists are often considered frontrunners in employing spatial approaches within the social sciences and humanities, including geospatial technologies such as geographic information systems (GIS) that are now routinely used in archaeology. Since the late 1980s, GIS has mainly been used to support data collection and management as well as spatial analysis and modeling. While fruitful, these efforts have arguably neglected the potential contribution of advanced visualization methods to the generation of broader archaeological knowledge. This paper reviews the use of GIS in archaeology from a geographic visualization (geovisual) perspective and examines how these methods can broaden the scope of archaeological research in an era of more user-friendly cyber-infrastructures. Like most computational databases, GIS do not easily support temporal data. This limitation is particularly problematic in archaeology because processes and events are best understood in space and time. To deal with such shortcomings in existing tools, archaeologists often end up having to reduce the diversity and complexity of archaeological phenomena. Recent developments in geographic visualization begin to address some of these issues, and are pertinent in the globalized world as archaeologists amass vast new bodies of geo-referenced information and work towards integrating them with traditional archaeological data. Greater effort in developing geovisualization and geovisual analytics appropriate for archaeological data can create opportunities to visualize, navigate and assess different sources of information within the larger archaeological community, thus enhancing possibilities for collaborative research and new forms of critical inquiry.
Abstract:
In 2004, the National Institutes of Health made available the Patient-Reported Outcomes Measurement Information System (PROMIS®), which consists of innovative item banks for health assessment. It is based on classical, reliable Patient-Reported Outcomes (PROs) and includes advanced statistical methods, such as Item Response Theory and Computerized Adaptive Testing. One of the PROMIS® Domain Frameworks is Physical Function, whose item bank needs to be translated and culturally adapted so it can be used in Portuguese-speaking countries. This work aimed to translate and culturally adapt the PROMIS® Physical Function item bank into Portuguese. The FACIT (Functional Assessment of Chronic Illness Therapy) translation methodology, which consists of eight stages of translation and cultural adaptation, was used. Fifty subjects above the age of 18 years participated in the pre-test (seventh stage). The questionnaire was answered by the participants (self-reported questionnaires) using a think-aloud protocol and cognitive and retrospective interviews. In the FACIT methodology, adaptations can be made from the beginning of the translation and cultural adaptation process, ensuring semantic, conceptual, cultural, and operational equivalence of the Physical Function domain. During the pre-test, 24% of the subjects had difficulties understanding the items and 22% suggested changes to improve understanding. The terms and concepts of the items were fully understood (100%) in 87% of the items. Only four items had less than 80% understanding; for this reason, it was necessary to change them so that they corresponded to the original items and were understood by the subjects after retesting. The process of translation and cultural adaptation of the PROMIS® Physical Function item bank into Portuguese was successful. This version of the assessment tool must have its psychometric properties validated before being made available for clinical use.
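Item Response Theory and Computerized Adaptive Testing, the methods PROMIS® builds on, fit together as follows: each item is described by a response model, and the adaptive test repeatedly administers the most informative remaining item. A minimal sketch with a two-parameter logistic (2PL) model; the item parameters below are invented, not PROMIS calibrations.

```python
import math

def p_endorse(theta, a, b):
    """2PL IRT: probability of endorsing an item with discrimination a and
    difficulty b for a respondent at latent trait level theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item: I(theta) = a^2 * P * (1 - P)."""
    p = p_endorse(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, item_bank, administered):
    """CAT step: pick the unadministered item with maximum information
    at the current trait estimate theta."""
    candidates = [i for i in range(len(item_bank)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, *item_bank[i]))

# Invented mini item bank: (discrimination, difficulty) pairs.
bank = [(1.2, -1.0), (2.0, 0.0), (0.8, 1.5)]
first = next_item(0.0, bank, administered=set())   # most informative at theta = 0
```

At theta = 0 the highly discriminating item with difficulty 0 carries the most information, so it is administered first; after each response the trait estimate is updated and the selection repeats.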
Abstract:
As the world population grows past seven billion people and global challenges persist, including resource availability, biodiversity loss, climate change and human well-being, a new science is required that can address the integrated nature of these challenges and the multiple scales on which they are manifest. Sustainability science has emerged to fill this role. In the fifteen years since it was first called for in the pages of Science, it has rapidly matured; however, its place in the history of science and the way it is practiced today must be continually evaluated. In Part I, two chapters address this theoretical and practical grounding. Part II transitions to the applied practice of sustainability science in addressing the urban heat island (UHI) challenge, wherein urban areas are warmer than their surrounding rural environs. The UHI has become increasingly important within the study of earth sciences given the increased focus on climate change and as most humans now live in urban areas.
In Chapter 2, a novel contribution to the historical context of sustainability is argued. Sustainability as a concept characterizing the relationship between humans and nature emerged in the mid-to-late 20th century in response to the same findings used to characterize the Anthropocene. Evidence is provided that sustainability, emerging from the human-nature relationships that came before it, was enabled by technology and a reorientation of world-view, and is unique in its global boundary, systematic approach and ambition for both well-being and the continued availability of resources and Earth system function. Sustainability is, further, an ambition with wide appeal, making it one of the first normative concepts of the Anthropocene.
Despite its widespread emergence and adoption, sustainability science continues to suffer from definitional ambiguity within academia. In Chapter 3, a review of efforts to provide direction and structure to the science reveals a continuum of approaches anchored at either end by differing visions of how the science interfaces with practice (solutions). At one end, basic science of societally defined problems informs decisions about possible solutions and their application. At the other end, applied research directly affects the options available to decision makers. While this dichotomy is clear in the literature, survey data suggest that it is less apparent in the minds of practitioners.
In Chapter 4, the UHI is first addressed at the synoptic, mesoscale level. Urban climate is the most immediate manifestation of the warming global climate for the majority of people on earth. Nearly half of those people live in small to medium-sized cities, an understudied scale in urban climate research. Widespread characterization would be useful to decision makers in planning and design. Using a multi-method approach, the mesoscale UHI in the study region is characterized and the secular trend over the last sixty years is evaluated. Under isolated ideal conditions, the findings indicate a UHI of 5.3 ± 0.97 °C to be present in the study area, the magnitude of which is growing over time.
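The intensity reported above is, at bottom, a paired urban-rural temperature difference with its spread. A minimal sketch of that computation with hypothetical nighttime readings (not the study's data):

```python
# Hypothetical hourly nighttime temperatures in deg C (illustrative only).
t_urban = [24.8, 24.1, 23.6, 23.2, 22.9, 22.7]
t_rural = [19.6, 18.9, 18.2, 17.8, 17.6, 17.4]

def uhi_intensity(urban, rural):
    """Canopy-layer UHI intensity: mean urban-rural difference over paired
    observations, with the sample standard deviation as the spread."""
    diffs = [u - r for u, r in zip(urban, rural)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean, var ** 0.5

mean_uhi, sd_uhi = uhi_intensity(t_urban, t_rural)   # e.g. ~5.3 C mean
```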
Although urban heat islands (UHI) are well studied, there remain no panaceas for local scale mitigation and adaptation methods, therefore continued attention to characterization of the phenomenon in urban centers of different scales around the globe is required. In Chapter 5, a local scale analysis of the canopy layer and surface UHI in a medium sized city in North Carolina, USA is conducted using multiple methods including stationary urban sensors, mobile transects and remote sensing. Focusing on the ideal conditions for UHI development during an anticyclonic summer heat event, the study observes a range of UHI intensity depending on the method of observation: 8.7 °C from the stationary urban sensors; 6.9 °C from mobile transects; and, 2.2 °C from remote sensing. Additional attention is paid to the diurnal dynamics of the UHI and its correlation with vegetation indices, dewpoint and albedo. Evapotranspiration is shown to drive dynamics in the study region.
Finally, recognizing that a bridge must be established between the physical science community studying the urban heat island (UHI) effect and the planning community and decision makers implementing urban form and development policies, Chapter 6 evaluates multiple urban form characterization methods. Methods evaluated include local climate zones (LCZ), National Land Cover Database (NLCD) classes and urban cluster analysis (UCA), to determine their utility in describing the distribution of the UHI based on three standard observation types: 1) fixed urban temperature sensors, 2) mobile transects and 3) remote sensing. Bivariate, regression and ANOVA tests are used to conduct the analyses. Findings indicate that the NLCD classes are best correlated to the UHI intensity and distribution in the study area. Further, while the UCA method is not useful directly, the variables included in the method are predictive based on regression analysis, so the potential for better model design exists. Land cover variables including albedo, impervious surface fraction and pervious surface fraction are found to dominate the distribution of the UHI in the study area regardless of observation method.
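The bivariate and regression tests mentioned above reduce, in the simplest case, to regressing observed UHI intensity on a land-cover variable. A closed-form OLS sketch using invented site data, with impervious surface fraction as the predictor:

```python
def ols_fit(x, y):
    """Ordinary least squares fit of y = b0 + b1*x (closed form)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b1 = sxy / sxx
    return my - b1 * mx, b1   # intercept, slope

# Hypothetical sensor sites: impervious fraction vs observed UHI (deg C).
imperv = [0.1, 0.3, 0.5, 0.7, 0.9]
uhi = [1.0, 2.1, 3.0, 3.9, 5.0]
b0, b1 = ols_fit(imperv, uhi)   # slope: deg C of UHI per unit impervious fraction
```

A positive slope here would mirror the abstract's finding that impervious surface fraction helps drive the UHI distribution; the full analysis would add the other land-cover predictors and the ANOVA comparisons across classification schemes.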
Chapter 7 provides a summary of findings and offers a brief analysis of their implications both for the scientific discourse generally and for the study area specifically. In general, the work undertaken does not achieve the full ambition of sustainability science; additional work is required to translate findings to practice and to more fully evaluate adoption. The implications for planning and development in the local region are addressed in the context of a major light-rail infrastructure project, including several systems-level considerations such as human health and development. Finally, several avenues for future work are outlined. Within the theoretical development of sustainability science, these pathways include more robust evaluations of theoretical and actual practice. Within the UHI context, they include development of an integrated urban form characterization model, application of the study methodology in other geographic areas and at different scales, and use of novel experimental methods including distributed sensor networks and citizen science.
Abstract:
High-throughput next-generation sequencing, together with advanced molecular methods, has considerably enhanced the field of food microbiology. By overcoming biases associated with culture-dependent approaches, it has become possible to achieve novel insights into the nature of food-borne microbial communities. In this thesis, several different sequencing-based approaches were applied with a view to better understanding microbe-associated quality defects in cheese. Initially, a literature review provides an overview of microbe-associated cheese quality defects as well as molecular methods for profiling complex microbial communities. Following this, 16S rRNA sequencing revealed temporal and spatial differences in microbial composition due to the time during the production day at which specific commercial cheeses were manufactured. A novel Ion PGM sequencing approach, focusing on decarboxylase genes rather than 16S rRNA genes, was then successfully employed to profile the biogenic-amine-producing cohort of a series of artisanal cheeses. Investigations into the phenomenon of cheese pinking formed the basis of a joint 16S rRNA and whole-genome shotgun sequencing approach, leading to the identification of Thermus species and, more specifically, the pathway involved in production of lycopene, a red-coloured carotenoid. Finally, using a more traditional approach, the effect of adding a facultatively heterofermentative Lactobacillus (Lactobacillus casei) to a Swiss-type cheese in which starter activity was compromised was investigated from the perspective of its ability to promote gas defects and irregular eye formation. X-ray computed tomography was used to visualise, non-destructively, the consequences of the undesirable gas formation that resulted.
Ultimately, this thesis has demonstrated that the application of molecular techniques such as next-generation sequencing can provide detailed insight into the defect-causing microbial populations present, and may thereby underpin approaches to optimise the quality and consistency of a wide variety of cheeses.
Abstract:
The use of advanced brain imaging methods has revealed short- and long-term alterations following a concussion. More specifically, alterations affecting white-matter integrity and cellular metabolism have recently been revealed using diffusion tensor imaging (DTI) and magnetic resonance spectroscopy (MRS), respectively. These brain alterations were observed in male athletes a few days after the head injury and remained detectable when the athletes were evaluated again six months post-concussion. In contrast, no study has evaluated the neurometabolic and microstructural effects in the acute and chronic phases of concussion in female athletes, despite their increased susceptibility to this type of injury, their higher number of post-concussion symptoms, and their longer recovery time. The studies in this thesis therefore aim to establish the profile of microstructural and neurometabolic alterations in female athletes using DTI and MRS. The first study evaluated neurometabolic changes in the corpus callosum of male and female hockey players over a university season. Athletes who sustained a concussion during the season were evaluated 72 hours, 2 weeks and 2 months after the head injury, in addition to the pre- and post-season evaluations. The results show no differences between concussed and non-concussed athletes.
Moreover, no difference between pre- and post-season data was observed in male athletes, whereas a decrease in N-acetylaspartate (NAA) was found in female athletes, suggesting an impact of sub-clinical head impacts. The second study, using DTI and MRS, revealed alterations in asymptomatic concussed female athletes an average of 18 months post-concussion. More specifically, MRS revealed a decrease in myo-inositol (mI) in the hippocampus and primary motor cortex (M1), while DTI showed increased mean diffusivity (MD) in several white-matter tracts. In addition, a region-of-interest approach revealed decreased fractional anisotropy (FA) in the portion of the corpus callosum projecting to the primary motor area. The third study used MRS to evaluate concussed athletes in the days following the head injury (7-10 days) and again six months post-concussion. In the acute phase, neuropsychological alterations combined with a significantly higher number of post-concussion and depressive symptoms were found in concussed female athletes, resolving in the chronic phase. However, no neurometabolic difference was found between the two groups in the acute phase. In the chronic phase, concussed athletes showed neurometabolic alterations in the dorsolateral prefrontal cortex (DLPFC) and M1, marked by an increase in glutamate/glutamine (Glx). In addition, a decrease in NAA between the two time points was present in control athletes. Finally, the fourth study documented microstructural alterations in the corticospinal tract and corpus callosum six months after a concussion.
The analyses showed no difference in the corticospinal tract, whereas differences were found when the corpus callosum was segmented according to the projections of its callosal fibers. Concussed athletes showed decreased MD and radial diffusivity (RD) in the region projecting to the prefrontal cortex, a smaller white-matter fiber volume in the region projecting to the premotor and supplementary motor areas, and decreased axial diffusivity (AD) in the region projecting to the parietal and temporal areas. In sum, the studies in this thesis deepen our knowledge of the metabolic and microstructural effects of concussion and demonstrate persistent deleterious effects in female athletes. These data are consistent with the scientific literature suggesting that concussions do not produce only temporary symptoms.
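The diffusion metrics reported in this abstract (MD, AD, RD and FA) are standard scalar summaries of the diffusion tensor's eigenvalues. A sketch of those definitions, with illustrative eigenvalues rather than any of the study's measurements:

```python
import math

def dti_metrics(l1, l2, l3):
    """Standard DTI scalars from the diffusion tensor eigenvalues
    (l1 >= l2 >= l3, in mm^2/s): mean diffusivity (MD), axial diffusivity
    (AD), radial diffusivity (RD) and fractional anisotropy (FA)."""
    md = (l1 + l2 + l3) / 3.0
    ad = l1
    rd = (l2 + l3) / 2.0
    fa = math.sqrt(0.5 * ((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2)
                   / (l1 * l1 + l2 * l2 + l3 * l3))
    return md, ad, rd, fa

# Illustrative eigenvalues for a highly anisotropic white-matter voxel:
md, ad, rd, fa = dti_metrics(1.7e-3, 0.3e-3, 0.3e-3)
```

FA runs from 0 (isotropic diffusion, all eigenvalues equal) to 1 (diffusion along a single axis), which is why decreased FA in a callosal region is read as reduced white-matter integrity.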
Abstract:
The analysis of steel and composite frames has traditionally been carried out by idealizing beam-to-column connections as either rigid or pinned. Although some advanced analysis methods have been proposed to account for semi-rigid connections, the performance of these methods strongly depends on the proper modeling of connection behavior. The primary challenge of modeling beam-to-column connections is their inelastic response and continuously varying stiffness, strength, and ductility. In this dissertation, two distinct approaches—mathematical models and informational models—are proposed to account for the complex hysteretic behavior of beam-to-column connections. The performance of the two approaches is examined and is then followed by a discussion of their merits and deficiencies. To capitalize on the merits of both mathematical and informational representations, a new approach, a hybrid modeling framework, is developed and demonstrated through modeling beam-to-column connections. Component-based modeling is a compromise spanning two extremes in the field of mathematical modeling: simplified global models and finite element models. In the component-based modeling of angle connections, the five critical components of excessive deformation are identified. Constitutive relationships of angles, column panel zones, and contact between angles and column flanges, are derived by using only material and geometric properties and theoretical mechanics considerations. Those of slip and bolt hole ovalization are simplified by empirically-suggested mathematical representation and expert opinions. A mathematical model is then assembled as a macro-element by combining rigid bars and springs that represent the constitutive relationship of components. Lastly, the moment-rotation curves of the mathematical models are compared with those of experimental tests. 
In the case of a top-and-seat angle connection with double web angles, the pinched hysteretic response is predicted quite well by complete mechanical models, which rely only on material and geometric properties. On the other hand, to capture the highly pinched behavior of a top-and-seat angle connection without web angles, a mathematical model requires the slip and bolt-hole ovalization components, which are more amenable to informational modeling. An alternative method is informational modeling, which constitutes a fundamental shift from mathematical equations to data that contain the required information about the underlying mechanics. The information is extracted from observed data and stored in neural networks. Two training data sets, one analytically generated and one experimental, are tested to examine the performance of informational models. Both informational models show acceptable agreement with the moment-rotation curves of the experiments. Adding a degradation parameter improves the informational models when modeling highly pinched hysteretic behavior. However, informational models cannot represent the contribution of individual components and therefore do not provide insight into the underlying mechanics. In this study, a new hybrid modeling framework is proposed, in which a conventional mathematical model is complemented by informational methods. The basic premise of the proposed hybrid methodology is that not all features of system response are amenable to mathematical modeling, which motivates informational alternatives. This may be because (i) the underlying theory is not available or not sufficiently developed, or (ii) the existing theory is too complex to be suitable for modeling within building frame analysis. The role of informational methods is to model the aspects that the mathematical model leaves out.
The autoprogressive algorithm and self-learning simulation extract these missing aspects from the system response. In the hybrid framework, experimental data are an integral part of modeling rather than being used strictly for validation. The potential of the hybrid methodology is illustrated by modeling the complex hysteretic behavior of beam-to-column connections. Mechanics-based components of deformation, such as angles, flange plates, and the column panel zone, are idealized into a mathematical model using a complete mechanical approach. Although the mathematical model represents the envelope curves in terms of initial stiffness and yield strength, it cannot capture the pinching effects. Pinching is caused mainly by separation between angles and column flanges, as well as by slip between angles/flange plates and beam flanges; these components of deformation are suitable for informational modeling. Finally, the moment-rotation curves of the hybrid models are validated against those of the experimental tests. The comparison shows that the hybrid models are capable of representing the highly pinched hysteretic behavior of beam-to-column connections. In addition, the developed hybrid model is successfully used to predict the behavior of a newly designed connection.
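The division of labor in such a hybrid framework can be illustrated schematically: a mechanics-based backbone captures the envelope, while a data-driven term fitted to the residual stands in for the behavior the mathematics leaves out. The sketch below substitutes a simple polynomial surrogate for the neural networks used in the dissertation, and the backbone parameters and synthetic "measured" data are illustrative assumptions only:

```python
import numpy as np

def backbone(theta, k0=3.0e4, My=120.0):
    """Mechanics-based bilinear envelope: elastic up to the yield moment My,
    then 5% hardening (illustrative parameters, not from the thesis)."""
    theta = np.asarray(theta, dtype=float)
    elastic = k0 * np.abs(theta)
    hardened = My + 0.05 * k0 * (np.abs(theta) - My / k0)
    return np.sign(theta) * np.minimum(elastic, hardened)

# Synthetic "measured" response with a pinching-like deviation the backbone misses
theta = np.linspace(-0.02, 0.02, 201)
measured = backbone(theta) - 30.0 * np.tanh(200.0 * theta)

# "Informational" part: fit a correction to the residual between backbone and data
# (a polynomial surrogate standing in for the neural-network component)
s = theta / 0.02                       # rescale to [-1, 1] for a well-conditioned fit
coeffs = np.polyfit(s, measured - backbone(theta), 7)

def hybrid(th):
    th = np.asarray(th, dtype=float)
    return backbone(th) + np.polyval(coeffs, th / 0.02)

err = float(np.max(np.abs(hybrid(theta) - measured)))
print(f"max hybrid error over the sweep: {err:.2f} kN*m")
```

The point of the sketch is structural: the data-driven term is trained only on what the mechanical model cannot explain, so the mechanics-based part retains its physical interpretation.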
Resumo:
The recently discovered abilities to synthesize single-walled carbon nanotubes and prepare single-layer graphene have spurred interest in these sp2-bonded carbon nanostructures. In particular, studies of their potential use in electronic devices are numerous, as silicon integrated circuits are encountering processing limitations, quantum effects, and thermal management issues due to rapid device scaling. Nanotube and graphene implementation in devices comes with significant hurdles of its own. Among these issues are the ability to dope these materials and an understanding of how defects influence their expected properties. Because these nanostructures are entirely surface, with every atom exposed to ambient, the introduction of defects and doping by chemical means is expected to be an effective route for addressing these issues. Raman spectroscopy is a proven characterization method for understanding the vibrational and even electronic structure of graphene, nanotubes, and graphite, especially when combined with electrical measurements, owing to the wealth of information contained in each spectrum. In Chapter 1, a discussion of the electronic structure of graphene is presented. This lays the foundation for all sp2-bonded carbon electronic properties, extends readily to carbon nanotubes, and makes clear why these materials are of interest. Chapter 2 presents synthesis and preparation methods for both nanotubes and graphene, discusses fabrication techniques for making devices, and describes characterization methods such as electrical measurements as well as static and time-resolved Raman spectroscopy. Chapter 3 outlines changes in the Raman spectra of individual metallic single-walled carbon nanotubes (SWNTs) upon sidewall covalent bond formation.
It is observed that the initial degree of disorder has a strong influence on covalent sidewall functionalization, which has implications for developing electronically selective covalent chemistries and for assessing their selectivity in separating metallic and semiconducting SWNTs. Chapter 4 describes how the optical phonon population extinction lifetime is affected by covalent functionalization and doping, and includes discussion of static Raman linewidths. Increasing the defect concentration is shown to decrease the G-band phonon population lifetime and increase the G-band linewidth. Doping only increases the G-band linewidth, leaving the non-equilibrium population decay rate unaffected. Phonon-mediated electron scattering is especially strong in nanotubes, making optical phonon decay of interest for device applications; it also has implications for device thermal management. Chapter 5 treats doping of graphene, showing that ambient air can lead to inadvertent Fermi-level shifts, which exemplifies the sensitivity of sp2-bonded carbon nanostructures to chemical doping through sidewall adsorption. Removal of this doping allows an investigation of the temperature dependence of electron-phonon coupling, also of interest for devices operating above room temperature. Finally, in Chapter 6, utilizing the information obtained in the previous chapters, single-carbon-nanotube diodes are fabricated and characterized. Electrical measurements show these diodes are nearly ideal, and the photovoltaic response of a single nanotube device yields a short-circuit current of 1.4 nA and an open-circuit voltage of 205 mV. A summary and discussion of future directions in Chapter 7 concludes my work.
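Linewidth analyses of the kind described above typically proceed by fitting a Lorentzian lineshape to the G-band and reporting its full width at half maximum (FWHM). A minimal sketch on synthetic data (the peak position, width, and noise level below are illustrative, not measured values):

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(w, A, w0, gamma, c):
    """Lorentzian lineshape; gamma is the full width at half maximum."""
    return A * (gamma / 2) ** 2 / ((w - w0) ** 2 + (gamma / 2) ** 2) + c

# Synthetic G-band spectrum: center ~1590 cm^-1, FWHM 12 cm^-1 (illustrative)
w = np.linspace(1500.0, 1700.0, 400)
rng = np.random.default_rng(0)
spectrum = lorentzian(w, 1000.0, 1590.0, 12.0, 50.0) + rng.normal(0.0, 5.0, w.size)

popt, _ = curve_fit(lorentzian, w, spectrum, p0=[800.0, 1585.0, 10.0, 0.0])
A_fit, w0_fit, gamma_fit, c_fit = popt
print(f"G-band position {w0_fit:.1f} cm^-1, linewidth (FWHM) {abs(gamma_fit):.1f} cm^-1")
```

Shifts in the fitted FWHM with defect concentration or doping are then interpreted as changes in the phonon decay channels, as discussed in the abstract.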
Resumo:
Partially encased columns have significant fire resistance. However, it is not possible to assess the fire resistance of such members simply by considering the temperature of the steel: the presence of concrete increases the mass and thermal inertia of the member and the variation of temperature within the cross-section, in both the steel and concrete components. Annex G of EN 1994-1-2 allows the load-carrying capacity of partially encased columns to be calculated for a specific fire rating using the balanced summation method. New formulas are used to calculate the plastic resistance to axial compression and the effective flexural stiffness; these two parameters are then used to calculate the buckling resistance. The finite element method is used to compare the results for the elastic critical load at fire ratings of 30 and 60 minutes. The buckling resistance is also calculated with the finite element method, using an incremental-iterative procedure, and is compared with the simple calculation method in order to identify the design buckling curve that best fits the results.
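The simple calculation method referred to above combines the plastic resistance and the effective flexural stiffness through a relative slenderness and a buckling reduction curve. A minimal sketch in the style of the EN 1993-1-1 reduction curves (the imperfection factor and the numerical input values are illustrative assumptions, not results from the paper):

```python
import math

def buckling_resistance(N_pl, EI_eff, L_cr, alpha=0.49):
    """Buckling resistance from a plastic resistance N_pl [kN], an effective
    flexural stiffness EI_eff [kN*m^2], and a buckling length L_cr [m],
    using an EN 1993-1-1 style reduction curve. alpha = 0.49 corresponds
    to buckling curve 'c' (chosen here for illustration)."""
    N_cr = math.pi**2 * EI_eff / L_cr**2        # elastic critical (Euler) load
    lam = math.sqrt(N_pl / N_cr)                # relative slenderness
    phi = 0.5 * (1.0 + alpha * (lam - 0.2) + lam**2)
    chi = min(1.0, 1.0 / (phi + math.sqrt(phi**2 - lam**2)))  # reduction factor
    return chi * N_pl

# Hypothetical fire-situation values: N_pl = 2000 kN, EI_eff = 8000 kN*m^2, L = 3 m
print(f"buckling resistance ~ {buckling_resistance(2000.0, 8000.0, 3.0):.0f} kN")
```

Comparing finite element results against this formula for different imperfection factors is one way to evaluate which design buckling curve best fits the data, as the abstract describes.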