862 results for beam-to-column connections


Relevance:

100.00%

Publisher:

Abstract:

The main objective of this thesis is the development of optimization methods for the radiation pattern synthesis of antenna arrays in which a rigorous electromagnetic characterization of the radiators and of the mutual coupling between them is performed. This characterization is overlooked in most of the synthesis methods available in the literature, mainly for two reasons. On the one hand, it is argued that the radiation pattern of an array is dominated by the array factor and that the mutual coupling plays a minor role. As shown in this thesis, the mutual coupling and the rigorous characterization of the array significantly influence the array performance, and accounting for them leads to noticeably different results. On the other hand, it is difficult to introduce an analysis procedure into a synthesis technique. The analysis of antenna arrays is generally computationally expensive, as the structure to be analyzed is large in terms of wavelengths.
A synthesis method requires a large number of analyses, which makes the synthesis problem computationally very expensive, or intractable in some cases. Two methods have been used in this thesis for the analysis of coupled antenna arrays, both of them developed in the research group in which this thesis was carried out. They are based on the finite element method (FEM), domain decomposition and modal analysis. The first obtains a finite-array characterization from the results of the infinite-array approach. It is especially suited to the analysis of large arrays with equispaced elements. The second characterizes the array elements and the mutual coupling between them through a spherical wave expansion of the field radiated by each element. The mutual coupling is computed using the translation and rotation properties of spherical waves. This method is able to analyze arrays whose elements are placed in an arbitrary distribution. Both techniques provide a matrix formulation that makes them very suitable for integration into synthesis techniques, so the results obtained from these synthesis methods are very accurate. Array synthesis consists of modifying one or several array parameters in pursuit of desired specifications of the radiation pattern. The array parameters used as optimization variables are usually the excitation weights applied to the array elements, but other array characteristics can be used as well, such as the positions or rotations of the elements. The desired specifications may be to steer the beam or beams towards specific directions or to generate shaped beams with arbitrary geometry. Further characteristics can be handled as well, such as minimizing the side lobe level in other radiating regions, minimizing the ripple of the shaped beam, controlling the cross-polar component, or imposing nulls on the radiation pattern to avoid possible interference from specific directions.
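The array-factor approximation discussed above, which accounts only for the element positions and excitations, can be sketched in a few lines of Python. This is a generic illustration of the concept (a uniform linear array with hypothetical weights), not code from the thesis:

```python
import cmath
import math

def array_factor(weights, d_over_lambda, theta_deg):
    """|AF(theta)| of a uniform linear array along the z-axis:
    AF = sum_n w_n * exp(j * 2*pi * (d/lambda) * n * sin(theta))."""
    theta = math.radians(theta_deg)
    psi = 2 * math.pi * d_over_lambda * math.sin(theta)
    return abs(sum(w * cmath.exp(1j * n * psi) for n, w in enumerate(weights)))

# 8-element, half-wavelength-spaced array with uniform excitation:
weights = [1.0] * 8
peak = array_factor(weights, 0.5, 0.0)      # broadside peak equals the sum of weights, 8
null = array_factor(weights, 0.5, 14.4775)  # first null near asin(lambda / (N * d))
```

Changing the weights (e.g. a taper) reshapes the side lobes, which is exactly the degree of freedom the excitation-based synthesis methods exploit; what this sketch omits is the element pattern and the mutual coupling that the thesis argues cannot be neglected.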
The analysis method based on the infinite array approach considers an infinite array with a finite number of excited elements. The infinite non-excited elements are physically present and may have three different terminations: short-circuited, open-circuited and match-terminated. Each of these terminations better approximates a different real environment of the array. This method is used in this thesis for the development of two synthesis methods. In the first one, a multi-objective radiation pattern synthesis is presented, in which it is possible to steer the beam or beams in desired directions, minimizing the side lobe level and with the possibility of imposing nulls on the radiation pattern. This method is very efficient and obtains optimal solutions, as it is based on convex programming. The same analysis method is used in a shaped-beam technique in which an originally non-convex problem is transformed into a convex one by applying symmetry restrictions, thus solving a complex problem in an efficient way. This method allows the synthesis of shaped-beam radiation patterns, controlling the ripple in the main lobe and the side lobe level. The analysis method based on the spherical wave expansion is applied to different synthesis techniques for the radiation pattern of coupled arrays. A shaped-beam synthesis is presented, in which a convex formulation is proposed based on the phase retrieval method. In this technique, an originally non-convex problem is solved using a relaxation and by solving convex problems iteratively. Two methods are proposed based on the gradient method. A cost function is defined involving the radiation intensity of the coupled array and a weighting function that provides more degrees of freedom to the designer. The gradient of the cost function is computed with respect to the element positions in one of them and to the element rotations in the other. The elements are moved or rotated iteratively following the results of the gradient.
A highly non-convex problem is solved very efficiently, obtaining very good results that depend on the starting point. Finally, an optimization method is presented in which discrete digital phases are synthesized, providing a radiation pattern as close as possible to the desired one. The problem is solved using linear integer programming procedures, obtaining array designs that greatly reduce fabrication costs. Results are provided for every method, showing the capabilities that the above-mentioned methods offer. The results obtained are compared with available methods in the literature. The importance of introducing a rigorous analysis into the synthesis method is emphasized, and the results obtained are compared with commercial software, showing good agreement.
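For contrast with the integer-programming synthesis of discrete phases described above, the naive baseline it improves upon, rounding each continuous phase to the nearest state of a digital phase shifter, can be sketched as follows (the bit count and angle are illustrative, not from the thesis):

```python
import math

def quantize_phase(phase_rad, bits):
    """Round a continuous phase to the nearest level of a digital
    phase shifter with 2**bits equally spaced states."""
    states = 2 ** bits
    step = 2 * math.pi / states
    level = round((phase_rad % (2 * math.pi)) / step) % states
    return level, level * step

# A 3-bit shifter has 45-degree steps, so 100 degrees rounds to 90 degrees:
level, q = quantize_phase(math.radians(100.0), 3)
```

Independent rounding of each element ignores how the quantization errors combine across the aperture; the integer-programming formulation instead chooses all the discrete levels jointly against a pattern objective.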

Relevance:

100.00%

Publisher:

Abstract:

The wavelet transform and the Lipschitz exponent perform well in detecting signal singularities. With the bridge crack damage modelled as rotational springs based on fracture mechanics, the deflection time history of the beam under a moving load is determined with a numerical method. The continuous wavelet transform (CWT) is applied to the deflection of the beam to identify the location of the damage, and the Lipschitz exponent is used to evaluate the damage degree. The influences of different damage degrees, multiple damage, different sensor locations, load velocity and load magnitude are studied. In addition, the feasibility of the method is verified by a model experiment.
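The singularity-detection idea behind this approach can be sketched with a toy example: a Ricker-wavelet CWT of a piecewise-linear "deflection" signal peaks at the slope discontinuity. The signal and crack location below are synthetic, not data from the paper:

```python
import math

def mexican_hat(t):
    """Mexican-hat (Ricker) wavelet, proportional to the second derivative
    of a Gaussian; its CWT responds to slope discontinuities."""
    return (1.0 - t * t) * math.exp(-t * t / 2.0)

def cwt_at_scale(signal, scale):
    """Continuous wavelet transform at a single scale by direct convolution."""
    n = len(signal)
    half = 4 * scale  # the wavelet is negligible outside [-4*scale, 4*scale]
    coeffs = []
    for b in range(n):
        acc = 0.0
        for k in range(-half, half + 1):
            if 0 <= b + k < n:
                acc += signal[b + k] * mexican_hat(k / scale)
        coeffs.append(acc / scale)
    return coeffs

# Synthetic deflection: piecewise linear with a slope change (the singularity
# introduced by the rotational spring) at sample 60.
signal = [0.01 * i if i < 60 else 0.6 + 0.03 * (i - 60) for i in range(120)]
coeffs = cwt_at_scale(signal, 4)
# The modulus maximum away from the boundaries locates the damage:
damage = max(range(20, 100), key=lambda i: abs(coeffs[i]))
```

In the paper's scheme the damage *degree* is then read from how the modulus maxima decay across scales (the Lipschitz exponent), which this single-scale sketch does not attempt.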

Relevance:

100.00%

Publisher:

Abstract:

Brasília stands out as an interesting case to be studied. It differs from all other cities in Brazil (and perhaps in the world), yet it still reflects nearly all of the complex issues affecting any large city. The analysis of its heritage-listing and territorial-heritage management processes reveals historical richness and singularity, as well as contradictions and great complexity. This theoretical-exploratory essay sets out to analyse the Preservation Plan for the Urban Ensemble of Brasília (Plano de Preservação do Conjunto Urbanístico de Brasília, PPCUB), a draft Complementary Law proposed as a public policy for the protection of urban heritage, in search of relations and connections between the socio-political context of the construction of the ideal city, its actual development, and its consecration as a World Heritage Site. Only through a synergy of interests and the joint action of the various actors/agents involved in urban dynamics will it be possible to transform exceptionality and universal value into territorial accessibility and local value, two pillars of development.

Relevance:

100.00%

Publisher:

Abstract:

Thesis (Master's)--University of Washington, 2016-06

Relevance:

100.00%

Publisher:

Abstract:

We review progress at the Australian Centre for Quantum Computer Technology towards the fabrication and demonstration of spin qubits and charge qubits based on phosphorus donor atoms embedded in intrinsic silicon. Fabrication is being pursued via two complementary pathways: a 'top-down' approach for near-term production of few-qubit demonstration devices and a 'bottom-up' approach for large-scale qubit arrays with sub-nanometre precision. The 'top-down' approach employs a low-energy (keV) ion beam to implant the phosphorus atoms. Single-atom control during implantation is achieved by monitoring on-chip detector electrodes, integrated within the device structure. In contrast, the 'bottom-up' approach uses scanning tunnelling microscope lithography and epitaxial silicon overgrowth to construct devices at an atomic scale. In both cases, surface electrodes control the qubit using voltage pulses, and dual single-electron transistors operating near the quantum limit provide fast read-out with spurious-signal rejection.

Relevance:

100.00%

Publisher:

Abstract:

The human brain assembles an incredible network of over a billion neurons. Understanding how these connections form during development in order for the brain to function properly is a fundamental question in biology. Much of this wiring takes place during embryonic development. Neurons are generated in the ventricular zone, migrate out, and begin to differentiate. However, neurons are often born in locations some distance from the target cells with which they will ultimately form connections. To form connections, neurons project long axons tipped with a specialized sensing device called a growth cone. The growing axons interact directly with molecules within the environment through which they grow. In order to find their targets, axonal growth cones use guidance molecules that can either attract or repel them. Understanding what these guidance cues are, where they are expressed, and how the growth cone is able to transduce their signal in a directionally specific manner is essential to understanding how the functional brain is constructed. In this chapter, we review what is known about the mechanisms involved in axonal guidance. We discuss how the growth cone is able to sense and respond to its environment and how it is guided by pioneering cells and axons. As examples, we discuss current models for the development of the spinal cord, the cerebral cortex, and the visual and olfactory systems. (c) 2005, Elsevier Inc.

Relevance:

100.00%

Publisher:

Abstract:

Ion implantation of normally insulating polymers offers an alternative to depositing conjugated organics onto plastic films to make electronic circuits. We used a 50 keV nitrogen ion beam to mix a thin 10 nm Sn/Sb alloy film into the subsurface of polyetheretherketone and report the low temperature properties of this material. We observed metallic behavior, and the onset of superconductivity below 3 K. There are strong indications that the superconductivity does not result from a residual thin film of alloy, but instead from a network of alloy grains coupled via a weakly conducting, ion-beam carbonized polymer matrix. (c) 2006 American Institute of Physics.

Relevance:

100.00%

Publisher:

Abstract:

We present a technique to measure the viscosity of microscopic volumes of liquid using rotating optical tweezers. The technique can be used when only microlitre (or smaller) sample volumes are available, for example biological or medical samples, or to make local measurements in complicated micro-structures such as cells. The rotation of the optical tweezers is achieved by using the polarisation of the trapping light to rotate a trapped birefringent spherical crystal, called vaterite. Transfer of angular momentum from a circularly polarised beam to the particle causes the rotation. The transmitted light can then be analysed to determine the torque applied to the particle and its rotation rate. The applied torque is determined from the change in the circular polarisation of the beam caused by the vaterite, and the rotation rate is used to find the viscous drag on the rotating spherical particle. The viscosity of the surrounding liquid can then be determined. Using this technique we measured the viscosity of liquids at room temperature, and the results agree well with tabulated values. We also study the local heating effects due to absorption of the trapping laser beam. We report heating of 50-70 K/W in the region of liquid surrounding the particle.
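The final step, converting torque and rotation rate into viscosity, follows from the rotational Stokes drag law for a sphere in an unbounded fluid, torque = 8·pi·eta·a³·omega. A minimal sketch with illustrative numbers (not values from the paper):

```python
import math

def viscosity_from_rotation(torque_Nm, radius_m, omega_rad_s):
    """Infer viscosity from the rotational Stokes drag law:
    torque = 8 * pi * eta * a**3 * omega  =>  eta = torque / (8*pi*a**3*omega)."""
    return torque_Nm / (8 * math.pi * radius_m ** 3 * omega_rad_s)

# Illustrative case: a 1.5 um vaterite sphere spinning at 10 Hz in a
# water-like liquid (eta ~ 1e-3 Pa s) experiences a drag torque of:
a, omega = 1.5e-6, 2 * math.pi * 10
tau = 8 * math.pi * 1e-3 * a ** 3 * omega
eta = viscosity_from_rotation(tau, a, omega)  # recovers ~1e-3 Pa s
```

In the experiment both inputs come from the transmitted light: the torque from the change in circular polarisation, and omega from the modulation of the transmitted intensity.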

Relevance:

100.00%

Publisher:

Abstract:

The development of models in the Earth sciences, e.g. for earthquake prediction and for the simulation of mantle convection, is far from finalized. Therefore there is a need for a modelling environment that allows scientists to implement and test new models in an easy but flexible way. After being verified, the models should be easy to apply within their scope, typically by setting input parameters through a GUI or web services. It should be possible to link certain parameters to external data sources, such as databases and other simulation codes. Moreover, as large-scale meshes typically have to be used to achieve appropriate resolutions, the computational efficiency of the underlying numerical methods is important. Conceptually this leads to a software system with three major layers: the application layer, the mathematical layer, and the numerical algorithm layer. The latter is implemented as a C/C++ library to solve a basic, computationally intensive linear problem, such as a linear partial differential equation. The mathematical layer allows the model developer to define his model and to implement high-level solution algorithms (e.g. the Newton-Raphson scheme or the Crank-Nicolson scheme), or to choose these algorithms from an algorithm library. The kernels of the model are generic, typically linear, solvers provided through the numerical algorithm layer. Finally, to provide an easy-to-use application environment, a web interface is (semi-automatically) built to edit the XML input file for the modelling code. In the talk, we will discuss the advantages and disadvantages of this concept in more detail. We will also present the modelling environment escript, which is a prototype implementation of such a software system in Python (see www.python.org). Key components of escript are the Data class and the PDE class. 
Objects of the Data class allow generating, holding, accessing, and manipulating data in such a way that the actual representation, whichever is best in the particular context, is transparent to the user. They are also the key to establishing connections with external data sources. PDE class objects describe (linear) partial differential equations to be solved by a numerical library. The current implementation of escript has been linked to the finite element code Finley to solve general linear partial differential equations. We will give a few simple examples which illustrate the usage of escript. Moreover, we show the usage of escript together with Finley for the modelling of interacting fault systems and for the simulation of mantle convection.
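The three-layer split described above can be mimicked in a few lines of pure Python. The classes below only mirror the concept of escript's PDE abstraction over a low-level solver; they are hypothetical names, not the escript API:

```python
# Schematic illustration of the three-layer architecture:
# numerical algorithm layer / mathematical layer / application layer.

def solve_tridiagonal(a, b, c, d):
    """Numerical algorithm layer: Thomas algorithm for a tridiagonal system
    with sub-diagonal a, diagonal b, super-diagonal c, right-hand side d."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

class PoissonPDE:
    """Mathematical layer: -u'' = f on (0, 1) with u(0) = u(1) = 0,
    discretized by second-order finite differences on a uniform grid."""
    def __init__(self, n):
        self.n, self.h = n, 1.0 / (n + 1)
    def solve(self, f):
        h2 = self.h * self.h
        n = self.n
        return solve_tridiagonal([-1.0] * n, [2.0] * n, [-1.0] * n, [f * h2] * n)

# Application layer: set the model parameter and solve.
u = PoissonPDE(99).solve(f=1.0)
u_mid = u[49]  # exact solution x*(1-x)/2 gives 0.125 at x = 0.5
```

The point of the split is that the application code never touches the Thomas algorithm, just as an escript model never touches the underlying Finley solver directly.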

Relevance:

100.00%

Publisher:

Abstract:

A new surface analysis technique has been developed which has a number of benefits compared to conventional Low Energy Ion Scattering Spectrometry (LEISS). A major potential advantage arising from the absence of charge exchange complications is the possibility of quantification. The instrumentation that has been developed also offers the possibility of unique studies concerning the interaction between low energy ions and atoms and solid surfaces. From these studies it may also be possible, in principle, to generate sensitivity factors to quantify LEISS data. The instrumentation, which is referred to as a Time-of-Flight Fast Atom Scattering Spectrometer, has been developed to investigate these conjectures in practice. The development involved a number of modifications to an existing instrument, allowed samples to be bombarded with a monoenergetic pulsed beam of either atoms or ions, and provided the capability to analyse the spectra of scattered atoms and ions separately. Further to this, a system was designed and constructed to allow the incident, exit and azimuthal angles of the particle beam to be varied independently. The key development was that of a pulsed, mass-filtered atom source, which was developed by a cyclic process of design, modelling and experimentation. Although it was possible to demonstrate the unique capabilities of the instrument, problems relating to surface contamination prevented the measurement of the neutralisation probabilities. However, these problems appear to be technical rather than scientific in nature, and could be readily resolved given the appropriate resources. Experimental spectra obtained from a number of samples demonstrate some fundamental differences between the scattered ion and neutral spectra. For practical non-ordered surfaces the ToF spectra are more complex than their LEISS counterparts. 
This is particularly true for helium scattering, where it appears, in the absence of detailed computer simulation, that quantitative analysis is limited to ordered surfaces. Despite this limitation the ToFFASS instrument opens the way for quantitative analysis of the 'true' surface region for a wider range of surface materials.
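Both LEISS and the ToF spectra above rest on single-binary-collision kinematics: elastic scattering fixes the fraction of its energy a projectile retains as a function of the mass ratio and the laboratory scattering angle. A sketch (the projectile, targets and angle are illustrative):

```python
import math

def kinematic_factor(m_projectile, m_target, theta_deg):
    """Fraction of the incident energy retained by a projectile elastically
    scattered through laboratory angle theta off a heavier target at rest:
    E1/E0 = ((cos(theta) + sqrt(A**2 - sin(theta)**2)) / (1 + A))**2,
    with mass ratio A = m_target / m_projectile >= 1."""
    a = m_target / m_projectile
    th = math.radians(theta_deg)
    return ((math.cos(th) + math.sqrt(a * a - math.sin(th) ** 2)) / (1 + a)) ** 2

# 4He scattered through 135 degrees off Ni and off Au:
k_ni = kinematic_factor(4.0026, 58.693, 135.0)   # ~0.79
k_au = kinematic_factor(4.0026, 196.97, 135.0)   # ~0.93, heavier target returns more energy
```

In a ToF instrument the retained energy is read from the flight time over a fixed path, so each target mass maps to its own arrival-time peak.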

Relevance:

100.00%

Publisher:

Abstract:

Current analytical assay methods for ampicillin sodium and cloxacillin sodium are discussed and compared, High Performance Liquid Chromatography (H.P.L.C.) being chosen as the most accurate, specific and precise. New H.P.L.C. methods are described for the analysis of benzathine cloxacillin; benzathine penicillin V; procaine penicillin injection B.P.; benethamine penicillin injection, fortified B.P.C.; benzathine penicillin injection; benzathine penicillin injection, fortified B.P.C.; benzathine penicillin suspension; ampicillin syrups and penicillin syrups. Mechanical or chemical damage to column packings is often associated with H.P.L.C. analysis. One type, that of channel formation, is investigated. The high linear velocity of solvent and solvent pulsing during the pumping cycle were found to be the cause of this damage. The applicability of nonisothermal kinetic experiments to penicillin V preparations, including formulated paediatric syrups, is evaluated. A new type of nonisothermal analysis, based on slope estimation and using a 64K Random Access Memory (R.A.M.) microcomputer, is described. The name of the program written for this analysis is NONISO. The distribution of active penicillin in granules for reconstitution into ampicillin and penicillin V syrups, and its effect on the stability of the reconstituted products, are investigated. Changing the diluent used to reconstitute the syrups was found to affect the stability of the product. The dissolution and stability of benzathine cloxacillin at pH 2, pH 6 and pH 9 are described, with proposed dissolution mechanisms and kinetic analysis to support these mechanisms. Benzathine and cloxacillin were found to react in solution at pH 9, producing an insoluble amide.
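The nonisothermal kinetic analysis mentioned above rests on integrating a first-order Arrhenius rate law along a temperature programme rather than holding the sample at one temperature. The sketch below uses hypothetical parameters (activation energy, ramp, rate), not the thesis's fitted values:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def remaining_fraction(A, Ea, T0, ramp, t_end, dt=1.0):
    """First-order loss dC/dt = -k(T)*C with Arrhenius k(T) = A*exp(-Ea/(R*T)),
    integrated in Euler steps along a linear ramp T(t) = T0 + ramp*t."""
    c, t = 1.0, 0.0
    while t < t_end:
        k = A * math.exp(-Ea / (R * (T0 + ramp * t)))
        c *= math.exp(-k * dt)  # exact within a step of constant k
        t += dt
    return c

# Hypothetical penicillin-like degradation: Ea = 80 kJ/mol, pre-exponential
# factor chosen so that k(313 K) = 1e-4 s^-1; ramp from 298 K at 0.5 K/min for 2 h.
A = 1e-4 / math.exp(-80000.0 / (R * 313.0))
c_left = remaining_fraction(A, 80000.0, 298.0, 0.5 / 60.0, 7200.0)
```

Fitting the measured concentration-time curve from one such ramp yields both A and Ea in a single experiment, which is the appeal of the nonisothermal approach over a series of isothermal runs.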

Relevance:

100.00%

Publisher:

Abstract:

Networked Learning, e-Learning and Technology Enhanced Learning have each been defined in different ways, as people's understanding about technology in education has developed. Yet each could also be considered as a terminology competing for a contested conceptual space. Theoretically this can be a ‘fertile trans-disciplinary ground for represented disciplines to affect and potentially be re-orientated by others’ (Parchoma and Keefer, 2012), as differing perspectives on terminology and subject disciplines yield new understandings. Yet when used in government policy texts to describe connections between humans, learning and technology, terms tend to become fixed in less fertile positions linguistically. A deceptively spacious policy discourse that suggests people are free to make choices conceals an economically-based assumption that implementing new technologies, in themselves, determines learning. Yet it actually narrows choices open to people as one route is repeatedly in the foreground and humans are not visibly involved in it. An impression that the effective use of technology for endless improvement is inevitable cuts off critical social interactions and new knowledge for multiple understandings of technology in people's lives. This paper explores some findings from a corpus-based Critical Discourse Analysis of UK policy for educational technology during the last 15 years, to help to illuminate the choices made. This is important when through political economy, hierarchical or dominant neoliberal logic promotes a single ‘universal model’ of technology in education, without reference to a wider social context (Rustin, 2013). Discourse matters, because it can ‘mould identities’ (Massey, 2013) in narrow, objective economically-based terms which 'colonise discourses of democracy and student-centredness' (Greener and Perriton, 2005:67). 
This undermines subjective social, political, material and relational (Jones, 2012: 3) contexts for those learning when humans are omitted. Critically confronting these structures is not considered a negative activity. Whilst deterministic discourse for educational technology may leave people unconsciously restricted, I argue that, through a close analysis, it offers a deceptively spacious theoretical tool for debate about the wider social and economic context of educational technology. Methodologically it provides insights about ways technology, language and learning intersect across disciplinary borders (Giroux, 1992), as powerful, mutually constitutive elements, ever-present in networked learning situations. In sharing a replicable approach for linguistic analysis of policy discourse I hope to contribute to visions others have for a broader theoretical underpinning for educational technology, as a developing field of networked knowledge and research (Conole and Oliver, 2002; Andrews, 2011).
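A corpus-based analysis of the kind described above typically starts from a keyness statistic such as Dunning's log-likelihood, which flags words over-represented in the policy corpus relative to a reference corpus. A toy sketch on two invented eight-word "corpora" (not the study's data):

```python
import math
from collections import Counter

def log_likelihood(word, corpus_a, corpus_b):
    """Two-cell log-likelihood (G^2) keyness of a word across two
    tokenized corpora held as Counters: G^2 = 2 * sum O * ln(O / E)."""
    a, b = corpus_a[word], corpus_b[word]
    na, nb = sum(corpus_a.values()), sum(corpus_b.values())
    ll = 0.0
    for observed, n in ((a, na), (b, nb)):
        expected = n * (a + b) / (na + nb)
        if observed > 0:
            ll += observed * math.log(observed / expected)
    return 2 * ll

policy = Counter("technology will deliver learning technology must transform learning".split())
reference = Counter("people discuss learning and people question technology claims".split())
score = log_likelihood("technology", policy, reference)  # > 0: over-represented in policy
```

A Critical Discourse Analysis then reads such quantitative keyness results qualitatively, asking what the foregrounded vocabulary conceals about agency and choice.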

Relevance:

100.00%

Publisher:

Abstract:

The focusing of multimode laser diode beams is probably the most significant problem that hinders the expansion of high-power semiconductor lasers into many spatially demanding applications. Generally, the 'quality' of laser beams is characterized by the so-called 'beam propagation parameter' M2, defined as the ratio of the divergence of the laser beam to that of a diffraction-limited counterpart. Therefore, M2 determines the ratio of the beam focal-spot size to that of the 'ideal' Gaussian beam focused by the same optical system. Typically, M2 takes values of 20-50 for high-power broad-stripe laser diodes, making the focal spot 1-2 orders of magnitude larger than the diffraction limit. The idea of 'superfocusing' for high-M2 beams relies on a technique developed for the generation of Bessel beams from laser diodes using a cone-shaped lens (axicon). With traditional focusing of multimode radiation, the different curvatures of the wavefronts of the various constituent modes lead to a shift of their focal points along the optical axis, which in turn implies larger focal-spot sizes with correspondingly increased values of M2. In contrast, the generation of a Bessel-type beam with an axicon relies on 'self-interference' of each mode, thus eliminating the underlying reason for an increase in the focal-spot size. For an experimental demonstration of the proposed technique, we used a fiber-coupled laser diode with M2 below 20 and an emission wavelength in the ~1 μm range. Utilization of axicons with an apex angle of 140°, made by direct laser writing on a fiber tip, enabled the demonstration of an order-of-magnitude decrease in the focal-spot size compared to that achievable using an 'ideal' lens of unity numerical aperture. © 2014 SPIE.
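The scaling stated above can be made concrete: for a given focusing optic, the focal-spot radius of a multimode beam is M2 times the diffraction-limited Gaussian radius lambda/(pi*NA). A sketch with illustrative numbers consistent with the text (~1 μm wavelength, M2 = 20, unity numerical aperture):

```python
import math

def focal_spot_radius(m2, wavelength_m, na):
    """Focal-spot radius of a beam with propagation parameter M2:
    M2 times the diffraction-limited Gaussian waist w0 = lambda / (pi * NA)."""
    return m2 * wavelength_m / (math.pi * na)

lam = 1.06e-6                                # ~1 um emission wavelength
w_ideal = focal_spot_radius(1, lam, 1.0)     # diffraction limit, ~0.34 um
w_diode = focal_spot_radius(20, lam, 1.0)    # an M2 = 20 diode beam
ratio = w_diode / w_ideal                    # spot 20x larger than the limit
```

The axicon-based 'superfocusing' claims to beat this conventional scaling because each mode self-interferes into the same narrow Bessel core instead of focusing to a mode-dependent axial position.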

Relevance:

100.00%

Publisher:

Abstract:

The central aim of this interdisciplinary book is to make visible the intentionality behind the 'forgetting' of European women's contributions during the period between the two world wars in the context of politics, culture and society. It also seeks to record and analyse women's agency in the construction and reconstruction of Europe and its nation states after the First World War, and thus to articulate ways in which the writing of women's history necessarily entails the rewriting of everyone's history. By showing that the erasure of women's texts from literary and cultural history was not accidental but was ideologically motivated, the essays explicitly and implicitly contribute to debates surrounding canon formation. Other important topics are women's political activism during the period, antifascism, the contributions made by female journalists, the politics of literary production, genre, women's relationship with and contributions to the avant-garde, women's professional lives, and women's involvement in voluntary associations. In bringing together the work of scholars whose fields of expertise are diverse but whose interests converge on the inter-war period, the volume invites readers to make connections and comparisons across the whole spectrum of women's political, social, and cultural activities throughout Europe.

Relevance:

100.00%

Publisher:

Abstract:

Communicating corporate social responsibility (CSR) principles, initiatives, and activities has become a common practice of companies all around the world. It is quite apparent that firms use the internet more and more often to communicate their CSR initiatives to their stakeholders. In parallel with the extensive use of online media, more and more research has been carried out in the field of online CSR communication in the last decade as well. However, these studies usually have a strong descriptive focus, trying to reveal connections between the intensity of online communication of CSR values and activities and company size, industrial background, and other explanatory variables. In contrast, the authors analysed corporate web pages dedicated to CSR through critical lenses. Their research was designed to explore dissonances and contradictions within the online communication, and between the espoused values and the real activities, of firms from the construction, retail, and telecommunication industries in Hungary.