11 results for Typical application
at Universidad Politécnica de Madrid
Abstract:
This thesis considers two types of application of optical design: imaging optics on the one hand, and nonimaging (anidolic) optics on the other. Imaging optics aims at forming images of the points of an object on the image plane. Nonimaging optics, which arose from the development of concentrators and illuminators, focuses on the efficient transfer of light energy and has wide applications in illumination and concentration photovoltaics. In general, optical designs that result in compact systems are preferable for both types of optics. For nonimaging optical systems, compact optics keeps production costs low, for two reasons: (1) compact optics has a small volume, so less material is needed for mass production; (2) compact optics is small and light, which saves transportation costs. For imaging optical systems, in addition to the above advantages, compact optics also increases the portability of devices, a major advantage in wearable display technologies such as Head Mounted Displays (HMDs). This thesis therefore focuses on novel design approaches for compact optical systems for both imaging and nonimaging applications. The collimator is a classic nonimaging design used in illumination and, owing to the reciprocity of light, in concentration photovoltaics as well. There are several approaches to collimator design; in general, all of them yield an aperture-diameter-to-height ratio no greater than 2, that is, an aspect ratio greater than 0.5. In order to reduce the height of the collimator while maintaining the illuminated area, this thesis presents a multichannel collimator design.
In imaging optics, aspheric and freeform (rotationally non-symmetric) surfaces are useful for controlling image aberrations and for reducing the number and size of the optical elements. Owing to the rapid development of digital computing, ray tracing can be performed quickly and easily to evaluate the performance of an optical system. This has led to modern optical designs being generated with multi-parametric optimization techniques. These techniques require a good initial design as a starting point so that the final design, obtained after the optimization procedure, can reach the optimum. This calls for a direct design method for aspheric and freeform surfaces that yields designs close to the optimum. A design method based on differential equations is presented in this thesis to obtain designs with a single freeform surface and with two aspheric surfaces. The thesis comprises five chapters. Chapter 1 presents the basic concepts of imaging and nonimaging optics and introduces their classical design techniques. Chapter 2 describes the design of an ultra-compact collimator, whose ultra-low aspect ratio is achieved by using a multichannel structure; its design procedure is presented together with a fabricated prototype and its characterization, which confirms the ultra-compactness of the device. Chapter 3 describes the main concepts in the optimization of optical systems: the merit function and the damped least-squares method. The importance of a good starting point is demonstrated by presenting the same design example approached in different ways, and the differential equation method is introduced as an ideal tool for obtaining a good starting point for the final solution. Different interpolation and representation techniques for aspheric and freeform surfaces used in the optimization procedure are also presented. Chapter 4 describes the application of the differential equation method to the design of an optical system with a single freeform surface. Basic concepts of differential geometry are presented for a better understanding of the derivation of the partial differential equations, and a numerical solution procedure is given. The initial condition is chosen as an additional degree of freedom to control the surface on which the image is formed; based on this approach, anastigmatic designs can be readily obtained, and one is used as the starting point for a design example of an HMD with a single reflective surface, which after optimization shows improved MTF. Chapter 5 extends the differential equation method to designs with two aspheric surfaces. For single-surface designs, neither the image surface nor the mapping between object and image points can be prescribed; with one more surface, the image surface can be prescribed. This leads to a set of three implicit ordinary differential equations, whose numerical solution can be obtained with MATLAB or any other numerical computation software; the solution procedure is also explained in this chapter. The method yields an anastigmatic lens, which is compared with an aplanatic lens: the anastigmatic design converges much faster in optimization and the final solution shows better performance.
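The three implicit ordinary differential equations of Chapter 5 invite a brief illustration. The sketch below shows, under stated assumptions, how such a system F(t, y, y') = 0 can be integrated with standard numerical tools by solving for y' at each step; the toy equations are a hypothetical stand-in, not the thesis's actual design equations.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import fsolve

    def F(t, y, dy):
        # Toy implicit system standing in for the three coupled equations
        # linking the two aspheric profiles and the object-image mapping.
        return np.array([
            dy[0] - y[1],
            dy[1]**3 + dy[1] + y[0] - np.cos(t),  # implicit in dy[1]
            dy[2] - y[0] * y[1],
        ])

    def rhs(t, y):
        # Convert to explicit form: find dy such that F(t, y, dy) = 0.
        return fsolve(lambda dy: F(t, y, dy), np.zeros_like(y))

    sol = solve_ivp(rhs, (0.0, 5.0), [0.0, 0.1, 0.0], rtol=1e-8)
    print(sol.y[:, -1])

Wrapping a root-finder around an explicit integrator in this way is the usual strategy when the derivatives cannot be isolated in closed form.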
Abstract:
In this article, an experimental campaign aimed at validating a previously published simplified serviceability design method for the columns of long jointless structures is presented. The proposed method is also extended to include tension stiffening effects, which proved to be significant in structures with small amounts of reinforcement subjected to small axial loading; this extension significantly improves the predictions for this type of element. The campaign involved columns with different reinforcement and squashing load ratios, given that these parameters had been identified as crucial when designing columns subjected to imposed displacements. The experimental results are presented and discussed, with particular regard to cracking behaviour and structural stiffness, and considerations on tension stiffening effects are also made. Finally, the application of the method to typical bridge and building cases is presented, showing the feasibility of jointless construction and the limits that should be respected.
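For context on the tension-stiffening extension mentioned above, one common serviceability interpolation (for instance, the zeta-coefficient of Eurocode 2; the article's own formulation may differ) blends the uncracked (state I) and fully cracked (state II) responses:

    \zeta = 1 - \beta \left( \frac{\sigma_{sr}}{\sigma_{s}} \right)^{2}, \qquad
    \alpha = \zeta\, \alpha_{II} + (1 - \zeta)\, \alpha_{I}

Here \alpha is the deformation parameter of interest (e.g. curvature), \sigma_{sr} is the steel stress at first cracking, \sigma_{s} is the steel stress under the considered load, and \beta accounts for load duration or repetition. With little reinforcement and small axial load, \sigma_{s} remains close to \sigma_{sr}, so the response stays far from the fully cracked state and neglecting tension stiffening markedly overestimates deformations, consistent with the significance reported above.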
Abstract:
Software architectural evaluation is a key discipline used to identify, at early stages of a real-time system (RTS) development, the problems that may arise during its operation. Typical mechanisms supporting concurrency, such as semaphores, mutexes or monitors, usually lead to concurrency problems at execution time that are difficult to identify, reproduce and solve. For this reason, it is crucial to understand the root causes of these problems and to provide support for identifying and mitigating them at early stages of the system lifecycle. This paper presents the results of a research effort oriented to the development of a tool called 'Deadlock Risk Evaluation of Architectural Models' (DREAM) to assess deadlock risk in architectural models of an RTS. A particular architectural style, Pipelines of Processes in Object-Oriented Architectures–UML (PPOOA), was used to represent platform-independent models of an RTS architecture, supported by the PPOOA-Visio tool. We validated the technique presented here by using several case studies related to RTS development and comparing our results with those of other deadlock detection approaches supported by different tools. Here we present two of these case studies, one related to avionics and the other to planetary exploration robotics. Copyright © 2011 John Wiley & Sons, Ltd.
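As a generic illustration of the property such tools check (a sketch only, not the DREAM/PPOOA analysis itself), deadlock risk can be modeled as a cycle in a wait-for graph between tasks:

    def has_cycle(wait_for):
        """wait_for maps each task to the tasks whose resources it waits on."""
        WHITE, GREY, BLACK = 0, 1, 2
        color = {t: WHITE for t in wait_for}

        def dfs(t):
            color[t] = GREY
            for u in wait_for.get(t, ()):
                if color.get(u, WHITE) == GREY:   # back edge: a wait cycle
                    return True
                if color.get(u, WHITE) == WHITE and dfs(u):
                    return True
            color[t] = BLACK
            return False

        return any(color[t] == WHITE and dfs(t) for t in list(wait_for))

    # Two tasks, each holding a mutex the other needs: a classic deadlock.
    print(has_cycle({"T1": ["T2"], "T2": ["T1"]}))  # True

Performing this kind of check on an architectural model moves the detection of the cycle to design time, before it can manifest as an execution-time failure.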
Abstract:
This research investigates the ultimate earthquake resistance of typical RC moment-resisting frames designed according to current standards, in terms of ultimate energy absorption/dissipation capacity. Shake table tests of a 2/5-scale model, under several intensities of ground motion, are carried out. The loading effect of the earthquake is expressed as the total energy that the quake inputs to the structure, and the seismic resistance is interpreted as the amount of energy that the structure dissipates as cumulative inelastic strain energy.
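The energy-based reading of demand and capacity used here rests on the standard seismic energy balance, written below in one common form for a single-degree-of-freedom system in relative coordinates (the notation is generic, not the paper's):

    E_I = E_k + E_{\xi} + E_s + E_h, \qquad
    E_I = -\int_0^t m\,\ddot{u}_g(\tau)\,\dot{u}(\tau)\,d\tau

where E_I is the input energy, E_k the kinetic energy, E_{\xi} the damping energy, E_s the recoverable elastic strain energy, and E_h the cumulative inelastic (hysteretic) strain energy, the term interpreted above as the measure of seismic resistance.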
Abstract:
In this work, the power management techniques implemented in a high-performance node for Wireless Sensor Networks (WSN), based on a RAM-based FPGA, are presented. This new custom node architecture is intended for high-end WSN applications that include complex sensor management (such as video cameras), computationally demanding tasks (such as image encoding or robust encryption), and/or higher data bandwidth needs. For these complex processing tasks, while still meeting low-power design requirements, it is shown that the combination of several techniques, namely extensive HW algorithm mapping, smart management of power islands to selectively switch components on and off, smart and low-energy partial reconfiguration, and an adequate set of energy-saving modes and wake-up options, can yield energy results that compete with, and improve on, the energy usage of the typical low-power microcontrollers used in many WSN node architectures. Indeed, the results show that higher-complexity tasks favor HW-based platforms, while the flexibility achieved through dynamic and partial reconfiguration techniques can be comparable to that of SW-based solutions.
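The trade-off behind these results can be seen with a back-of-the-envelope duty-cycle energy model; all figures below are hypothetical and serve only to illustrate the effect:

    def task_energy(p_active_mw, t_active_s, p_sleep_mw, period_s):
        """Energy (mJ) per period for a node that runs, then sleeps."""
        return p_active_mw * t_active_s + p_sleep_mw * (period_s - t_active_s)

    period = 10.0  # hypothetical: one image-encoding job every 10 s
    mcu = task_energy(p_active_mw=30.0, t_active_s=8.0, p_sleep_mw=0.05, period_s=period)
    fpga = task_energy(p_active_mw=120.0, t_active_s=0.5, p_sleep_mw=0.20, period_s=period)
    print(f"MCU: {mcu:.1f} mJ/job, FPGA: {fpga:.1f} mJ/job")

A HW-mapped task draws more instantaneous power but finishes much sooner, so for sufficiently complex tasks the energy per job can drop well below that of a slow SW implementation, in line with the results above.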
Abstract:
The design of a nuclear power plant has to follow a number of regulations aimed at limiting the risks inherent in this type of installation. The goal is to prevent, and to limit the consequences of, any possible incident that might threaten the public or the environment. To verify that the safety requirements are met, a safety assessment process is followed. Safety analysis is a key component of a safety assessment, and it incorporates both probabilistic and deterministic approaches. The deterministic approach attempts to ensure that the various situations, and in particular the accidents, considered to be plausible have been taken into account, and that the monitoring systems and the engineered safety and safeguard systems will be capable of ensuring the safety goals. Probabilistic safety analysis, on the other hand, tries to demonstrate that the safety requirements are met for potential accidents both within and beyond the design basis, thus identifying vulnerabilities not necessarily accessible through deterministic safety analysis alone. Probabilistic safety assessment (PSA) methodology is widely used in the nuclear industry and is especially effective for the comprehensive assessment of the measures needed to prevent accidents of small probability but severe consequences. Still, the trend towards risk-informed regulation (RIR) has demanded a more extended use of risk assessment techniques, with a significant need to further extend the scope and quality of PSA. This is where the theory of stimulated dynamics (TSD) intervenes, as it is the mathematical foundation of the integrated safety assessment (ISA) methodology developed by the Modelling and Simulation (MOSI) branch of the CSN (Consejo de Seguridad Nuclear). This methodology attempts to extend classical PSA to include accident dynamic analysis, an assessment of the damage associated with the transients, and a computation of the damage frequency. The application of the ISA methodology requires a computational framework called SCAIS (Simulation Code System for Integrated Safety Assessment). SCAIS supports accident dynamic analysis through the simulation of nuclear accident sequences and operating procedures; furthermore, it includes probabilistic quantification of fault trees and sequences, and the integration and statistical treatment of risk metrics. SCAIS makes intensive use of code coupling techniques to join typical thermal-hydraulic analysis, severe accident and probability calculation codes. The integration of accident simulation into the risk assessment process, which requires the use of complex nuclear plant models, is what makes the methodology so powerful, yet at the cost of an enormous increase in complexity. Since the computational cost is concentrated in the accident simulation codes, the question arises of whether the number of required simulations can be reduced; this is the focus of the present work. This document presents work on the investigation of more efficient techniques applied to the risk assessment process within the ISA methodology; the primary goal of these techniques is to decrease the number of simulations needed for an adequate estimation of the damage probability. As the methodology and tools are relatively recent, little work has been done along this line of investigation, making it a difficult but necessary task; because of time limitations, the scope of the work had to be reduced, and some assumptions were made in order to work with simplified scenarios best suited to an initial approximation to the problem.
The following section explains in detail the process followed to design and test the developed techniques. The next section then introduces the general concepts and formulae of the TSD theory, which are at the core of the risk assessment process. Afterwards, a description of the simulation framework requirements and design is given, followed by an introduction to the developed techniques, with full detail of their mathematical background and procedures. Later, the test case is described and the results of applying the techniques are shown. Finally, the conclusions are presented and future lines of work are outlined.
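A minimal sketch of the kind of gain such techniques pursue, using generic importance sampling rather than the specific methods developed in this work: if every evaluation of the damage condition stands in for a costly accident simulation, sampling from a distribution shifted towards the damage region and reweighting estimates a small damage probability with far fewer runs than crude Monte Carlo.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def damage(x):
        # Stand-in for a costly accident simulation: damage if x exceeds 4.5.
        return x > 4.5

    n = 10_000
    # Crude Monte Carlo under the nominal N(0, 1) model: nearly all runs wasted.
    p_mc = damage(rng.standard_normal(n)).mean()

    # Importance sampling: draw near the damage region, reweight by density ratio.
    q = rng.normal(4.5, 1.0, n)
    w = stats.norm.pdf(q) / stats.norm.pdf(q, loc=4.5)
    p_is = np.mean(damage(q) * w)

    print(p_mc, p_is, 1 - stats.norm.cdf(4.5))  # exact value ~3.4e-6

Crude sampling typically returns zero here, while the reweighted estimate recovers the tail probability with the same budget of runs.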
Abstract:
This paper presents a comprehensive review of step-up single-phase non-isolated inverters suitable for ac-module applications. In order to compare the most feasible of the reviewed topologies, a benchmark is set, based on a typical ac-module application and considering the requirements of both the solar panel and the grid. The selected solutions are designed and simulated in compliance with the benchmark, obtaining passive and semiconductor component ratings in order to perform a comparison in terms of size and cost. A discussion of the analyzed topologies with regard to the obtained ratings, as well as to ground currents, is presented, and recommendations for topological solutions complying with the application benchmark are provided.
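The step-up requirement follows from simple arithmetic on the benchmark quantities; the figures below are representative of an ac-module application, not necessarily those of the paper's benchmark:

    import math

    v_mpp_min = 25.0    # V, low end of a typical PV module MPP voltage range
    v_grid_rms = 230.0  # V, European single-phase grid
    v_peak = math.sqrt(2) * v_grid_rms
    print(f"peak grid voltage: {v_peak:.0f} V, required gain: {v_peak / v_mpp_min:.1f}x")

A module-level inverter must therefore provide a voltage gain of roughly an order of magnitude, which rules out plain buck-derived stages and motivates the step-up topologies reviewed.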
Abstract:
After the 2010 Haiti earthquake, which hit Port-au-Prince, the capital of Haiti, a multidisciplinary working group of specialists (seismologists, geologists, engineers and architects) from different Spanish universities and from Haiti joined efforts under the SISMO-HAITI project (financed by the Universidad Politecnica de Madrid), with one objective: the evaluation of seismic hazard and risk in Haiti and its application to seismic design, urban planning, and emergency and resource management. In this paper, as a first step towards estimating the structural damage caused by future earthquakes in the country, damage functions are calibrated by means of a two-stage procedure. After compiling a database of the damage observed in the city after the earthquake, the exposure model (building stock) was classified and, through an iterative two-step calibration process, a specific set of damage functions for the country is proposed. Additionally, Next Generation Attenuation (NGA) models and Vs30 models have been analysed to choose the most appropriate ones for the seismic risk estimation in the city. Finally, in a subsequent paper, these functions will be used to estimate a seismic risk scenario for a future earthquake.
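As a sketch of the kind of calibration involved (synthetic data and a single least-squares step, not the SISMO-HAITI database or its two-stage procedure), a lognormal fragility curve can be fitted to observed damage ratios as follows:

    import numpy as np
    from scipy import stats
    from scipy.optimize import curve_fit

    im = np.array([0.1, 0.2, 0.3, 0.5, 0.8])             # intensity measure, e.g. PGA (g)
    observed = np.array([0.02, 0.10, 0.28, 0.55, 0.83])  # fraction of buildings damaged

    def fragility(x, median, beta):
        # P(damage | IM = x) as a lognormal CDF with given median and log-std beta.
        return stats.lognorm.cdf(x, s=beta, scale=median)

    (median, beta), _ = curve_fit(fragility, im, observed, p0=(0.4, 0.6))
    print(f"median IM = {median:.2f} g, log-std = {beta:.2f}")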
Abstract:
This dissertation employs and develops Bayesian methods for use in typical geotechnical analyses, with particular emphasis on (i) the assessment and selection of geotechnical models based on empirical correlations, and (ii) the development of probabilistic predictions of the outcomes expected for complex geotechnical models. Applications to geotechnical problems are developed as follows. (1) For intact rocks, we present a Bayesian framework for model assessment to estimate Young's moduli from the uniaxial compressive strength (UCS). Our approach provides uncertainty estimates for parameters and predictions, and can differentiate among the sources of error. We develop 'rock-specific' models for common rock types, and illustrate how such 'initial' models can be 'updated' to incorporate new project-specific information as it becomes available, reducing model uncertainties and improving their predictive capabilities. (2) For rock masses, we present an approach, based on model selection criteria, to select the most appropriate model, among a set of candidates, to estimate the deformation modulus of a rock mass given a set of observed data. Once the most appropriate model is selected, a Bayesian framework is employed to develop predictive distributions of the deformation moduli of rock masses, and to update them with new project-specific data. Such Bayesian updating can significantly reduce the associated predictive uncertainty and therefore affect the computed estimates of the probability of failure, which is of significant interest for reliability-based rock engineering design. (3) In the preliminary design stage of rock engineering, information about geomechanical and geometrical parameters, in situ stress, or support parameters is often scarce or incomplete. This poses difficulties for traditional empirical correlations, which cannot make predictions from incomplete data. We therefore propose the use of Bayesian Networks to deal with incomplete data and, in particular, develop a Naïve Bayes classifier to predict the probability of occurrence of tunnel squeezing based on five input parameters that are commonly available, at least partially, at the design stage.
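A minimal sketch of why a Naïve Bayes classifier copes naturally with incomplete data (illustrative numbers and hypothetical predictors, not the thesis model or its five actual input parameters): any unobserved input simply drops out of the likelihood product.

    import numpy as np
    from scipy import stats

    prior = {"squeezing": 0.3, "no_squeezing": 0.7}
    # Per-class Gaussian likelihoods (mean, std) for two hypothetical inputs.
    like = {
        "squeezing":    {"overburden_m": (600.0, 150.0), "rock_quality": (30.0, 10.0)},
        "no_squeezing": {"overburden_m": (250.0, 120.0), "rock_quality": (60.0, 15.0)},
    }

    def posterior(obs):
        score = {}
        for c in prior:
            p = prior[c]
            for name, x in obs.items():  # missing inputs never enter the product
                mu, sd = like[c][name]
                p *= stats.norm.pdf(x, mu, sd)
            score[c] = p
        z = sum(score.values())
        return {c: s / z for c, s in score.items()}

    # Only the overburden is known; the classifier still returns a probability.
    print(posterior({"overburden_m": 550.0}))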
Abstract:
Reliability analyses provide an adequate tool to account for the inherent uncertainties in geotechnical parameters. This dissertation develops a simple linearization-based approach, using first- or second-order approximations, to efficiently evaluate the system reliability of geotechnical problems.
First, reliability methods are employed to analyze two aspects of tunnel design: face stability and the performance of support systems. Several reliability approaches are employed: the first-order reliability method (FORM), the second-order reliability method (SORM), the response surface method (RSM) and importance sampling (IS). The results show that the assumed distribution types and correlation structures of the random variables have a significant effect on the reliability results, which emphasizes the importance of an adequate characterization of geotechnical uncertainties in practical applications. The results also show that both FORM and SORM can be used to estimate the reliability of tunnel support systems, and that SORM can outperform FORM at an acceptable additional computational effort. A linearization approach is then developed to evaluate the system reliability of series geotechnical problems. The approach needs only the information provided by FORM: the vector of reliability indices of the limit state functions (LSFs) composing the system, and their correlation matrix. Two common geotechnical problems, the stability of a slope in layered soil and a circular tunnel excavated in rock, are employed to demonstrate the simplicity, accuracy and efficiency of the suggested procedure, and its advantages with respect to alternative computational tools are discussed. It is also shown that, if necessary, SORM, which approximates the true LSF better than FORM, can be employed to compute better estimates of the system reliability. Finally, a new approach using Genetic Algorithms (GAs) is presented to accurately identify the representative slip surfaces (RSSs) of layered soil slopes, which are then employed to estimate the system reliability of the slopes using the proposed linearization approach. Three typical benchmark slopes with layered soils are adopted to demonstrate the efficiency, accuracy and robustness of the suggested procedure, and its advantages with respect to alternative methods are discussed. The results show that the proposed approach provides reliability estimates that improve on previously published results, emphasizing the importance of finding good RSSs, and especially adequate (probabilistic) critical slip surfaces, which might be non-circular, to obtain good estimates of the reliability of soil slope systems.
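The core computation of the linearization approach is compact enough to state directly: with FORM supplying the reliability indices of the LSFs and their correlation matrix, the failure probability of a series system is one minus the multivariate standard normal CDF evaluated at the index vector. A minimal sketch with illustrative numbers (not taken from the dissertation's examples):

    import numpy as np
    from scipy.stats import multivariate_normal, norm

    beta = np.array([2.5, 3.0, 2.8])   # FORM reliability indices of the LSFs
    R = np.array([[1.0, 0.6, 0.4],
                  [0.6, 1.0, 0.5],
                  [0.4, 0.5, 1.0]])    # correlation matrix of the linearized LSFs

    # Series system: failure if any LSF is violated, so P_f = 1 - Phi_n(beta; R).
    p_sys = 1.0 - multivariate_normal(mean=np.zeros(3), cov=R).cdf(beta)

    # Simple uni-modal bounds for a series system, for comparison.
    p_lo, p_hi = norm.cdf(-beta).max(), norm.cdf(-beta).sum()
    print(f"system P_f = {p_sys:.2e} (bounds: {p_lo:.2e} to {p_hi:.2e})")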
Abstract:
A Mindlin plate with periodically distributed rib patterns is analyzed by using homogenization techniques based on asymptotic expansion methods. The stiffness matrix of the homogenized plate is found to depend on the geometrical characteristics of the periodic cell (its skewness, plan shape, thickness variation, etc.) and on the elastic constants of the plate material. This plate stiffness matrix is computed by averaging, over the cell domain, the solutions of different periodic boundary value problems. These boundary value problems are defined in variational form by linear first-order differential operators on the cell domain, and the boundary conditions of the variational equation correspond to a periodic structural problem. The elements of the stiffness matrix of the homogenized plate are obtained as linear combinations of the averaged solution functions of the above-mentioned boundary value problems. Finally, an illustrative example of the application of this homogenization technique to hollowed plates and to plate structures with rib patterns regularly arranged over their area is shown. The possibility of using the present procedure in professional practice for the actual analysis of floors of typical buildings is also emphasized.
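In the standard asymptotic-expansion setting (generic elasticity notation, which may differ from the paper's plate formulation), the averaging procedure described above takes the following form: the periodic cell problems deliver corrector fields \chi^{kl} on the cell Y, and the homogenized coefficients are cell averages of the corrected local stiffness:

    D^{H}_{ijkl} = \frac{1}{|Y|} \int_{Y} D_{ijmn}(y)
        \left( \delta_{mk}\,\delta_{nl} + \frac{\partial \chi^{kl}_{m}}{\partial y_{n}} \right) dy

Each \chi^{kl} solves a periodic boundary value problem on Y, and the plate stiffness matrix is then assembled from linear combinations of these averaged solutions, as stated above.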