58 results for In-band full-duplex

at Universidad Politécnica de Madrid


Relevance:

100.00%

Publisher:

Abstract:

One of the current challenges of materials science is to find new technologies able to make the most of renewable energies. An example of new proposals in this field is the intermediate-band (IB) materials, which promise higher efficiencies in photovoltaic applications (through intermediate-band solar cells) or in heterogeneous photocatalysis (using their nanoparticles for the light-induced degradation of pollutants or for the efficient photoevolution of hydrogen from water). An IB material consists of a semiconductor into whose gap a new level, the intermediate band, is introduced [1]; this band should be partially filled with electrons and completely separated from both the valence band (VB) and the conduction band (CB). This scheme (figure 1) allows an electron to be promoted from the VB to the IB, and from the latter to the CB, upon absorption of photons with energy below the band gap Eg, so that energy is absorbed over a wider range of the solar spectrum and a higher current is obtained without sacrificing the photovoltage (or the chemical driving force) corresponding to the full bandgap Eg, thus increasing the overall efficiency. Applied to photocatalysis, this concept would allow using photons over a wider visible range while keeping the same redox capacity. It is important to note that it differs from the classic photocatalyst doping principle, which essentially just tries to decrease the bandgap: this new type of material would keep the full bandgap potential while also using lower-energy photons. In our group several IB materials have been proposed, mainly for the photovoltaic application, based on extensive doping of known semiconductors with transition metals [2], examining their electronic structures with DFT calculations. Here we refer to In2S3 and SnS2, which contain octahedral cations; when doped with Ti or V, an IB is formed according to quantum calculations (see e.g. figure 2).
We have used a solvothermal synthesis method to prepare in nanocrystalline form the In2S3 thiospinel and the layered compound SnS2 (which, when undoped, have bandgaps of 2.0 and 2.2 eV respectively), with the cation substituted by vanadium at a ~10% level. This substitution has been studied by characterizing the materials with different physical and chemical techniques (TXRF, XRD, HR-TEM/EDS) (see e.g. figure 3) and verifying with UV spectrometry that it introduces into the spectrum the sub-bandgap features predicted by the calculations (figure 4). For both sulphide-type nanoparticles (doped and undoped), the photocatalytic activity was studied by following at room temperature the oxidation of formic acid in aqueous suspension, a simple reaction easily monitored by UV-Vis spectroscopy. The spectral response of the process is measured using a collection of band-pass filters that admit only selected wavelengths into the reaction system. This method allows checking the spectral range in which the materials are active in the photodecomposition (which coincides with the band gap for the undoped samples), proving that for the vanadium-substituted samples this range is extended, making it possible to cover the whole visible-light range. Furthermore, these new materials prove more photocorrosion-resistant than toxic CdS, which is a well-known compound frequently used in tests of visible-light photocatalysis. These materials are thus promising not only for degradation of pollutants (or for photovoltaic cells) but also for efficient photoevolution of hydrogen from water; work in this direction is now being pursued.
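As a rough numerical illustration of the IB idea above, the sketch below compares the photon flux a single-gap absorber collects with the extra two-step (VB→IB→CB) photon pairs an intermediate band can add, using a 5778 K blackbody as a crude stand-in for sunlight. The gap values Eg, E_iv and E_ic are hypothetical examples, not those of the doped sulphides.

```python
import numpy as np

# Toy model, NOT a detailed-balance calculation: relative photon counts
# from a 5778 K blackbody spectrum in the energy windows of interest.
kT = 1.381e-23 * 5778.0                    # k*T in J

def photon_flux(e_lo_ev, e_hi_ev, n=20000):
    """Relative blackbody photon flux between photon energies (eV)."""
    E = np.linspace(e_lo_ev, e_hi_ev, n) * 1.602e-19
    spec = E**2 / (np.exp(E / kT) - 1.0)   # proportional to photons per energy
    return spec.sum() * (E[1] - E[0])      # simple Riemann sum

Eg, E_iv, E_ic = 2.0, 1.2, 0.8             # eV (hypothetical), E_iv + E_ic = Eg
direct = photon_flux(Eg, 5.0)              # photons usable by VB->CB alone
two_step = min(photon_flux(E_iv, Eg),      # VB->IB pump
               photon_flux(E_ic, E_iv))    # IB->CB pump (pair-limited)
gain = (direct + two_step) / direct
print(f"relative gain in collected photon pairs: {gain:.2f}")
```

Even this crude count shows the sub-bandgap windows contributing photons comparable in number to the above-gap flux, which is the motivation for the IB scheme.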

Relevance:

100.00%

Publisher:

Abstract:

Underground coal mine explosions generally arise from the ignition of a methane/air mixture, which can also trigger a subsequent coal dust explosion. Traditionally such explosions have been fought by eliminating one or several of the factors the explosion needs to take place. Although several preventive measures are taken to prevent explosions, other measures should be considered to reduce their effects or even to extinguish the flame front. Unlike other protection methods that remove one or two elements of the explosion triangle (the ignition source, the oxidizing agent and the fuel), explosion barriers act on all of them: they reduce the quantity of coal in suspension, cool the flame front, and the steam generated by vaporization displaces the oxygen feeding the flame. Passive water barriers are autonomous protection systems against explosions that reduce the effects of methane and/or flammable dust explosions to a satisfactory safety level. The barriers are activated by the pressure wave produced by the explosion, which destroys the barrier troughs and disperses the extinguishing agent uniformly throughout the gallery cross-section in sufficient quantity to extinguish the explosion flame. Full-scale tests have been carried out in the Polish Barbara experimental mine at the GIG Central Mining Institute in order to determine the requirements and optimal installation conditions of these devices for the small-section galleries that are very frequent in Spanish coal mines. The full-scale test results have been analyzed to understand the explosion timing and development, in order to assess the use of water barriers in the typical small cross-section Spanish galleries. Several arrangements of water barriers have been designed and tested to verify the effectiveness of the explosion suppression in each case.
The results obtained demonstrate the efficiency of the water barriers in stopping the flame front even with smaller amounts of water than those established by the European standard. According to the tests performed, water barrier activation times are between 0.52 s and 0.78 s and flame propagation speeds are between 75 m/s and 80 m/s. The maximum pressures (Pmax) obtained in the full-scale tests varied between 0.2 bar and 1.8 bar. Passive barriers protect effectively against the spread of the flame but cannot be relied on to safeguard the gallery section between the ignition source and the first row of water troughs or bags, or even beyond them, as the pressure can remain high after them even once the flame front has been extinguished.
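A back-of-envelope consequence of the measured figures above is the distance the flame front can cover before a barrier has fully discharged; the sketch below simply multiplies the reported ranges and is an illustration, not the siting rule of the European standard.

```python
# Flame travel during barrier activation, d = v_flame * t_activation,
# using the ranges reported in the full-scale tests above.
t_activation = (0.52, 0.78)      # s, measured activation times
v_flame = (75.0, 80.0)           # m/s, measured flame propagation speeds

d_min = v_flame[0] * t_activation[0]   # best case
d_max = v_flame[1] * t_activation[1]   # worst case
print(f"flame travel during activation: {d_min:.1f} - {d_max:.1f} m")
```

This is one way to see why the gallery stretch before (and just after) the first row of troughs remains unprotected.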

Relevance:

100.00%

Publisher:

Abstract:

A system for estimating unknown rectangular room dimensions based on two radio transceivers, both capable of full-duplex operation, is presented. The approach is based on channel impulse response (CIR) measurements taken at the same place where the signal is transmitted (generated), commonly known as the self-to-self CIR. Another novelty is the receiver antenna design, which consists of eight sectorized antennas with 45° aperture in the horizontal plane whose total coverage corresponds to the isotropic one. The dimensions of a rectangular room are reconstructed directly from radio impulse responses by extracting features such as round trip time, received signal strength and reverberation time. Using a radar approach, the positions of walls and corners are estimated. Additionally, the absorption coefficient of the test environment is analyzed and a typical coefficient for a furnished office room is proposed; its accuracy is confirmed by the volume estimation results. Tests using measured data were performed, and the simulation results confirm the feasibility of the approach.
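The ranging step underlying the radar approach above can be sketched in a few lines: a wall's distance follows from the round trip time of the self-to-self echo as d = c·RTT/2. The RTT values below are made-up examples, not measurements from the paper.

```python
# Wall ranging from self-to-self channel impulse response round trip times.
C = 299_792_458.0                          # speed of light, m/s

def wall_distance(rtt_s: float) -> float:
    """Distance to a reflecting wall from the echo round trip time (s)."""
    return C * rtt_s / 2.0

# Hypothetical RTTs seen by two opposite 45-degree antenna sectors:
rtt_left, rtt_right = 20e-9, 33.4e-9       # seconds
room_width = wall_distance(rtt_left) + wall_distance(rtt_right)
print(f"estimated room width: {room_width:.2f} m")
```

With the sectorized antenna, each of the eight sectors yields its own dominant echo, so opposite sectors together give one room dimension.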

Relevance:

100.00%

Publisher:

Abstract:

A system for simultaneous 2D estimation of rectangular room dimensions and transceiver localization is proposed. The system is based on two radio transceivers, both capable of full-duplex operation (simultaneous transmission and reception). This property enables measurement of the channel impulse response (CIR) at the same place the signal is transmitted (generated), commonly known as the self-to-self CIR. Another novelty of the proposed system is the spatial CIR discrimination made possible by the receiver antenna design, which consists of eight sectorized antennas with 45° aperture in the horizontal plane and total coverage equal to the isotropic one. The dimensions of a rectangular room are reconstructed directly from spatial radio impulse responses by extracting the round trip time (RTT). Using a radar approach, the positions of walls and corners are estimated. Tests using measured data were performed, and the simulation results confirm the feasibility of the approach.
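Under the simplifying assumption of an axis-aligned room with the transceiver at the origin, the 2D reconstruction step can be sketched as follows: each sector's RTT gives one wall distance, and corners fall at intersections of perpendicular walls. The RTTs below are illustrative, not measured values.

```python
# 2D room reconstruction sketch: wall distances from sector RTTs, corners
# as intersections of perpendicular walls. Transceiver at the origin.
C = 299_792_458.0                          # speed of light, m/s

def dist(rtt_s: float) -> float:
    return C * rtt_s / 2.0

# Hypothetical RTTs from four sectors facing +x, -x, +y, -y:
d_east, d_west = dist(16.7e-9), dist(10.0e-9)
d_north, d_south = dist(13.3e-9), dist(6.7e-9)

corners = [(d_east, d_north), (d_east, -d_south),
           (-d_west, d_north), (-d_west, -d_south)]
width, depth = d_east + d_west, d_north + d_south
print(f"room: {width:.2f} m x {depth:.2f} m, NE corner at "
      f"({corners[0][0]:.2f}, {corners[0][1]:.2f})")
```

The asymmetry between opposite sectors (d_east vs d_west) is what localizes the transceiver inside the room at the same time.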

Relevance:

100.00%

Publisher:

Abstract:

This work aims to contribute to a further understanding of the fundamentals of crystallographic slip and grain boundary sliding in the γ-TiAl Ti–45Al–2Nb–2Mn (at%)–0.8 vol% TiB2 intermetallic alloy, by means of in situ high-temperature tensile testing combined with electron backscatter diffraction (EBSD). Several microstructures, containing different fractions and sizes of lamellar colonies and equiaxed γ-grains, were fabricated by either centrifugal casting or powder metallurgy, followed by heat treatment at 1300 °C and furnace cooling. In situ tensile and tensile-creep experiments were performed in a scanning electron microscope (SEM) at temperatures ranging from 580 °C to 700 °C. EBSD was carried out in selected regions before and after straining. Our results suggest that, during constant strain rate tests, true twin γ/γ interfaces are the weakest barriers to dislocations and, thus, that the relevant length scale might be influenced by the distance between non-true twin boundaries. Under creep conditions, both grain/colony boundary sliding (G/CBS) and crystallographic slip are observed to contribute to deformation. The incidence of boundary sliding is particularly high in γ grains of duplex microstructures. The slip activity during creep deformation in different microstructures was evaluated by trace analysis. Special emphasis was placed on distinguishing the compliance of different slip events with the Schmid law with respect to the applied stress.
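The Schmid-law compliance check mentioned above reduces to computing the Schmid factor m = cos φ · cos λ for each candidate slip system. A minimal sketch with cubic-style illustrative indices follows (γ-TiAl is actually tetragonal, so this is only schematic):

```python
import numpy as np

def schmid_factor(load_axis, plane_normal, slip_dir):
    """m = |cos(phi)| * |cos(lambda)| for a slip system under uniaxial load."""
    t = np.asarray(load_axis, float)
    n = np.asarray(plane_normal, float)
    b = np.asarray(slip_dir, float)
    t, n, b = (v / np.linalg.norm(v) for v in (t, n, b))
    return abs(t @ n) * abs(t @ b)

# Illustrative cubic-style system: tension along [001], slip on (111)[10-1].
m = schmid_factor([0, 0, 1], [1, 1, 1], [1, 0, -1])
print(f"Schmid factor: {m:.3f}")
```

In a trace analysis, slip events whose observed trace matches a system with a high m comply with the Schmid law; events on low-m systems indicate non-Schmid behaviour or local stress redistribution.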

Relevance:

100.00%

Publisher:

Abstract:

Optical filters are crucial elements in optical communication networks, and their effect on the optical signal can seriously degrade communication quality. In this paper we study and simulate the optical signal impairment and crosstalk penalty caused by different kinds of filters: Butterworth, Bessel, fiber Bragg grating (FBG) and Fabry-Perot (F-P). Signal impairment from the filter concatenation effect and out-of-band and in-band crosstalk penalties are analyzed in terms of Q-penalty, eye opening penalty (EOP) and optical spectrum. The simulation results show that the signal impairment and crosstalk penalty induced by the Butterworth filter are the lowest among these four types of filters. Regarding the concatenation effect, when the center frequencies of all filters are aligned perfectly with the laser frequency, 12 50-GHz Butterworth filters can be cascaded with 1-dB EOP; this number is reduced to 9 when the center frequency is misaligned by 5 GHz. In 50-GHz channel spacing DWDM networks, the total Q-penalty induced by a Butterworth-filter-based demultiplexer and multiplexer pair is lower than 0.5 dB when the filter bandwidth is in the range of 42-46 GHz.
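The concatenation effect discussed above can be illustrated with the closed-form passband narrowing of N cascaded identical Butterworth filters; the filter order and single-stage bandwidth below are example values, not the paper's simulation settings.

```python
# Closed-form 3-dB bandwidth of N cascaded identical Butterworth filters.
# A single stage of order k with 3-dB bandwidth B has
#   |H(f)|^2 = 1 / (1 + (2f/B)^(2k)),
# so N stages reach half power where (2f/B)^(2k) = 2**(1/N) - 1.
def cascade_3db_bandwidth(b3db_ghz: float, order: int, n_filters: int) -> float:
    return b3db_ghz * (2.0 ** (1.0 / n_filters) - 1.0) ** (1.0 / (2 * order))

b_single = 44.0   # GHz, mid-range of the 42-46 GHz band quoted above
for n in (1, 6, 12):
    eff = cascade_3db_bandwidth(b_single, order=3, n_filters=n)
    print(f"{n:2d} cascaded filters -> {eff:.1f} GHz effective bandwidth")
```

The steadily shrinking effective passband is what eventually clips the signal spectrum and produces the EOP growth that limits the cascade depth.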

Relevance:

100.00%

Publisher:

Abstract:

There is interest in performing full-core pin-by-pin computations for present nuclear reactors. In this type of problem, the use of a transport approximation like the diffusion equation requires the introduction of correction parameters. Interface discontinuity factors (IDFs) can improve the diffusion solution until it nearly reproduces a transport solution. Nevertheless, calculating accurate pin-by-pin IDFs requires knowledge of the heterogeneous neutron flux distribution, which depends on the boundary conditions of the pin-cell as well as the local variables along nuclear reactor operation. As a consequence, it is impractical to compute them for each possible configuration. An alternative for generating accurate pin-by-pin interface discontinuity factors is to calculate reference values using zero-net-current boundary conditions and afterwards to synthesize their dependence on the main neighborhood variables. In this way the factors can be accurately computed during fine-mesh diffusion calculations by correcting the reference values as a function of the actual environment of the pin-cell in the core. In this paper we propose a parameterization of the pin-by-pin interface discontinuity factors that allows implementing a cross-section library able to treat the neighborhood effect. First results are presented for typical PWR configurations.
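A minimal sketch of the synthesis idea, assuming (purely for illustration) a first-order linear dependence of the IDF on two neighborhood variables; the actual functional form, variables and values used in the paper's library may well differ.

```python
# Hypothetical correction of a reference interface discontinuity factor
# (computed with zero-net-current boundary conditions) using precomputed
# sensitivities to neighbourhood variables. All numbers are illustrative.
def corrected_idf(f_ref, sensitivities, env, env_ref):
    """First-order correction of a reference IDF for the actual environment."""
    return f_ref + sum(s * (env[k] - env_ref[k])
                       for k, s in sensitivities.items())

f_ref = 1.05                                          # reference IDF
sens = {"flux_ratio": 0.08, "burnup_diff": 1.5e-4}    # d(IDF)/d(variable)
env_ref = {"flux_ratio": 1.0, "burnup_diff": 0.0}     # reference environment
env = {"flux_ratio": 1.12, "burnup_diff": 50.0}       # actual pin-cell environment

print(f"corrected IDF: {corrected_idf(f_ref, sens, env, env_ref):.4f}")
```

The point of the parameterization is exactly this shape of lookup: a cheap correction applied during the fine-mesh diffusion sweep instead of a fresh transport calculation per configuration.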

Relevance:

100.00%

Publisher:

Abstract:

Indium nitride (InN) has been the subject of intense research in recent years. Some of its most attractive features are its excellent transport properties, such as its small band-edge electron effective mass, high electron mobilities and peak drift velocities, and high-frequency transient drift velocity oscillations [1]. These suggest enormous potential applications for InN in high-frequency electronic devices. To date, however, the high unintentional bulk electron concentration (n ≈ 10^18 cm^-3) of undoped InN samples and the surface electron accumulation layer make it a hard task to create a reliable metal-semiconductor Schottky barrier. Some attempts have been made to overcome this problem by means of material oxidation [2] or deposition of insulators [3]. In this work we present a way to obtain electrical rectification behaviour by means of heterojunction growth. Due to the large band-gap differences among nitride semiconductors, it is possible to create a structure with high band offsets. In InN/GaN heterojunctions, depending on the GaN doping, the magnitudes of the conduction and valence band offsets are the critical parameters that distinguish the different electrical behaviours. The earliest estimate of the valence band offset at an InN–GaN heterojunction in a wurtzite structure was ~0.85 eV [4], while the Schottky barrier heights were determined to be ~1.4 eV [5]. We grew In-face InN layers of varying thickness (between 150 nm and 1 µm) by plasma-assisted molecular beam epitaxy (PA-MBE) on GaN templates (GaN/Al2O3), at temperatures ranging between 300 °C and 450 °C. The different doping of the GaN templates (Si, Fe or Mg doping) results in different band alignments of the two semiconductors, changing the electrical barriers for carriers and consequently the electrical conduction behaviour. Device processing includes metallization of the ohmic contacts on InN and GaN, for which we used Ti/Al/Ni/Au.
Whereas an ohmic contact on InN is straightforward, the main issue was the fabrication of the contact on GaN due to the very low decomposition temperature of InN. A standard ohmic contact on GaN is generally obtained by high-temperature rapid thermal annealing (RTA), typically performed between 500 °C and 900 °C [6]. In this case, the presence of In-face InN imposes an upper limit of about 450 °C on the temperature for the thermal annealing process and ohmic contact formation. We will present results on the morphology of the InN layers by X-ray diffraction and SEM, and electrical measurements, in particular current-voltage and capacitance-voltage characteristics.
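As context for the rectification sought above, the asymmetry a good barrier produces can be illustrated with the thermionic-emission diode law; the saturation current and ideality factor below are arbitrary example values, not measured ones from these devices.

```python
import math

# Diode law for thermionic emission over a barrier:
#   I = I0 * (exp(qV / (n*k*T)) - 1)
# Forward and reverse currents at the same |V| differ by orders of magnitude,
# which is the rectifying behaviour the heterojunction is designed to give.
q, k, T = 1.602e-19, 1.381e-23, 300.0     # C, J/K, K
I0, n = 1e-9, 1.5                         # A, ideality factor (hypothetical)

def diode_current(v: float) -> float:
    return I0 * (math.exp(q * v / (n * k * T)) - 1.0)

ratio = abs(diode_current(0.5) / diode_current(-0.5))
print(f"forward/reverse current ratio at 0.5 V: {ratio:.2e}")
```

A current-voltage sweep of a real device would be compared against this shape to extract the effective barrier height and ideality factor.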

Relevance:

100.00%

Publisher:

Abstract:

Toponomastics is increasingly interested in the subjective role of place names in everyday life. Within urban geography, interest in this matter is currently growing, as recent changes in modes of habitation have urged our discipline to find new ways of exploring cities. In this context, the study of how names' significance is connected to an urban society constitutes a very interesting approach. We believe in the importance of place names as tools for decoding urban areas and societies at a local scale. This consideration has frequently been taken into account in the analysis of exonyms, although in their case they are not exempt from political and practical implications that prevail over the tool function. The study of toponomastic processes helps us understand how the city works, by analyzing the liaison between urban landscape, imaginaries and toponyms, which is reflected in the scarcity of some names, in the biased creation of new toponyms and in the pressure exercised on every place name by tourists, residents and local government to change, maintain or eliminate them. Our case study, Toledo, is one of the oldest cities in Spain, full of myths, stories and histories that can only be understood together with the processes of internal evolution of the city, linked to the arrival of new residents and the increasingly notorious change of its historical landscape. At a local scale, we aim to decode the information its toponyms contain about its landscape and its society.

Relevance:

100.00%

Publisher:

Abstract:

In the context of aerial imagery, one of the first steps toward coherent processing of the information contained in multiple images is geo-registration, which consists of assigning geographic 3D coordinates to the pixels of the image. This enables accurate alignment and geo-positioning of multiple images, detection of moving objects and fusion of data acquired from multiple sensors. Existing approaches to this problem require, in addition to a precise characterization of the camera sensor, high-resolution referenced images or terrain elevation models, which are usually not publicly available or are out of date. Building on the idea of developing technology that does not need a reference terrain elevation model, we propose a geo-registration technique that applies variational methods to obtain a dense and coherent surface elevation model used in place of the reference model. The surface elevation model is built by interpolation of scattered 3D points, which are obtained in a two-step process following a classical stereo pipeline: first, coherent disparity maps between image pairs of a video sequence are estimated, and then image point correspondences are back-projected. The proposed variational method enforces continuity of the disparity map not only along epipolar lines (as done by previous geo-registration techniques) but also across them, over the full 2D image domain. In the experiments, aerial images from synthetic video sequences were used to validate the proposed technique.
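The last step described above, turning scattered back-projected points into a dense surface elevation model, can be sketched with a simple inverse-distance-weighting interpolator standing in for the authors' variational method; the terrain samples below are synthetic.

```python
import numpy as np

# Scattered (x, y, elevation) samples standing in for back-projected
# stereo correspondences, resampled onto a regular grid.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 100.0, size=(400, 2))             # scattered (x, y), m
z = 0.05 * pts[:, 0] + 2.0 * np.sin(pts[:, 1] / 20.0)    # synthetic terrain, m

gx, gy = np.meshgrid(np.linspace(0, 100, 40), np.linspace(0, 100, 40))
grid = np.column_stack([gx.ravel(), gy.ravel()])

# Inverse-distance weighting: each grid cell is a weighted mean of samples.
d2 = ((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1) + 1e-9
w = 1.0 / d2
dem = ((w @ z) / w.sum(axis=1)).reshape(gx.shape)        # dense elevation model
print(f"dense model: {dem.shape}, elevation range "
      f"{dem.min():.2f} to {dem.max():.2f} m")
```

A variational interpolator additionally penalizes surface roughness, which is what yields the coherent (rather than merely dense) model the geo-registration needs.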

Relevance:

100.00%

Publisher:

Abstract:

Semi-arid soils cover a significant area of Earth's land surface and typically contain large amounts of inorganic C. Determining the effects of biochar additions on CO2 emissions from semi-arid soils is therefore essential for evaluating the potential of biochar as a climate change mitigation strategy. Here, we measured the CO2 evolved from semi-arid calcareous soils amended with biochar at rates of 0 and 20 t ha⁻¹ in a full factorial combination with three different fertilizers (mineral fertilizer, municipal solid waste compost, and sewage sludge) applied at four rates (equivalent to 0, 75, 150, and 225 kg potentially available N ha⁻¹) during 182 days of aerobic incubation. A double exponential model, which describes cumulative CO2 emissions from two active soil C compartments with different turnover rates (one relatively stable and the other more labile), was found to fit all the experimental datasets very well. In general, the organic fertilizers increased the size and decomposition rate of the stable and labile soil C pools. In contrast, biochar addition had no effect on any of the double exponential model parameters and did not interact with the effects ascribed to the type and rate of fertilizer. After 182 days of incubation, soil organic and microbial biomass C contents tended to increase with increasing application rates of organic fertilizer, especially of compost, whereas increasing the rate of mineral fertilizer tended to suppress microbial biomass. Biochar was found to increase both organic and inorganic C contents in soil and not to interact with the effects of type and rate of fertilizer on C fractions. As a whole, our results suggest that the use of biochar as an enhancer of semi-arid soils, either alone or combined with mineral and organic fertilizers, is unlikely to increase abiotic or biotic soil CO2 emissions.
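The double exponential model above, C(t) = C1·(1 − e^(−k1·t)) + C2·(1 − e^(−k2·t)), can be fitted as sketched below; the data are synthetic and the parameter values illustrative, not the paper's estimates. Since the pool sizes C1, C2 enter linearly, only the two rate constants need a search.

```python
import numpy as np

# Synthetic cumulative-CO2 data from a two-pool (stable + labile) model.
t = np.linspace(0.0, 182.0, 60)                          # incubation days
true = dict(C1=800.0, k1=0.004, C2=250.0, k2=0.09)       # made-up parameters
y = (true["C1"] * (1 - np.exp(-true["k1"] * t))
     + true["C2"] * (1 - np.exp(-true["k2"] * t)))
y = y + np.random.default_rng(1).normal(0.0, 5.0, t.size)  # measurement noise

# Grid-search the rate constants; solve the linear pool sizes by least squares.
best_sse, best = np.inf, None
for k1 in np.linspace(0.001, 0.01, 30):                  # slow (stable) pool
    for k2 in np.linspace(0.02, 0.2, 30):                # fast (labile) pool
        A = np.column_stack([1 - np.exp(-k1 * t), 1 - np.exp(-k2 * t)])
        c = np.linalg.lstsq(A, y, rcond=None)[0]
        sse = float(((A @ c - y) ** 2).sum())
        if sse < best_sse:
            best_sse, best = sse, (c[0], k1, c[1], k2)

C1, k1, C2, k2 = best
print(f"stable pool: {C1:.0f} (k1={k1:.4f}/d), labile pool: {C2:.0f} (k2={k2:.3f}/d)")
```

Comparing the fitted C1, C2, k1, k2 across treatments is exactly how the paper detects (or, for biochar, fails to detect) effects on the two carbon pools.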

Relevance:

100.00%

Publisher:

Abstract:

The arch has been known as a structural solution for centuries; indeed, the simple nature of the arch, requiring little tensile and shear strength, was an advantage when simple materials like stone and brick were the only options. With the passage of time, and especially after the industrial revolution, new materials were adopted in the construction of arch bridges to reach longer spans. Nowadays a long-span arch bridge is made of steel, concrete or a combination of the two ("CFST", concrete-filled steel tube); as a result of using these high-strength materials, very long spans can be achieved. The current record for the longest arch belongs to the Chaotianmen bridge over the Yangtze river in China, with a 552-meter steel span, while the longest reinforced concrete arch is the Wanxian bridge, which also crosses the Yangtze with a 420-meter span. Today the designer is no longer limited by span length as long as the arch bridge remains the most applicable solution among the alternatives; cable-stayed and suspension bridges are more reasonable when a very long span is desired. As with any large structure, the economic and architectural aspects of bridge construction are extremely important: a slenderer bridge not only has a better appearance but also requires a smaller volume of material, making the design more economical. The design of such bridges, besides high-strength materials, requires precise structural analysis approaches capable of integrating the combination of material behaviour, the complex geometry of the structure and the various types of loads that may be applied to the bridge during its service life. Depending on the design strategy, the analysis may evaluate only the linear elastic behaviour of the structure or consider its nonlinear properties as well.
Although most structures in the past were designed to act in their elastic range, the rapid increase in computational capacity allows us to consider different sources of nonlinearity in order to achieve more realistic evaluations where the dynamic behaviour of the bridge is important, especially in seismic zones where large movements may occur or the structure experiences P-Δ effects during an earthquake. This type of analysis is computationally expensive and very time consuming. In recent years, several methods have been proposed to address this problem. Discussion of recent developments in these methods and their application to long-span concrete arch bridges is the main goal of this research. Accordingly, existing long-span concrete arch bridges have been studied to gather critical information about their geometrical aspects and material properties. Based on this information, several concrete arch bridges, with main spans ranging from 100 to 400 meters, were designed for further study. The structural analysis methods implemented in this study are the following. Elastic analysis: Direct Response History Analysis (DRHA): this method solves the equation of motion directly over the time history of the applied acceleration or imposed load in the linear elastic range. Modal Response History Analysis (MRHA): similar to DRHA, this method is also based on the time history, but the equation of motion is reduced to single-degree-of-freedom systems and the response of each mode is calculated independently; this analysis requires less time than DRHA. Modal Response Spectrum Analysis (MRSA): as its name indicates, this method calculates the peak response of the structure for each mode and combines them using modal combination rules based on the given ground motion spectra. This method is expected to be the fastest among the elastic analyses.
Inelastic analysis: Nonlinear Response History Analysis (NL-RHA): the most accurate strategy to address significant nonlinearities in structural dynamics is undoubtedly the nonlinear response history analysis, which is similar to DRHA but extended to the inelastic range by updating the stiffness matrix at every iteration. This onerous task clearly increases the computational cost, especially for unsymmetrical structures that must be analyzed in a full 3D model to take torsional effects into consideration. Modal Pushover Analysis (MPA): the Modal Pushover Analysis is basically MRHA extended to the inelastic stage. MRHA alone cannot solve the dynamic system because the resisting force fs(u, u̇) is unknown in the inelastic stage; MPA overcomes this obstacle by using the previously recorded fs to evaluate the dynamic system. Extended Modal Pushover Analysis (EMPA): one of the most recently proposed methods, it evaluates the response of the structure under multi-directional excitation using the modal pushover analysis strategy. In one specific mode, the original pushover neglects the contributions of directions other than the characteristic one; this is reasonable in a regular symmetric building, but a structure with a complex shape, like a long-span arch bridge, may undergo strong modal coupling. This method intends to consider modal coupling while taking the same computation time as MPA. Coupled Nonlinear Static Pushover Analysis (CNSP): EMPA includes the contribution of the non-characteristic direction in the formal MPA procedure.
However, the static pushovers in EMPA are performed individually for every mode, so the resulting values from different modes can be combined, but this is only valid in the elastic phase: as soon as any element of the structure starts yielding, the neutral axis of that section is no longer fixed for both responses during the earthquake, meaning the longitudinal deflection unavoidably affects the transverse one, or vice versa. To overcome this drawback, CNSP suggests executing the pushover analysis for the governing modes of each direction at the same time. This strategy is expected to be more accurate than MPA and EMPA; moreover, the calculation time is reduced because only one pushover analysis is required. Regardless of the strategy, the accuracy of structural analysis is highly dependent on the modelling and numerical integration approaches used in each method; therefore the widely used finite element method is employed in all the analyses performed in this research. Chapter 2 starts with the gathered information about constructed long-span arch bridges and continues with the geometrical and material definition of the new models. Chapter 3 provides detailed information about the structural analysis strategies; furthermore, a step-by-step description of the procedure of each method is available in Appendix A. The document ends with the description of results and the conclusions in Chapter 4.
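As a small concrete example of the modal combination step in MRSA described above, the SRSS rule combines per-mode peak responses (CQC would replace it when modes are closely spaced); the modal values below are made up.

```python
import numpy as np

# Per-mode peak displacements from a response-spectrum analysis (made up).
peak_modal_disp = np.array([0.120, 0.045, 0.018])   # m

# SRSS (square root of the sum of squares) modal combination.
srss = float(np.sqrt((peak_modal_disp ** 2).sum()))
print(f"SRSS combined peak displacement: {srss:.3f} m")
```

Because modal peaks do not occur simultaneously, SRSS gives a smaller (and statistically more realistic) estimate than the simple sum of the peaks.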

Relevance:

100.00%

Publisher:

Abstract:

Pattern recognition and classification theory and machine learning are currently areas of knowledge under constant development, with practical applications in many sectors of industry. The aim of this Final Degree Project is to study them and to implement a software system that solves an impulsive noise classification problem, specifically through the development of a security system based on real-time classification of sound events. The solution is end-to-end, covering every stage of the process, from sound capture to the labelling of the recorded events, including digital signal processing and feature extraction. The development comprises two main parts. The first covers the user interface and the audio signal processing module, where the monitoring and impulsive noise detection tasks take place. The second focuses solely on classifying the detected sound events, defining a double classifier architecture that determines whether detected events are false alarms or threats, labelling them with a specific type in the latter case. The results have been satisfactory, showing an overall reliability of around 90% despite some limitations in building the audio file database, which proves that a security device based on the analysis of ambient noise could be included in an integral home alarm system, increasing home protection.
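The detection front end described above can be sketched as short-time energy with an adaptive threshold; frames flagged here would then be handed to the double classifier. The signal, frame size and threshold rule are illustrative assumptions, not the project's actual parameters.

```python
import numpy as np

# Synthetic test signal: 2 s of background noise with one injected burst.
fs = 8000
rng = np.random.default_rng(2)
signal = rng.normal(0.0, 0.01, fs * 2)               # background noise
signal[9000:9080] += rng.normal(0.0, 0.5, 80)        # impulsive event

# Short-time energy over 20 ms frames.
frame = 160
n_frames = signal.size // frame
energy = (signal[: n_frames * frame].reshape(n_frames, frame) ** 2).sum(axis=1)

# Adaptive threshold: frames far above the mean energy are impulsive events.
threshold = energy.mean() + 5.0 * energy.std()
events = np.flatnonzero(energy > threshold)
print(f"impulsive frames detected: {events.tolist()}")
```

Each flagged frame would then be described by extracted features (e.g. spectral shape, duration) and passed through the false-alarm/threat classifier stages.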

Relevance:

100.00%

Publisher:

Abstract:

This dissertation reveals the modernist origins of approaching the architectural project through organizational methods. These procedures, true to the poetics that back them, establish principles that precede and constitute the basis of the method; these principles are technical, functional and social. A mapping of the principles proposed by architects and architecture theorists provides the research medium of the dissertation: architects' books. The intellectualization and conceptualization of architecture during the 20th century, fostered by the association of architects, historians and critics in meetings and debates, encouraged the appearance of texts in which the architectural project is contextualized in its surroundings. In this way, the resolution of a specific project through the choice among diverse contingent possibilities is set aside, establishing instead that the act of designing constitutes an abstract problem. This stance modifies the resolution of the architectural project, which is now undertaken as a particular case to be solved according to the proposed principles and methods. Architects' books prove to be the privileged medium for presenting these principles and organizational methods, positioning them in the cultural and social environment. The principles of technique, function and city that had fascinated architects since the 1920s underwent a crisis between the end of World War II and the 1973 oil crisis. From the 1970s onwards they lost their validity and no longer dazzled; they were relegated to one more principle that affects the architectural project but does not determine it. This displacement, instead of weakening them, made them appear in their full creative power. The tools that make these principles explicit, such as serial production, modulation, change of scale, hierarchical or adaptable organizational methods, taxonomies, diagrams and narratives, lose their charge of novelty and certainty as well as their metaphorical power, reaching the present day converted into a conceptual structure upon which architectural projects are organized.

Relevance:

50.00%

Publisher:

Abstract:

The 8-dimensional Luttinger–Kohn–Pikus–Bir Hamiltonian matrix may be made up of four 4-dimensional blocks. A 4-band Hamiltonian is presented, obtained by setting the off-diagonal blocks to zero. The parameters of the new Hamiltonian are adjusted to fit the calculated effective masses and strained QD bandgap to the measured ones. The 4-dimensional Hamiltonian thus obtained agrees well with the measured quantum efficiency of a quantum dot intermediate band solar cell, and the full absorption spectrum can be calculated in about two hours using Mathematica® on a notebook computer. This is a hundred times faster than with the commonly used 8-band Hamiltonian and is considered suitable for helping design engineers in the development of nanostructured solar cells.
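The block-diagonal approximation above can be illustrated numerically: zero the off-diagonal 4×4 blocks of an 8×8 Hermitian matrix and solve the two 4×4 eigenproblems separately. The matrix below is random, not a real Luttinger–Kohn–Pikus–Bir Hamiltonian, so the error printed is merely illustrative of how the approximation is assessed.

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.normal(size=(8, 8))
H8 = (M + M.T) / 2.0                      # random real symmetric 8x8 "Hamiltonian"

# 4-band approximation: drop the off-diagonal 4x4 blocks and diagonalize
# the two 4x4 diagonal blocks independently (much cheaper per k-point).
e_upper = np.linalg.eigvalsh(H8[:4, :4])
e_lower = np.linalg.eigvalsh(H8[4:, 4:])
approx = np.sort(np.concatenate([e_upper, e_lower]))

e_full = np.linalg.eigvalsh(H8)           # exact 8-band reference
print("max |E_4band - E_8band| =", f"{np.abs(approx - e_full).max():.3f}")
```

In the paper's setting, the fitting of the 4-band parameters to measured effective masses and bandgaps is what compensates for the discarded inter-block coupling.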