903 results for game design techniques


Relevance: 30.00%

Publisher:

Abstract:

The influence of respiratory motion on patient anatomy poses a challenge to accurate radiation therapy, especially in lung cancer treatment. Modern radiation therapy planning uses models of tumor respiratory motion to account for target motion during targeting. The tumor motion model can be verified on a per-treatment-session basis with four-dimensional cone-beam computed tomography (4D-CBCT), which acquires an image set of the dynamic target throughout the respiratory cycle during the therapy session. 4D-CBCT is undersampled if the scan time is too short; however, a short scan time is desirable in clinical practice to reduce patient setup time. This dissertation presents the design and optimization of 4D-CBCT to reduce the impact of undersampling artifacts at short scan times. This work measures the impact of undersampling artifacts on the accuracy of target motion measurement under different sampling conditions and for various object sizes and motions. The results provide a minimum scan time such that the target tracking error is less than a specified tolerance. This work also presents new image reconstruction algorithms for reducing undersampling artifacts in undersampled datasets by taking advantage of the assumption that the relevant motion of interest is contained within a volume of interest (VOI). It is shown that the VOI-based reconstruction provides more accurate image intensity than standard reconstruction. In a study designed to simulate target motion, the VOI-based reconstruction produced 43% lower least-squares error inside the VOI and 84% lower error throughout the image. The VOI-based reconstruction approach can reduce acquisition time and improve image quality in 4D-CBCT.
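To make the volume-of-interest idea concrete, the sketch below sets up a generic undersampled linear system as a stand-in for the 4D-CBCT projection model and solves a least-squares reconstruction in which only voxels inside an assumed VOI are updated, the rest being pinned to a prior image. Matrix sizes, the VOI mask and the prior are illustrative assumptions, not the dissertation's algorithm.

```python
import numpy as np

# Minimal sketch of a VOI-constrained least-squares reconstruction.
# A generic linear forward model b = A @ x stands in for the 4D-CBCT
# projection operator; names and sizes are illustrative only.
rng = np.random.default_rng(0)
n_vox, n_meas = 400, 120          # undersampled: fewer measurements than voxels
A = rng.normal(size=(n_meas, n_vox))
x_true = rng.normal(size=n_vox)
b = A @ x_true

# Assume the motion of interest is confined to a volume of interest (VOI):
voi = np.zeros(n_vox, dtype=bool)
voi[:80] = True                   # hypothetical VOI voxels

# Outside the VOI the image is fixed to a prior (e.g. a static reference scan);
# only VOI voxels are updated, which reduces the number of unknowns.
x_prior = x_true + 0.05 * rng.normal(size=n_vox)   # stand-in for a prior image
x = x_prior.copy()

step = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(500):
    grad = A.T @ (A @ x - b)      # gradient of 0.5 * ||A x - b||^2
    x[voi] -= step * grad[voi]    # update restricted to the VOI

err_voi = np.sum((x[voi] - x_true[voi]) ** 2)
print("least-squares error inside the VOI:", err_voi)
```

Holding the exterior at a prior image reduces the number of unknowns the undersampled projections must determine, which is one way to see why the error drops inside the VOI.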

Relevance: 30.00%

Publisher:

Abstract:

Essential biological processes are governed by organized, dynamic interactions between multiple biomolecular systems. Complexes are thus formed to enable the biological function and are disassembled as the process is completed. Examples of such processes include the translation of messenger RNA into protein by the ribosome, the folding of proteins by chaperonins, or the entry of viruses into host cells. Understanding these fundamental processes by characterizing the molecular mechanisms that enable them would allow the better design of therapies and drugs. Such molecular mechanisms may be revealed through the structural elucidation of the biomolecular assemblies at the core of these processes. Various experimental techniques may be applied to investigate the molecular architecture of biomolecular assemblies. High-resolution techniques, such as X-ray crystallography, may solve the atomic structure of the system, but are typically constrained to biomolecules of reduced flexibility and dimensions. In particular, X-ray crystallography requires the sample to form a three-dimensional (3D) crystal lattice, which is technically difficult, if not impossible, to obtain, especially for large, dynamic systems. Often these techniques solve the structure of the different constituent components within the assembly, but encounter difficulties when investigating the entire system. On the other hand, imaging techniques, such as cryo-electron microscopy (cryo-EM), are able to depict large systems in a near-native environment, without requiring the formation of crystals. The structures solved by cryo-EM cover a wide range of resolutions, from very low levels of detail where only the overall shape of the system is visible, to high resolutions that approach, but do not yet reach, atomic level of detail. In this dissertation, several modeling methods are introduced to either integrate cryo-EM datasets with structural data from X-ray crystallography, or to directly interpret the cryo-EM reconstruction. These computational techniques were developed with the goal of creating an atomic model for the cryo-EM data. Low-resolution reconstructions lack the level of detail to permit a direct atomic interpretation, i.e., one cannot reliably locate the atoms or amino-acid residues within the structure obtained by cryo-EM. Therefore, one needs to consider additional information, for example structural data from other sources such as X-ray crystallography, in order to enable such a high-resolution interpretation. Modeling techniques are thus developed to integrate the structural data from the different biophysical sources; examples include the work described in manuscripts I and II of this dissertation. At intermediate and high resolution, cryo-EM reconstructions depict consistent 3D folds, such as tubular features which in general correspond to alpha-helices. Such features can be annotated and later used to build the atomic model of the system, as in manuscript III. Three manuscripts are presented as part of this PhD dissertation, each introducing a computational technique that facilitates the interpretation of cryo-EM reconstructions. The first manuscript is an application paper that describes a heuristic to generate the atomic model for the protein envelope of the Rift Valley fever virus. The second manuscript introduces evolutionary tabu search strategies to enable the integration of multiple component atomic structures with the cryo-EM map of their assembly.
Finally, the third manuscript develops the latter technique further and applies it to annotate consistent 3D patterns in intermediate-resolution cryo-EM reconstructions. The first manuscript, titled "An assembly model for Rift Valley fever virus", was submitted for publication in the Journal of Molecular Biology. The cryo-EM structure of the Rift Valley fever virus was previously solved at 27 Å resolution by Dr. Freiberg and collaborators. This reconstruction shows the overall shape of the virus envelope, yet the reduced level of detail prevents a direct atomic interpretation. High-resolution structures are not yet available for the entire virus nor for the two different component glycoproteins that form its envelope. However, homology models may be generated for these glycoproteins based on similar structures that are available at atomic resolution. The manuscript presents the steps required to identify an atomic model of the entire virus envelope, based on the low-resolution cryo-EM map of the envelope and the homology models of the two glycoproteins. Starting with the results of an exhaustive search to place the two glycoproteins, the model is built iteratively by running multiple multi-body refinements to hierarchically generate models for the different regions of the envelope. The generated atomic model is supported by prior knowledge of virus biology and contains valuable information about the molecular architecture of the system. It provides the basis for further investigations seeking to reveal different processes in which the virus is involved, such as assembly or fusion. The second manuscript was recently published in the Journal of Structural Biology (doi:10.1016/j.jsb.2009.12.028) under the title "Evolutionary tabu search strategies for the simultaneous registration of multiple atomic structures in cryo-EM reconstructions". This manuscript introduces the evolutionary tabu search strategies applied to enable a multi-body registration. The technique is a hybrid approach that combines a genetic algorithm with a tabu search strategy to promote the proper exploration of the high-dimensional search space. As with the Rift Valley fever virus, it is common that the structure of a large multi-component assembly is available at low resolution from cryo-EM, while high-resolution structures are solved for the different components but are lacking for the entire system. Evolutionary tabu search strategies enable the building of an atomic model for the entire system by considering the different components simultaneously. Such a registration indirectly introduces spatial constraints, as all components need to be placed within the assembly, enabling them to be properly docked into the low-resolution map of the entire assembly. Along with the method description, the manuscript covers the validation, presenting the benefit of the technique in both synthetic and experimental test cases. The approach successfully docked multiple components at resolutions up to 40 Å. The third manuscript is entitled "Evolutionary Bidirectional Expansion for the Annotation of Alpha Helices in Electron Cryo-Microscopy Reconstructions" and was submitted for publication in the Journal of Structural Biology. The modeling approach described in this manuscript applies the evolutionary tabu search strategies in combination with a bidirectional expansion to annotate secondary structure elements in intermediate-resolution cryo-EM reconstructions.
In particular, secondary structure elements such as alpha helices show consistent patterns in cryo-EM data, being visible as rod-like regions of high density. The evolutionary tabu search strategy is applied to identify the placement of the different alpha helices, while the bidirectional expansion characterizes their length and curvature. The manuscript presents the validation of the approach at resolutions ranging between 6 and 14 Å, a level of detail where alpha helices are visible. Up to a resolution of 12 Å, the method achieves sensitivities between 70% and 100% as estimated in experimental test cases, i.e., 70-100% of the alpha helices were correctly predicted in an automatic manner in the experimental data. The three manuscripts presented in this PhD dissertation cover different computational methods for the integration and interpretation of cryo-EM reconstructions. The methods were developed in the molecular modeling software Sculptor (http://sculptor.biomachina.org) and are available to the scientific community interested in the multi-resolution modeling of cryo-EM data. The work spans a wide range of resolutions, covering multi-body refinement and registration at low resolution along with the annotation of consistent patterns at high resolution. Such methods are essential for the modeling of cryo-EM data, and may be applied in other fields where similar spatial problems are encountered, such as medical imaging.
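As a rough illustration of the hybrid search described above, the sketch below combines a genetic algorithm with a tabu list to place two rigid "components" into a synthetic 2D map. The Gaussian components, the least-squares map-similarity score and all parameters are stand-ins chosen for brevity; the published method works on 3D cryo-EM densities with full rigid-body transformations.

```python
import numpy as np

# Minimal sketch of a hybrid genetic-algorithm / tabu-search registration,
# in the spirit of multi-body docking into a low-resolution map.
rng = np.random.default_rng(1)
grid = np.stack(np.meshgrid(np.arange(64), np.arange(64)), axis=-1).astype(float)

def component_density(center, sigma=4.0):
    # Isotropic Gaussian blob standing in for a component's simulated density.
    return np.exp(-np.sum((grid - center) ** 2, axis=-1) / (2 * sigma ** 2))

true_centers = np.array([[20.0, 30.0], [44.0, 26.0]])
target_map = sum(component_density(c) for c in true_centers)   # synthetic "assembly map"

def fitness(genome):
    # Higher is better: negative squared deviation between model and map.
    centers = genome.reshape(2, 2)
    model = sum(component_density(c) for c in centers)
    return -float(np.sum((model - target_map) ** 2))

def tabu_key(genome):
    return tuple(np.round(genome).astype(int))      # coarse key marking a visited region

population = [rng.uniform(0, 64, size=4) for _ in range(30)]
tabu = set()
for _ in range(150):
    parents = sorted(population, key=fitness, reverse=True)[:10]
    children = []
    while len(children) < 20:
        child = parents[rng.integers(10)] + rng.normal(scale=2.0, size=4)
        child = np.clip(child, 0.0, 63.0)
        if tabu_key(child) not in tabu:              # tabu list: skip revisited regions
            tabu.add(tabu_key(child))
            children.append(child)
    population = parents + children

best = max(population, key=fitness).reshape(2, 2)
print("recovered centers:\n", best[np.argsort(best[:, 0])])
print("true centers:\n", true_centers[np.argsort(true_centers[:, 0])])
```

The tabu list here simply discourages re-sampling already-visited regions of the placement space; the published strategies combine this with more elaborate genetic operators.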

Relevance: 30.00%

Publisher:

Abstract:

The purpose of this study was to design, synthesize and develop novel transporter-targeting agents for image-guided therapy and drug delivery. Two novel agents, N4-guanine (N4amG) and glycopeptide (GP), were synthesized for tumor cell proliferation assessment and as a cancer theranostic platform, respectively. N4amG and GP were synthesized and radiolabeled with 99mTc and 68Ga. The chemical and radiochemical purities as well as the radiochemical stabilities of radiolabeled N4amG and GP were tested. In vitro stability assessment showed both 99mTc-N4amG and 99mTc-GP were stable up to 6 hours, whereas 68Ga-GP was stable up to 2 hours. Cell culture studies confirmed radiolabeled N4amG and GP could penetrate the cell membrane through nucleoside transporters and amino acid transporters, respectively. Up to 40% of intracellular 99mTc-N4amG and 99mTc-GP was found within the cell nucleus following 2 hours of incubation. Flow cytometry analysis revealed 99mTc-N4amG was a cell cycle S-phase-specific agent. There was a significant difference in the uptake of 99mTc-GP between pre- and post-paclitaxel-treated cells, which suggests that 99mTc-GP may be useful in chemotherapy treatment monitoring. Moreover, radiolabeled N4amG and GP were tested in vivo using tumor-bearing animal models. 99mTc-N4amG showed an increase in tumor-to-muscle count density ratios of up to 5 at 4-hour imaging. Both 99mTc-labeled agents showed decreased tumor uptake after paclitaxel treatment. Immunohistochemistry analysis demonstrated that the uptake of 99mTc-N4amG was correlated with Ki-67 expression. Both 99mTc-N4amG and 99mTc-GP could differentiate between tumor and inflammation in animal studies. Furthermore, 68Ga-GP was compared to 18F-FDG in rabbit PET imaging studies. 68Ga-GP had lower tumor standardized uptake values (SUV), but similar uptake dynamics and a different biodistribution compared with 18F-FDG. Finally, to demonstrate that GP can be a potential drug carrier for cancer theranostics, several drugs, including doxorubicin, were selected to be conjugated to GP. Imaging studies demonstrated that tumor uptake of GP-drug conjugates increased as a function of time. GP-doxorubicin (GP-DOX) showed a slow-release pattern in an in vitro cytotoxicity assay and exhibited anti-cancer efficacy with reduced toxicity in an in vivo tumor growth delay study. In conclusion, both N4amG and GP are transporter-based targeting agents. Radiolabeled N4amG can be used for tumor cell proliferation assessment. GP is a potential agent for image-guided therapy and drug delivery.

Relevance: 30.00%

Publisher:

Abstract:

This Doctoral Thesis, entitled "Contribution to the analysis, design and assessment of compact antenna test ranges at millimeter wavelengths", aims to deepen the knowledge of a particular antenna measurement system: the compact range, operating in the millimeter-wavelength frequency bands. The thesis has been developed at the Radiation Group (GR), an antenna laboratory belonging to the Signals, Systems and Radiocommunications department (SSR) of the Technical University of Madrid (UPM). The Radiation Group has extensive experience in antenna measurements, currently running four facilities which operate in different configurations: a Gregorian compact antenna test range, spherical near field, planar near field and a semi-anechoic arch system. The research work performed in line with this thesis contributes to the knowledge of the first measurement configuration at higher frequencies, beyond the microwave region where the Radiation Group offers customer-level performance. To reach this high-level purpose, a set of scientific tasks was carried out sequentially; they are succinctly described in the following paragraphs. The first step dealt with the review of the state of the art. The study of the scientific literature covered the analysis of measurement practices in compact antenna test ranges together with the particularities of millimeter-wavelength technologies. The joint study of both fields of knowledge converged, where such measurement facilities are of interest, on a series of technological challenges which become serious bottlenecks at different stages: analysis, design and assessment. Second, after this overview study, the focus was set on electromagnetic analysis algorithms. These formulations make it possible to compute certain electromagnetic features of interest, such as the phase of the field distribution or the stray signal behaviour of particular structures, when they interact with electromagnetic wave sources. Properly operated, a CATR facility features electromagnetic wave collimation optics which are large in terms of wavelengths. Accordingly, the electromagnetic analysis tasks introduce a large number of mathematical unknowns which grow with frequency, following different polynomial-order laws depending on the algorithm used. In particular, the optics configuration of interest here is the reflection-type serrated-edge collimator. The analysis of these devices requires flexible handling of almost arbitrary scattering geometries, this flexibility being the core of the algorithm's ability to support the subsequent design tasks. This thesis' contribution to this field of knowledge consists of a formulation which is powerful both in dealing with various analysis geometries and in computational terms. Two algorithms were developed. While based on the same hybridization principle, they reach different orders of physical accuracy at different computational cost. An inter-comparison of their CATR design capabilities was performed, reaching both qualitative and quantitative conclusions on their scope. Third, interest shifted from analysis and design tasks towards range assessment. Millimeter wavelengths imply strict mechanical tolerances and fine setup adjustment. In addition, the large number of unknowns already faced in the analysis stage appears as well in the in-chamber field-probing stage.
The natural decrease of the dynamic range available from semiconductor millimeter-wave sources additionally requires longer integration times at each probing point. These peculiarities increase exponentially the difficulty of performing assessment processes in CATR facilities beyond microwaves. The bottleneck becomes so tight that it compromises range characterization beyond a certain limit frequency, which typically lies in the lowest segment of the millimeter-wavelength band, whereas the value of range assessment moves, on the contrary, towards the highest segment. This thesis contributes to this technological scenario by developing quiet-zone probing techniques which achieve substantial data reduction ratios. Collaterally, they increase the robustness of the results to noise, which amounts to a virtual rise of the setup's available dynamic range. Fourth, the environmental sensitivity of millimeter wavelengths was addressed. The drift of electromagnetic experiments due to the dependence of the results on the surrounding environment is well known. At millimeter wavelengths, this feature relegates many industrial practices that are routine at microwave frequencies to the experimental stage. In particular, the evolution of the atmosphere, even within acceptable conditioning bounds, results in drift phenomena which completely mask the experimental results. The contribution of this thesis in this respect consists of electrically modeling the indoor atmosphere of a CATR as a function of the environmental variables which affect the range's performance. A simple model was developed, able to relate high-level phenomena, such as feed-probe phase drift, to low-level magnitudes that are easy to sample: relative humidity and temperature. With this model, environmental compensation can be performed and chamber conditioning is effectively extended towards higher frequencies. In summary, the purpose of this thesis is to deepen the knowledge of millimeter-wavelength compact antenna test ranges. This knowledge is distributed across the sequential stages of a CATR's conception, from early low-level electromagnetic analysis to the assessment of an operative facility, stages at each of which bottlenecks currently exist that seriously compromise antenna measurement practices at millimeter wavelengths.
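As an illustration of the kind of environmental compensation described above, the sketch below fits a simple linear model of feed-probe phase drift against relative humidity and temperature and then subtracts the fitted drift. The linear form, the coefficients and the synthetic data are assumptions for illustration; the thesis' electrical model of the chamber atmosphere is not reproduced here.

```python
import numpy as np

# Minimal sketch of environmental compensation: a linear model relates
# feed-probe phase drift to relative humidity and temperature, and the
# fitted model is used to de-drift the data.  All values are synthetic.
rng = np.random.default_rng(2)
n = 200
humidity = 40 + 10 * rng.random(n)        # percent RH, sampled during the test
temperature = 21 + 1.5 * rng.random(n)    # degrees C

# Synthetic "measured" phase drift (degrees) with an assumed linear dependence.
true_coeffs = np.array([0.8, 3.5, -12.0])                # [per %RH, per degC, offset]
phase_drift = (true_coeffs[0] * humidity + true_coeffs[1] * temperature
               + true_coeffs[2] + 0.2 * rng.normal(size=n))

# Fit the low-level model: drift ~ a*RH + b*T + c
X = np.column_stack([humidity, temperature, np.ones(n)])
coeffs, *_ = np.linalg.lstsq(X, phase_drift, rcond=None)

compensated = phase_drift - X @ coeffs
print("fitted [a, b, c]:", np.round(coeffs, 3))
print("residual drift std (deg):", np.round(compensated.std(), 3))
```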

Relevance: 30.00%

Publisher:

Abstract:

Some requirements of engineering programmes, such as an ability to use the techniques, skills and modern engineering tools necessary for engineering practice, as well as an understanding of professional and ethical responsibility or an ability to communicate effectively, need new activities designed for measuring students' progress. Negotiations take place continuously at every stage of a project, so the ability of engineers and managers to carry out a negotiation effectively is crucial for the success or failure of projects and businesses. Since negotiation involves communication between individuals motivated to come together in an agreement for mutual benefit, it can be used to enhance these personal abilities. The main objective of this study was to evaluate the adequacy of mixing playing sessions and theory to maximise the students' strategic vision in combination with negotiating skills. Results show that combining play with theoretical training teaches students to strategise through the analysis and discussion of alternatives, leading to better-optimised outcomes.

Relevance: 30.00%

Publisher:

Abstract:

The construction of a Gothic vault implied the solution of several technical challenges. The literature on Gothic vault construction is quite large and its growth continues steadily. The main challenge of any structure is that, during and after construction, it must be "safe", that is, it must not collapse. Indeed, it must be amply safe, able to support different loads for long periods of time. Masonry architecture has shown its structural safety for centuries or millennia. The Pantheon of Rome stands today after almost 2,000 years without having needed any structural reinforcement (of course, the survival of any building implies continuous maintenance). Hagia Sophia in Istanbul, finished in the 6th century AD, has withstood not only the dead loads but also many severe earthquakes. Finally, the Gothic cathedrals, with their appearance of weakness, are more than half a millennium old. The question arises of what the source of this amazing strength is and how the illiterate master masons were able to design such daring and safe structures. This question is usually evaded in manuals of Gothic architecture. This is quite surprising, the structure being a fundamental part of Gothic buildings. The present article aims to give such an explanation, which has been studied in detail elsewhere. In the first part, the Gothic design methods will be discussed. In the second part, the validity of these methods will be verified within the frame of the modern theory of masonry structures. References have been reduced to a minimum to make the text simpler and more direct.

Relevance: 30.00%

Publisher:

Abstract:

Embedded systems are commonly designed by specifying and developing the hardware and software systems separately. By contrast, hardware/software (HW/SW) co-development exploits the trade-offs between hardware and software in a system through their concurrent design. HW/SW co-development techniques take advantage of the flexibility of system design to create architectures that can meet stringent performance requirements with a shorter design cycle. This paper presents the work done within the scope of the ESA HWSWCO (Hardware-Software Co-design) study. The main objective of this study has been to address the HW/SW co-design phase in order to integrate this engineering task as part of the ASSERT process (refer to [1]), compatible with the existing ASSERT approach, process and tools. Advances in the automation of HW and SW design and the adoption of the Model Driven Architecture (MDA) [9] paradigm make possible the definition of a proper integration substrate and enable the continuous interaction of the HW and SW design paths.

Relevance: 30.00%

Publisher:

Abstract:

Overhead rigid conductor arrangements for current collection in railway traction have some advantages compared to other, more conventional, energy supply systems. They are simple, robust and easily maintained, not to mention their flexibility as to the required installation height, which makes them particularly suitable for use in subway infrastructures. Nevertheless, due to the increasing speeds of new vehicles running on modern subway lines, a more efficient design is required for this kind of system. In this paper, the authors present a dynamic analysis of overhead conductor rail systems focused on the design of a new conductor profile with a dynamic behaviour superior to that of the system currently in use. This means that either the running speed, which at present does not exceed 110 km/h, can be increased, or the distance between the rigid catenary supports can be increased, with the ensuing saving in installation costs. This study has been carried out using simulation techniques. The ANSYS programme has been used for the finite element modelling and the SIMPACK programme for the elastic multibody systems analysis.
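A toy lumped-parameter version of the kind of dynamic analysis described above is sketched below: the rigid conductor rail is idealized as a vertical spring whose stiffness varies periodically between supports, and the pantograph head as a single damped mass with a constant uplift force, so the contact force can be tracked along a run. All parameter values are illustrative guesses; the paper's ANSYS/SIMPACK models are far more detailed.

```python
import numpy as np

# Toy lumped-parameter sketch of pantograph / rigid-catenary interaction.
m, c, F0 = 8.0, 60.0, 120.0          # head mass [kg], damping [Ns/m], uplift [N]
k0, var, span = 6.0e4, 0.3, 10.0     # mean stiffness [N/m], variation, support spacing [m]
v = 110 / 3.6                        # running speed [m/s]

def stiffness(x):
    # Rail vertical stiffness varying periodically with the support spacing.
    return k0 * (1.0 + var * np.cos(2 * np.pi * x / span))

dt, T = 1e-4, 8.0
steps = int(T / dt)
z, zdot = F0 / k0, 0.0               # start near static equilibrium
contact_force = np.empty(steps)
for i in range(steps):
    k = stiffness(v * i * dt)
    zddot = (F0 - c * zdot - k * z) / m
    zdot += zddot * dt
    z += zdot * dt
    contact_force[i] = k * z         # force transmitted through the contact

print("mean contact force [N]:", round(contact_force.mean(), 1))
print("std of contact force [N]:", round(contact_force.std(), 1))
```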

Relevance: 30.00%

Publisher:

Abstract:

Overhead rail current collector systems for railway traction offer certain features, such as low installation height and reduced maintenance, which make them particularly suitable for use in underground train infrastructures. Due to the increased demands on modern catenary systems and the higher running speeds of new vehicles, a more capable design of the conductor rail is needed. A new overhead conductor rail has been developed and its design has been patented [13]. Modern simulation and modelling techniques were used in the development approach. The new conductor rail profile has a dynamic behaviour superior to that of the system currently in use. Its innovative design permits either an increase of catenary support spacing or a higher vehicle running speed. Both options ensure savings in installation or operating costs. The simulation model used to optimise the existing conductor rail profile included both a finite element model of the catenary and a three-dimensional multi-body system model of the pantograph. The contact force that appears between pantograph and catenary was obtained in simulation. A sensitivity analysis of the key parameters that influence catenary dynamics was carried out, finally leading to the improved design.
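Building on the same idealization as in the previous sketch, the one below wraps a toy pantograph/rail model in a function and performs a one-at-a-time sensitivity sweep of support spacing and running speed, using the standard deviation of the contact force as the figure of merit. Parameter values and ranges are assumptions; this only illustrates the structure of a sensitivity study, not the paper's analysis.

```python
import numpy as np

# One-at-a-time sensitivity study on a toy conductor-rail model: the contact
# force variability is tracked while one design parameter is swept.
def contact_force_std(span=10.0, k0=6.0e4, var=0.3, speed_kmh=110.0,
                      m=8.0, c=60.0, F0=120.0, dt=1e-4, T=8.0):
    v = speed_kmh / 3.6
    z, zdot = F0 / k0, 0.0
    forces = []
    for i in range(int(T / dt)):
        k = k0 * (1.0 + var * np.cos(2 * np.pi * v * i * dt / span))
        zdot += (F0 - c * zdot - k * z) / m * dt
        z += zdot * dt
        forces.append(k * z)
    return float(np.std(forces))

print("baseline force std:", round(contact_force_std(), 2), "N")
for span in (8.0, 10.0, 12.0):            # support spacing sweep [m]
    print(f"span {span:4.1f} m -> force std {contact_force_std(span=span):7.2f} N")
for speed in (80.0, 110.0, 140.0):        # running speed sweep [km/h]
    print(f"speed {speed:5.1f} km/h -> force std "
          f"{contact_force_std(speed_kmh=speed):7.2f} N")
```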

Relevance: 30.00%

Publisher:

Abstract:

The development of a novel HCPV nonimaging concentrator with high concentration (>500x) and a built-in spectrum-splitting concept is presented. It uses the combination of a commercial concentration GaInP/GaInAs/Ge 3J cell and a concentration Back-Point-Contact (BPC) silicon cell for efficient spectral utilization, and external confinement techniques for recovering the 3J cell's reflection. The primary optical element (POE) is a flat Fresnel lens and the secondary optical element (SOE) is a free-form RXI-type concentrator with a band-pass filter embedded in it, both the POE and SOE performing Köhler integration to produce light homogenization on the receiver. The band-pass filter transmits the IR photons in the 900-1200 nm band to the silicon cell. A design target of an "equivalent" cell efficiency of ~46% is predicted using commercial 39% 3J and 26% Si cells. A projected CPV module efficiency of greater than 38% is achievable at a concentration level larger than 500x with a wide acceptance angle of ±1°. A first proof-of-concept receiver prototype has been manufactured using a simpler optical architecture (with a lower concentration of ~100x and lower simulated added efficiency), and experimental measurements have shown up to 39.8% 4J receiver efficiency using a 3J cell with a peak efficiency of 36.9%.
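A back-of-envelope version of the "equivalent" efficiency bookkeeping reads as follows: the filter diverts one spectral band to the silicon cell and the rest to the 3J cell, and the combined electrical output is referred to the total incident power. The band power fraction and the per-band efficiencies below are illustrative assumptions, not the values behind the ~46% design target.

```python
# Spectrum-splitting bookkeeping: the band-pass filter sends one spectral band
# to the silicon cell and the rest to the 3J cell; the "equivalent" cell
# efficiency is the combined output divided by the total incident power.
p_total = 1000.0                 # incident irradiance on the receiver [W/m^2]
f_ir_band = 0.15                 # assumed fraction of power in the 900-1200 nm band
eta_3j_on_its_band = 0.46        # assumed 3J efficiency over the light it keeps
eta_si_on_ir_band = 0.30         # assumed Si efficiency over the filtered band

p_3j = (1 - f_ir_band) * p_total * eta_3j_on_its_band
p_si = f_ir_band * p_total * eta_si_on_ir_band
eta_equivalent = (p_3j + p_si) / p_total
print(f"equivalent cell efficiency: {eta_equivalent:.1%}")
```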

Relevance: 30.00%

Publisher:

Abstract:

The Simultaneous Multiple Surfaces (SMS) method was developed as a design method in nonimaging optics during the 90s. Later, the method was extended to the design of imaging optics. We present an overview of the method applied to imaging optics in planar (2D) geometry and compare the results with more classical designs based on achieving aplanatism of different orders. These classical designs are also viewed as particular cases of SMS designs. Systems with up to 4 aspheric surfaces are shown. The SMS design strategy is shown to always perform better than the classical design in terms of image quality. Moreover, the SMS method is a direct method, i.e., it is not based on multi-parametric optimization techniques. This gives the SMS method an additional interest, since it can be used for exploring solutions where the multi-parameter techniques can get lost because of the multiple local minima.
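The comparison above relies on an image-quality merit figure; the sketch below shows a minimal meridional ray trace of a single spherical refracting surface that computes the RMS spot radius at the paraxial focus, i.e. the kind of metric used when ranking designs. The surface, refractive indices and aperture are arbitrary, and this only evaluates a design rather than implementing the SMS construction itself.

```python
import numpy as np

# Minimal 2D (meridional) ray trace of a single spherical refracting surface,
# computing the RMS spot radius at the paraxial focus as an image-quality figure.
n1, n2, R = 1.0, 1.5, 50.0                 # refractive indices and surface radius [mm]
paraxial_image = n2 * R / (n2 - n1)        # image distance from vertex (object at infinity)

heights = np.linspace(-10.0, 10.0, 41)     # entrance ray heights [mm]
heights = heights[heights != 0.0]
spot = []
for h in heights:
    d = np.array([1.0, 0.0])               # incoming ray parallel to the axis
    x_i = R - np.sqrt(R**2 - h**2)         # intersection with the sphere (vertex at x = 0)
    N = np.array([(x_i - R) / R, h / R])   # unit normal, pointing toward the incident side
    mu = n1 / n2
    cos_i = -N @ d
    cos_t = np.sqrt(1.0 - mu**2 * (1.0 - cos_i**2))
    t = mu * d + (mu * cos_i - cos_t) * N  # vector form of Snell's law
    y_image = h + t[1] / t[0] * (paraxial_image - x_i)
    spot.append(y_image)

spot = np.array(spot)
print("RMS spot radius at paraxial focus [mm]:", np.sqrt(np.mean(spot**2)).round(4))
```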

Relevance: 30.00%

Publisher:

Abstract:

Geodetic volcano monitoring in Tenerife has mainly focused on the Las Cañadas Caldera, where a geodetic micronetwork and a levelling profile are located. A sensitivity test of this geodetic network showed that it should be extended to cover the whole island for volcano monitoring purposes. Furthermore, InSAR allowed the detection of two unexpected movements that were beyond the scope of the traditional geodetic network. These two facts prompted us to design and observe a GPS network covering the whole of Tenerife, which was monitored in August 2000. The results obtained were accurate to one centimetre and confirm one of the deformations, although they were not definitive enough to confirm the second one. Furthermore, new cases of possible subsidence have been detected in areas where InSAR could not be used to measure deformation due to low coherence. A first modelling attempt has been made using a very simple model, and its results seem to indicate that the deformation observed and the groundwater level variation on the island may be related. Future observations will be necessary for further validation, to study the time evolution of the displacements, to carry out interpretation work using different types of data (gravity, gases, etc.) and to develop models that represent the island more closely. The results obtained are important because they might affect geodetic volcano monitoring on the island, which will only be really useful if it is capable of distinguishing between displacements that might be linked to volcanic activity and those produced by other causes. One important result of this work is that a new geodetic monitoring system based on two complementary techniques, InSAR and GPS, has been set up on Tenerife. This is the first time that the whole surface of any of the volcanic Canary Islands has been covered with a single network for this purpose. This research has shown the need for further similar studies in the Canary Islands, at least on the islands which pose a greater risk of volcanic reactivation, such as Lanzarote and La Palma, where InSAR techniques have already been used.
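For context on what a "very simple model" of the observed deformation might look like, the sketch below evaluates a Mogi point source in an elastic half-space; the abstract does not name the model used, so the Mogi source, its depth and the volume change (standing in for a groundwater-driven change) are assumptions for illustration only.

```python
import numpy as np

# Sketch of a simple deformation model: vertical surface displacement of a
# Mogi point source in an elastic half-space (Poisson ratio 0.25 assumed).
def mogi_uz(r_km, depth_km, dV_km3):
    """Vertical surface displacement (metres) at radial distance r from the source."""
    r = r_km * 1e3
    d = depth_km * 1e3
    dV = dV_km3 * 1e9
    return 3.0 * dV / (4.0 * np.pi) * d / (r**2 + d**2) ** 1.5

distances = np.array([0.0, 2.0, 5.0, 10.0, 20.0])      # km from the source
uz = mogi_uz(distances, depth_km=4.0, dV_km3=-0.002)   # contraction -> subsidence
for r, u in zip(distances, uz):
    print(f"r = {r:5.1f} km  ->  uz = {u * 1e3:7.2f} mm")
```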

Relevance: 30.00%

Publisher:

Abstract:

This work is a contribution to the research and development of the intermediate band solar cell (IBSC), a high-efficiency photovoltaic concept that features the advantages of both low- and high-bandgap solar cells. The resemblance to a low-bandgap solar cell comes from the fact that the IBSC hosts an electronic energy band, the intermediate band (IB), within the semiconductor bandgap. This IB allows the collection of sub-bandgap-energy photons by means of two-step photon absorption processes, from the valence band (VB) to the IB and from there to the conduction band (CB). The exploitation of these low-energy photons implies a more efficient use of the solar spectrum. The resemblance of the IBSC to a high-bandgap solar cell is related to the preservation of the voltage: the open-circuit voltage (VOC) of an IBSC is not limited by any of the sub-bandgaps (involving the IB), but only by the fundamental bandgap (defined from the VB to the CB). Nevertheless, the presence of the IB allows new paths for electronic recombination, and the performance of the IBSC is degraded under 1-sun operating conditions. A theoretical argument is presented regarding the need for concentrated illumination in order to circumvent the degradation of the voltage derived from the increase in recombination. This theory is supported by the experimental verification carried out with our novel characterization technique, consisting of the acquisition of photogenerated current (IL)-VOC pairs under low temperature and concentrated light. Besides, at this stage of the IBSC research, several new IB materials are being engineered, and our novel characterization tool can be very useful to provide feedback on their capability to perform as real IBSCs, verifying or disproving the fulfillment of the "voltage preservation" principle. An analytical model has also been developed to assess the potential of quantum-dot (QD) IBSCs. It is based on the calculation of the band alignment of III-V alloyed heterojunctions, the estimation of the confined energy levels in a QD and the calculation of the detailed-balance efficiency. Several potentially useful QD materials have been identified, such as InAs/AlxGa1-xAs, InAs/GaxIn1-xP, InAs1-yNy/AlAsxSb1-x or InAs1-zNz/Alx[GayIn1-y]1-xP. Finally, a model for the analysis of the series resistance of a concentrator solar cell has also been developed in order to design and fabricate IBSCs adapted to 1,000 suns.
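One ingredient of the analytical QD-IBSC model mentioned above is the estimate of confined energy levels in a quantum dot. The sketch below uses the crudest possible stand-in, a one-dimensional infinite square well with an InAs-like effective mass and an assumed dot size; the thesis' model, built on real band alignments, is more elaborate.

```python
import numpy as np

# Crude confined-level estimate: 1D infinite square well as a stand-in for a
# quantum dot.  Effective mass and well width are illustrative assumptions.
hbar = 1.054571817e-34        # J s
m0 = 9.1093837015e-31         # electron rest mass, kg
q = 1.602176634e-19           # J per eV

m_eff = 0.023 * m0            # assumed InAs-like electron effective mass
L = 10e-9                     # assumed dot size, 10 nm

def infinite_well_level(n):
    """Energy of the n-th confined level above the well bottom, in eV."""
    return (hbar**2 * np.pi**2 * n**2) / (2 * m_eff * L**2) / q

for n in (1, 2):
    print(f"E_{n} ~ {infinite_well_level(n):.3f} eV above the conduction-band edge")
```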

Relevance: 30.00%

Publisher:

Abstract:

Visualization of program executions has been found useful in applications which include education and debugging. However, traditional visualization techniques often fall short of expectations or are altogether inadequate for new programming paradigms, such as Constraint Logic Programming (CLP), whose declarative and operational semantics differ in some crucial ways from those of other paradigms. In particular, traditional ideas regarding flow control and the behavior of data often cannot be lifted in a straightforward way to (C)LP from other families of programming languages. In this paper we discuss techniques for visualizing program execution and data evolution in CLP. We briefly review some previously proposed visualization paradigms, and also propose a number of (to our knowledge) novel ones. The graphical representations have been chosen based on the perceived needs of a programmer trying to analyze the behavior and characteristics of an execution. In particular, we concentrate on the representation of the program execution behavior (control), the runtime values of the variables, and the runtime constraints. Given our interest in visualizing large executions, we also pay attention to abstraction techniques, i.e., techniques which are intended to help in reducing the complexity of the visual information.
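To give a flavour of the "data evolution" displays discussed above, the sketch below records how finite domains shrink during a tiny forward-checking search (4-queens standing in for a CLP program) and renders the trace as a text chart, one bar per variable per solver event. This is an illustrative mock-up, not the visualization tool described in the paper.

```python
# Track how the finite domains of constraint variables shrink during search
# and render the evolution as a simple text chart.
N = 4
trace = []                                   # (event, domain sizes snapshot)

def record(event, domains):
    trace.append((event, [len(d) for d in domains]))

def forward_check(domains, col, row):
    """Prune rows attacked by placing a queen at (col, row)."""
    new = [set(d) for d in domains]
    new[col] = {row}
    for c in range(col + 1, N):
        new[c] -= {row, row + (c - col), row - (c - col)}
        if not new[c]:
            return None                      # dead end: some domain wiped out
    return new

def solve(domains, col=0):
    record(f"enter column {col}", domains)
    if col == N:
        return domains
    for row in sorted(domains[col]):
        pruned = forward_check(domains, col, row)
        if pruned is not None:
            result = solve(pruned, col + 1)
            if result:
                return result
        record(f"backtrack at column {col}", domains)
    return None

solution = solve([set(range(N)) for _ in range(N)])
print("solution rows per column:", [min(d) for d in solution])
print("\ndomain-size evolution (one '#' per remaining value):")
for event, sizes in trace:
    bars = "  ".join(f"Q{c}:" + "#" * s + " " * (N - s) for c, s in enumerate(sizes))
    print(f"{event:<22} {bars}")
```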

Relevance: 30.00%

Publisher:

Abstract:

One important task in the design of an antenna is to carry out an analysis to find out the characteristics of the antenna that best fulfil the specifications fixed by the application. After that, a prototype is manufactured and the next stage in the design process is to check whether the radiation pattern differs from the designed one. Besides the radiation pattern, other radiation parameters such as directivity, gain, impedance, beamwidth, efficiency and polarization must also be evaluated. For this purpose, accurate antenna measurement techniques are needed in order to know exactly the actual electromagnetic behavior of the antenna under test. Because of this, most measurements are performed in anechoic chambers: closed, normally shielded areas covered by radiation-absorbing material that simulate free-space propagation conditions. Moreover, these facilities can be employed independently of the weather conditions and allow measurements free from interference. Despite all the advantages of anechoic chambers, the results obtained from both far-field and near-field measurements are inevitably affected by errors. Thus, the main objective of this Thesis is to propose algorithms to improve the quality of the results obtained in antenna measurements by using post-processing techniques, without requiring additional measurements. First, a thorough review of the state of the art has been made in order to give a general view of the possibilities for characterizing or reducing the effects of errors in antenna measurements. Then, new methods to reduce the unwanted effects of four of the most common errors in antenna measurements are described and validated both theoretically and numerically. The basis of all of them is the same: to perform a transformation from the measurement surface to another domain where there is enough information to easily remove the contribution of the errors. The four errors analyzed are noise, reflections, truncation errors and leakage, and the tools used to suppress them are mainly source reconstruction techniques, spatial and modal filtering, and iterative algorithms to extrapolate functions. Therefore, the main idea of all the methods is to modify the classical near-field-to-far-field transformations by including additional steps with which errors can be greatly suppressed. Moreover, the proposed methods are not computationally complex and, because they are applied in post-processing, additional measurements are not required. Noise is the most widely studied error in this Thesis, with a total of three alternatives proposed to filter out an important noise contribution before obtaining the far-field pattern. The first one is based on modal filtering. The second alternative uses a source reconstruction technique to obtain the extreme near field, where it is possible to apply a spatial filtering. The last one is to back-propagate the measured field to a surface with the same geometry as the measurement surface but closer to the AUT, and then also apply a spatial filtering. All the alternatives are analyzed in the three most common near-field systems, including comprehensive noise statistical analyses in order to deduce the signal-to-noise ratio improvement achieved in each case.
The method to suppress reflections in antenna measurements is also based on a source reconstruction technique, and the main idea is to reconstruct the field over a surface larger than the antenna aperture in order to be able to identify and later suppress the virtual sources related to the reflected waves. The truncation error present in the results obtained from planar, cylindrical and partial spherical near-field measurements is the third error analyzed in this Thesis. The method to reduce this error is based on an iterative algorithm to extrapolate the reliable region of the far-field pattern from the knowledge of the field distribution on the AUT plane. The proper termination point of this iterative algorithm, as well as other critical aspects of the method, are also studied. The last part of this work is dedicated to the detection and suppression of the two most common leakage sources in antenna measurements. A first method estimates the leakage bias constant added by the receiver's quadrature detector to every near-field datum and then suppresses its effect on the far-field pattern. The second method can be divided into two parts: the first finds the position of the faulty component that radiates or receives unwanted radiation, making its identification within the measurement environment and its later substitution easier; the second is able to computationally remove the leakage effect without requiring the substitution of the faulty component.
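As an example of the modal-filtering idea used against noise, the sketch below transforms synthetic, noisy planar near-field samples to their plane-wave spectrum, zeroes the modes outside the visible region, and transforms back, comparing the RMS error before and after. The synthetic field, sampling and noise level are illustrative choices, and the real processing chain (probe correction, near-field-to-far-field transformation) is omitted.

```python
import numpy as np

# Modal filtering of planar near-field data: modes outside the visible region
# (|kt| > k0), which for a well-sampled measurement carry mostly noise, are
# zeroed in the plane-wave spectrum.
rng = np.random.default_rng(3)
freq = 10e9                                  # 10 GHz
k0 = 2 * np.pi * freq / 3e8
dx = 0.4 * (3e8 / freq)                      # 0.4-wavelength sample spacing
n = 128
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)

# Clean field: a few propagating plane waves; then add measurement noise.
clean = sum(np.exp(-1j * (kx * X + ky * Y))
            for kx, ky in [(0.2 * k0, 0.0), (-0.3 * k0, 0.4 * k0)])
noisy = clean + 0.3 * (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

# Plane-wave spectrum and visible-region mask.
spectrum = np.fft.fft2(noisy)
kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)
KX, KY = np.meshgrid(kx, kx)
visible = (KX**2 + KY**2) <= k0**2
filtered = np.fft.ifft2(spectrum * visible)

def rms_error(field):
    return np.sqrt(np.mean(np.abs(field - clean) ** 2))

print("RMS error before filtering:", round(rms_error(noisy), 4))
print("RMS error after  filtering:", round(rms_error(filtered), 4))
```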