14 results for Many-electron Problem

at Universidad Politécnica de Madrid


Relevance:

80.00%

Publisher:

Abstract:

A new method to study large-scale neural networks is presented in this paper. It is based on the use of Feynman-like diagrams, which allow the analysis of collective and cooperative phenomena with a methodology similar to that employed in the Many-Body Problem. The proposed method is applied to a very simple structure composed of a string of neurons with interactions among them. It is shown that a new behavior appears at the end of the row, different from the initial dynamics of a single cell. When feedback is present, as in the case of the hippocampus, the situation becomes more complex, with a whole set of new frequencies different from the proper frequencies of the individual neurons. An application to an optical neural network is reported.

Relevance:

30.00%

Publisher:

Abstract:

Molecular beam epitaxy growth of ten-period lattice-matched InAlN/GaN distributed Bragg reflectors (DBRs) with peak reflectivity centered around 400 nm is reported, including optical and transmission electron microscopy (TEM) measurements [1]. Heterostructures with good periodicity and crack-free surfaces were confirmed, but a significant residual optical absorption below the bandgap was also measured. The TEM characterization ascribes the origin of this problem to polymorphism and planar defects in the GaN layers and to the existence of an In-rich layer at the InAlN/GaN interfaces. In this work, several TEM-based techniques have been combined.

Relevance:

30.00%

Publisher:

Abstract:

At present, many countries allow citizens or entities to interact with the government outside the telematic environment through a legal representative who is granted powers of representation. However, if the interaction takes place through the Internet, only primitive mechanisms of representation are available, mainly based on static offline processes that do not enable quick and easy identity delegation. This paper proposes a system of dynamic delegation of identity between two generic entities that can solve the problem of delegated access to the telematic services provided by public authorities. The solution is based on the generation of a delegation token created from a proxy certificate, which allows the delegating entity to delegate its identity to another entity on the basis of a subset of its attributes as delegator, while also establishing in the delegation token itself restrictions on the services accessible to the delegated entity and the validity period of the delegation. Further, the paper presents the mechanisms needed either to revoke a delegation token or to check whether a delegation token has been revoked. Implications for theory and practice and suggestions for future research are discussed.
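The token-based scheme described above can be sketched in a few lines. The following Python is an illustrative model only (names such as `DelegationToken` and `authorize` are invented here, not taken from the paper): a token carries a subset of the delegator's attributes, the services the delegate may access, and a validity window, and access is denied once the token is revoked.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DelegationToken:
    # Hypothetical token structure, loosely following the abstract:
    delegator: str
    delegate: str
    attributes: dict       # subset of the delegator's attributes
    allowed_services: list # services the delegate may access
    not_before: datetime   # start of the validity period
    not_after: datetime    # end of the validity period

class RevocationList:
    """Minimal stand-in for the revocation-checking mechanism."""
    def __init__(self):
        self._revoked = set()
    def revoke(self, token):
        self._revoked.add(id(token))
    def is_revoked(self, token):
        return id(token) in self._revoked

def authorize(token, service, when, crl):
    """Grant access only for a permitted service, inside the validity
    window, and while the token has not been revoked."""
    if crl.is_revoked(token):
        return False
    if not (token.not_before <= when <= token.not_after):
        return False
    return service in token.allowed_services
```

In a real deployment the token would be signed (e.g. derived from a proxy certificate) rather than trusted as a plain object; the sketch only shows the restriction and revocation checks.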

Relevance:

30.00%

Publisher:

Abstract:

This paper presents some brief considerations on the role of Computational Logic in the construction of Artificial Intelligence systems and in programming in general. It does not address how the many problems in AI can be solved but, rather more modestly, tries to point out some advantages of Computational Logic as a tool for the AI scientist in his quest. It addresses the interaction between declarative and procedural views of programs (deduction and action), the impact of the intrinsic limitations of logic, the relationship with other apparently competing computational paradigms, and finally discusses implementation-related issues, such as the efficiency of current implementations and their capability for efficiently exploiting existing and future sequential and parallel hardware. The purpose of the discussion is in no way to present Computational Logic as the unique overall vehicle for the development of intelligent systems (in the firm belief that such a panacea is yet to be found) but rather to stress its strengths in providing reasonable solutions to several aspects of the task.

Relevance:

30.00%

Publisher:

Abstract:

Many macroscopic properties (hardness, corrosion, catalytic activity, etc.) are directly related to the surface structure, that is, to the position and chemical identity of the outermost atoms of the material. Current experimental techniques for its determination produce a "signature" from which the structure must be inferred by solving an inverse problem: a solution is proposed, its corresponding signature computed, and then compared to the experiment. This is a challenging optimization problem where the search space and the number of local minima grow exponentially with the number of atoms, hence its solution cannot be achieved for arbitrarily large structures. Nowadays, it is solved by using a mixture of human knowledge and local search techniques: an expert proposes a solution that is refined using a local minimizer. If the outcome does not fit the experiment, a new solution must be proposed. Solving a small surface can take from days to weeks of this trial-and-error method. Here we describe our ongoing work on its solution. We use a hybrid algorithm that mixes evolutionary techniques with trust-region methods and reuses knowledge gained during the execution to avoid repeated searches of the same structures. Its parallelization produces good results even without requiring the gathering of the full population, hence it can be used in loosely coupled environments such as grids. With this algorithm, the solution of test cases that previously took weeks of expert time can be obtained automatically in a day or two of uniprocessor time.
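The hybrid strategy described above can be illustrated with a toy sketch. The code below is not the authors' implementation: the objective function is a stand-in for the misfit between simulated and measured signatures, a simple pattern search replaces the trust-region minimizer, and a cache of visited candidates models the reuse of knowledge to avoid repeated searches.

```python
import random

def objective(x):
    # Stand-in misfit; a real run would compare simulated and
    # measured surface "signatures". Minimum at x_i = 1 for all i.
    return sum((xi - 1.0) ** 2 for xi in x)

def refine(x, step=1.0, tol=1e-3):
    """Crude coordinate pattern search standing in for the
    trust-region local minimizer."""
    x = list(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (+step, -step):
                trial = list(x)
                trial[i] += d
                if objective(trial) < objective(x):
                    x, improved = trial, True
        if not improved:
            step *= 0.5  # shrink the search step, as in a trust region
    return x

def hybrid_search(dim=4, pop_size=20, generations=30, seed=0):
    rng = random.Random(seed)
    seen = set()  # avoid re-refining structures already visited
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=objective)
    for _ in range(generations):
        parent = min(rng.sample(pop, 3), key=objective)    # tournament
        child = [xi + rng.gauss(0, 0.5) for xi in parent]  # mutation
        key = tuple(round(xi, 1) for xi in child)
        if key in seen:
            continue
        seen.add(key)
        child = refine(child)
        pop.remove(max(pop, key=objective))  # replace the worst
        pop.append(child)
        if objective(child) < objective(best):
            best = child
    return best
```

The evolutionary loop explores globally while `refine` polishes each new candidate; the `seen` cache is the sketch's version of "reusing knowledge gained during the execution".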

Relevance:

30.00%

Publisher:

Abstract:

In university studies, it is not unusual for students to drop some of the subjects they have enrolled in for the academic year. They start by not attending lectures, sometimes out of neglect or carelessness, or because they find the subject too difficult, and thus they lose the continuity of the topics that the professor follows. If they try to attend again, they discover that they hardly understand anything, become discouraged, and so decide to give up attending lectures and study on their own. However, some fail to turn up to their final exams, and the failure rate of those who actually do take the exams is high. The problem is that this is not the case with only one specific subject; it is often the same with many subjects. The result is that students are not productive enough, wasting time and also prolonging their years of study, which entails a great cost for families. Degree courses structured to be completed in three academic years may in fact take an average of six or more. In this paper, we have studied this problem, which, apart from the waste of money and time, produces frustration in the student, who finds that he has not been able to achieve what he had set out to do at the beginning of the course. It is quite common to find students who do not even pass 50% of the subjects they had enrolled in for the academic year. If this happens repeatedly, it can be the point at which a student considers dropping out altogether. This is also a concern for universities, especially in the early years. In our experience as professors, we have found that students who attend lectures regularly and follow the explanations approach the final exams with confidence and rarely fail the subject. In this proposal we present some techniques and methods carried out to alleviate, as far as possible, the problem of lack of attendance at lectures. This involves rewarding students for their attendance and participation in lectures: rewarding attendance with a "prize" that counts toward the final mark of the subject, and involving them more in the development of lectures. We believe that we have to teach students to use the lectures as part of their learning in a non-passive way. We consider the professor's work fundamental in terms of conveying the usefulness of the topics explained and the applications that they will have for the students' professional life in the future. In this way the student sees for himself the use and importance of what he is learning. When his participation is required, he will feel more involved and more confident participating in the educational system. Finally, we present statistical results of studies carried out on different degrees and different subjects over two consecutive years. In the first year, we assessed only the final exams, without considering the students' attendance or participation. In the second year, we applied the techniques and methods proposed here. In addition, we have compared the two ways of assessing subjects.

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this thesis is to investigate the benefits that diffractive light trapping can offer to quantum dot intermediate band solar cells and crystalline silicon solar cells. Both solar cell technologies suffer from incomplete photon absorption in some part of the solar spectrum. Quantum dot intermediate band solar cells are theoretically capable of achieving much higher efficiencies than conventional single-gap devices, but present prototypes suffer from extremely weak absorption of sub-bandgap photons in the quantum dots. This problem has received little attention so far, yet it is a serious barrier to the technology approaching its theoretical efficiency limit. Crystalline silicon solar cells absorb weakly in the near infrared due to their indirect bandgap. This problem has received much attention over recent decades, and all commercial crystalline silicon solar cells employ some form of light trapping. With the industry moving toward thinner and thinner wafers, light trapping is becoming of greater importance, and diffractive structures may offer an improvement over the state of the art. We begin by constructing a computational method with which to simulate solar cells equipped with diffraction grating textures. The method combines a wave-optical treatment of the diffraction grating, via rigorous coupled wave analysis, with a geometric-optical treatment of the thick solar cell bulk, using a steady-state matrix formalism.
The method has been implemented computationally, and is found to be efficient and to give results in good agreement with alternative methods from other authors. The theoretical upper limit to absorption enhancement in solar cells using diffraction gratings is then calculated with the matrix formalism derived in the previous task. This limit is compared to the so-called Lambertian limit for light trapping with isotropic scatterers, and to the absolute upper limit to light trapping in bulk absorbers. It is found that bi-periodic gratings (square or hexagonal geometry) are capable of offering much better light trapping than uni-periodic line gratings. The upper limit depends strongly on the grating period. For large periods, diffraction gratings are theoretically able to offer light trapping at the absolute upper limit, but only if the scattering efficiencies have a particular form, which is deemed to be beyond present design capabilities. For periods similar to the incident wavelength, diffraction gratings can offer light trapping below the absolute limit but above the Lambertian limit, without placing unrealistic demands on the exact form of the scattering efficiencies; this is possible for a reasonably broad wavelength range. The computational method is then used to design and optimise diffraction gratings for light trapping in solar cells. The proposed diffraction grating consists of a hexagonal lattice of cylindrical wells etched into the rear of the bulk solar cell absorber, encapsulated in a dielectric buffer layer and capped with a rear reflector. Simulations are made of this grating profile applied to a crystalline silicon solar cell and to a quantum dot intermediate band solar cell. The grating period, well depth, and lateral well dimensions are optimised numerically for both solar cell types, yielding the optimum parameters to be used in the fabrication of grating-equipped solar cells.
The optimum parameters are explained using simple physical concepts, allowing us to make more general statements that can be applied to other solar cell technologies. Diffraction grating textures are fabricated on crystalline silicon substrates using nano-imprint lithography and reactive ion etching. The optimum grating period from the previous task has been used as a design parameter. The substrates have been processed into solar cell precursors for optical measurements. Reflection spectroscopy measurements confirm that bi-periodic square gratings offer better absorption enhancement than uni-periodic line gratings. The fabricated structures have been simulated with the previously developed computational tool, with good agreement between measurement and simulation results. The simulations reveal that a significant fraction of the incident photons are absorbed parasitically in the rear reflector, and that this is exacerbated by the non-planarity of the reflector. An alternative method of depositing the dielectric buffer layer was developed, which leaves a planar surface onto which the reflector is deposited; it was found that samples prepared in this way suffered less from parasitic reflector absorption. The next task described in the thesis is the study of photon absorption in semiconductor quantum dots. The bound-state energy levels of InAs/GaAs quantum dots are calculated using the effective mass approximation. One- and four-band methods are applied to the calculation of electron and hole wavefunctions respectively, with an empirical Hamiltonian being employed in the latter case. The strength of optical transitions between the bound states is calculated using Fermi's golden rule. The effect of the quantum dot dimensions on the energy levels and transition strengths is investigated.
It is found that a strong direct transition between the ground intermediate state and the conduction band can be promoted by decreasing the quantum dot width from its value in present prototypes. This has the added benefit of reducing the ladder of excited states between the ground state and the conduction band, which may help to reduce thermal escape of electrons from quantum dots: an undesirable phenomenon from the point of view of the open circuit voltage of an intermediate band solar cell. A realistic detailed balance model is developed for quantum dot solar cells, which uses as input the energy levels and transition strengths calculated in the previous task. The model calculates the transition currents between the many intermediate levels and the valence and conduction bands under a given set of conditions. It is distinct from previous idealised detailed balance models, which are used to calculate limiting efficiencies, since it makes realistic assumptions about photon absorption by each transition. The model is used to reproduce published experimental quantum efficiency results at different temperatures, with quite good agreement. The much-studied phenomenon of thermal escape from quantum dots is found to be photonic; it is due to thermal photons, which induce transitions between the ladder of excited states between the ground intermediate state and the conduction band. In the final chapter, the realistic detailed balance model is combined with the diffraction grating simulation method to predict the effect of incorporating a diffraction grating into a quantum dot intermediate band solar cell. Careful optimisation of the grating period is made to balance the enhancement given to the different intermediate transitions, which occur in series. Due to the extremely weak absorption in the quantum dots, it is found that light trapping alone is not sufficient to achieve high subbandgap currents in quantum dot solar cells. 
Instead, a combination of light trapping and increased quantum dot density is required. Within the radiative limit, a quantum dot solar cell with no light trapping requires a 1000-fold increase in the number of quantum dots to surpass the efficiency of a single-gap reference cell. A quantum dot solar cell equipped with a diffraction grating requires a 10- to 100-fold increase in the number of quantum dots, depending on the level of parasitic absorption in the rear reflector.
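For reference, the Lambertian limit against which the gratings are benchmarked above is the classical Yablonovitch result for an ideally randomizing (Lambertian) surface on a weakly absorbing slab; this is standard textbook material, not a result of the thesis:

```latex
% Lambertian (Yablonovitch) light-trapping benchmark: the mean optical
% path is enhanced by a factor 4n^2 over a single pass, so for
% absorption coefficient \alpha and thickness d with \alpha d \ll 1,
A_{\mathrm{Lambertian}} \approx 4 n^{2} \alpha d .
% For crystalline silicon near the band edge, n \approx 3.5, giving an
% enhancement factor 4 n^{2} \approx 49.
```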

Relevance:

30.00%

Publisher:

Abstract:

There is general agreement within the scientific community that Biology is the science with the greatest potential for development in the 21st century. This is due to several reasons, but probably the most important one is the state of development of the other experimental and technological sciences. In this context, there is a very rich variety of mathematical tools, physical techniques, and computational resources that make it possible to carry out biological experiments that were unthinkable only a few years ago. Biology is nowadays taking advantage of all these newly developed technologies, which are being applied to the life sciences, opening new research fields and helping to give new insights into many biological problems. Consequently, biologists have greatly improved their knowledge in many key areas, such as human function and human disease. However, there is one human organ that is still barely understood compared with the rest: the human brain. Understanding the human brain is one of the main challenges of the 21st century, and it is considered a strategic research field for the European Union and the USA. Thus, there is great interest in applying new experimental techniques to the study of brain function. Magnetoencephalography (MEG) is one of these novel techniques currently applied to mapping brain activity. It has important advantages over metabolism-based brain imaging techniques such as functional Magnetic Resonance Imaging (fMRI): MEG has a higher time resolution than fMRI, and it is a patient-friendly clinical technique, since the measurement is performed with a wireless setup and the patient is not exposed to any radiation. Although MEG is widely applied in clinical studies, there are still open issues regarding data analysis. The present work deals with the solution of the inverse problem in MEG, which is the most controversial and uncertain part of the analysis process. This question is addressed using several variations of a new solving algorithm based on a heuristic method. The performance of these methods is analyzed by applying them to several test cases with known solutions and comparing those solutions with the ones provided by our methods.
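The abstract does not specify the heuristic, so as a point of reference here is the standard minimum-norm (Tikhonov-regularized) solution of the linearized MEG inverse problem b = L s + noise, the usual baseline such methods are compared against. All dimensions and values below are illustrative, not taken from the paper.

```python
import numpy as np

# Synthetic forward model: b = L s + noise, with L the lead-field
# matrix mapping source amplitudes to sensor readings. Sizes are
# illustrative; real MEG systems have hundreds of sensors and
# thousands of source locations.
rng = np.random.default_rng(0)
n_sensors, n_sources = 30, 100
L = rng.standard_normal((n_sensors, n_sources))   # lead field
s_true = np.zeros(n_sources)
s_true[[10, 50]] = 1.0                            # two active sources
b = L @ s_true + 0.01 * rng.standard_normal(n_sensors)

# Minimum-norm estimate: s_hat = L^T (L L^T + lam I)^{-1} b.
# The problem is underdetermined (fewer sensors than sources), which
# is why MEG inversion is ill-posed and needs regularization.
lam = 1e-2
s_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), b)

relative_residual = np.linalg.norm(L @ s_hat - b) / np.linalg.norm(b)
```

The small residual only means the data are explained; because the system is underdetermined, many source configurations fit equally well, which is the non-uniqueness the abstract calls controversial.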

Relevance:

30.00%

Publisher:

Abstract:

The main objective of the present work is to study and exploit two-dimensional electron gas (2DEG) structures based on In-rich nitride compounds. Many open questions are analyzed. In particular, technology- and material-related topics are the focus of interest, regarding both InN material and InAl(Ga)N/GaN heterostructures (HSs) as well as their application to advanced devices. After the analysis of the dependence of InN properties on processing treatments (plasma-based and thermal), the problem of electrical blocking behaviour is taken into consideration. Its difficulty is due in particular to the presence of a surface electron accumulation (SEA) in the form of a 2DEG, caused by Fermi-level pinning. The use of electrochemical methods, compared to standard microelectronic techniques, helped in the successful realization of this task. In particular, reversible modulation of the SEA is accomplished. In heterostructures such as InAl(Ga)N/GaN, the 2DEG is present at the interface between GaN and InAl(Ga)N even without an external bias (normally-on structures). The technology related to the fabrication of normally-off (E-mode) high-electron-mobility transistors (HEMTs) is investigated in these heterostructures. An alkali-based wet-etching method is analysed, highlighting the structural modifications the barrier undergoes. Precise control of the etched material is crucial, in this sense, to obtain a recessed structure for HEMT application with the lowest defect density and inhomogeneity. The dependence of the etch rate on the as-grown properties is observed and commented on. Fundamental investigation of InN, related to the physics of this degenerate material, is also presented. With the help of electrolyte gating (EG), the shift in Raman peaks is correlated to a variation in surface electron density.
As far as application to devices is concerned, given the current state of the technology and material quality of InN, not yet suitable for working devices, the focus is directed to applications of InAl(Ga)N/GaN HSs. Due to the advantages of a very thin barrier layer compared to standard AlGaN/GaN technology, this structure is suitable for high-sensitivity applications, the 2DEG channel being closer to the surface. In fact, the pH sensitivity obtained is comparable to the state of the art in terms of surface potential variations and, due to the ultrathin barrier, the current variation with pH can be recorded with no need for an external reference electrode. Moreover, 2DEG photoconductive structures present a high photoconductive gain due mostly to the high electric field at the interface, and hence a high separation strength between photogenerated electron and hole. The use of Schottky metallizations (Schottky photodiode and metal-semiconductor-metal) reduces the dark current compared to photoconductors, and the thin barrier helps to increase the extraction efficiency. Gain is obtained in all the device structures investigated. The devices, even though they present persistent photoconductivity (PPC), proved faster than the standard PPC-related decay values reported in the literature.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

The problem of approximate parametrization of algebraic curves and surfaces is an active research field, with many implications for practical applications. The problem can be treated locally or globally. We formally state the problem, in its global version for the case of algebraic curves (planar or spatial), and we report on some algorithms approaching it, as well as on the associated error distance analysis.
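As a toy illustration of the error distance idea mentioned above (not the analysis of the paper itself): given an implicit curve f(x, y) = 0 and a candidate approximate parametrization P(t), sampling the residual |f(P(t))| over t gives a crude measure of how far the parametrization lies from the curve. The curve, the perturbation and all names below are illustrative assumptions.

```python
import numpy as np

# Implicit curve f(x, y) = 0 (unit circle) and a slightly perturbed
# parametrization P(t) that only approximately traces it.
f = lambda x, y: x**2 + y**2 - 1.0
P = lambda t: (np.cos(t) + 1e-3, np.sin(t))

# Sample the algebraic residual |f(P(t))| along the parametrization; a small
# maximum residual indicates a good approximate parametrization (a crude
# stand-in for a proper point-to-curve distance analysis).
t = np.linspace(0.0, 2.0 * np.pi, 200)
residual = np.abs(f(*P(t)))
max_residual = residual.max()
```

For the 1e-3 perturbation above, the residual stays on the order of 2e-3, consistent with the size of the perturbation.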

Relevância:

30.00% 30.00%

Publicador:

Resumo:

In tethered satellite technology, it is important to estimate how many electrons a spacecraft can collect from its ambient plasma by a bare electrodynamic tether. The analysis is, however, very difficult because of the small but significant geomagnetic field and the spacecraft's motion relative to both ions and electrons. The object of our work is the development of a numerical method for this purpose: a particle-in-cell (PIC) method for the calculation of the electron current to a positive bare tether moving at orbital velocity in the ionosphere, i.e. in a flowing magnetized plasma under Maxwellian collisionless conditions. In a PIC code, a number of particles are distributed in phase space, and the computational domain has a grid on which the Poisson equation is solved for the field quantities. The code uses the quasi-neutrality condition to solve for the local potential at points in the plasma which coincide with the outer computational boundary. The quasi-neutrality condition imposes ne = ni on the boundary. The Poisson equation is solved in such a way that the presheath region can be captured in the computation. Results show that the collected current is higher than predicted by orbital-motion-limited (OML) theory; the OML current is the upper limit of current collection under steady collisionless unmagnetized conditions. In this work, we focus on the flowing effects of the plasma as a possible cause of the current enhancement. An electron density deficit due to the flowing effects has been identified and removed by introducing adiabatic electron trapping into our model.
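To illustrate the basic PIC loop described above, here is a minimal one-dimensional, unmagnetized electrostatic sketch, not the method of the paper (which is magnetized, flowing, and multi-dimensional): electrons are deposited on a grid against a uniform ion background, the Poisson equation is solved with a fixed zero-potential outer boundary standing in loosely for the quasi-neutrality condition, and the particles are pushed in the resulting field. All names, normalizations and parameter values are illustrative assumptions.

```python
import numpy as np

# Minimal 1D electrostatic PIC sketch (normalized units: eps0 = e = m = 1).
nx, n_part, dx, dt = 64, 5000, 1.0, 0.1
rng = np.random.default_rng(0)
x = rng.uniform(0, nx * dx, n_part)      # electron positions
v = rng.normal(0.0, 1.0, n_part)         # Maxwellian velocities
ni = n_part / nx                         # uniform ion background per cell

for step in range(10):
    # 1) charge deposition (nearest-grid-point weighting)
    ne, _ = np.histogram(x, bins=nx, range=(0, nx * dx))
    rho = ni - ne                        # net charge density per cell
    # 2) Poisson solve phi'' = -rho with phi = 0 at the outer boundary
    #    (a crude stand-in for the quasi-neutral boundary condition)
    phi = np.zeros(nx)
    for _ in range(200):                 # Jacobi iterations
        phi[1:-1] = 0.5 * (phi[2:] + phi[:-2] + rho[1:-1] * dx**2)
    E = -np.gradient(phi, dx)
    # 3) particle push; periodic wrap keeps the sketch self-contained
    idx = np.clip((x / dx).astype(int), 0, nx - 1)
    v += -E[idx] * dt                    # electron charge = -1
    x = (x + v * dt) % (nx * dx)
```

A production PIC code would replace the Jacobi solver with a proper Poisson solver, use higher-order weighting, and add the magnetic rotation and plasma flow that are central to the actual problem.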

Relevância:

30.00% 30.00%

Publicador:

Resumo:

La astronomía de rayos γ estudia las partículas más energéticas que llegan a la Tierra desde el espacio. Estos rayos γ no se generan mediante procesos térmicos en simples estrellas, sino mediante mecanismos de aceleración de partículas en objetos celestes como núcleos de galaxias activos, púlsares, supernovas o posibles procesos de aniquilación de materia oscura. Los rayos γ procedentes de estos objetos y sus características proporcionan una valiosa información con la que los científicos tratan de comprender los procesos físicos que ocurren en ellos y desarrollar modelos teóricos que describan su funcionamiento con fidelidad. El problema de observar rayos γ es que son absorbidos por las capas altas de la atmósfera y no llegan a la superficie (de lo contrario, la Tierra sería inhabitable). De este modo, sólo hay dos formas de observar rayos γ: embarcar detectores en satélites u observar los efectos secundarios que los rayos γ producen en la atmósfera. Cuando un rayo γ llega a la atmósfera, interacciona con las partículas del aire y genera un par electrón-positrón con mucha energía. Estas partículas secundarias generan a su vez más partículas secundarias, cada vez menos energéticas. Estas partículas, mientras aún tienen energía suficiente para viajar más rápido que la velocidad de la luz en el aire, producen una radiación luminosa azulada conocida como radiación Cherenkov durante unos pocos nanosegundos. Desde la superficie de la Tierra, algunos telescopios especiales, conocidos como telescopios Cherenkov o IACTs (Imaging Atmospheric Cherenkov Telescopes), son capaces de detectar la radiación Cherenkov e incluso de tomar imágenes de la forma de la cascada Cherenkov. A partir de estas imágenes es posible conocer las principales características del rayo γ original, y con suficientes rayos se pueden deducir características importantes del objeto que los emitió, a cientos de años luz de distancia.
Sin embargo, detectar cascadas Cherenkov procedentes de rayos γ no es nada fácil. Las cascadas generadas por fotones γ de bajas energías emiten pocos fotones, y durante pocos nanosegundos, y las correspondientes a rayos γ de alta energía, si bien producen más fotones y duran más, son más improbables cuanto mayor es su energía. Esto produce dos líneas de desarrollo de telescopios Cherenkov: para observar cascadas de bajas energías son necesarios grandes reflectores que recuperen muchos fotones de los pocos que tienen estas cascadas; por el contrario, las cascadas de altas energías se pueden detectar con telescopios pequeños, pero conviene cubrir con ellos una superficie grande en el suelo para aumentar el número de eventos detectados. Con el objetivo de mejorar la sensibilidad de los telescopios Cherenkov actuales en los rangos de energía alto (> 10 TeV), medio (100 GeV - 10 TeV) y bajo (10 GeV - 100 GeV), nació el proyecto CTA (Cherenkov Telescope Array). Este proyecto, en el que participan más de 27 países, pretende construir un observatorio en cada hemisferio, cada uno de los cuales contará con 4 telescopios grandes (LSTs), unos 30 medianos (MSTs) y hasta 70 pequeños (SSTs). Con un array así se conseguirán dos objetivos. En primer lugar, al aumentar drásticamente el área de colección respecto a los IACTs actuales, se detectarán más rayos γ en todos los rangos de energía. En segundo lugar, cuando una misma cascada Cherenkov es observada por varios telescopios a la vez, es posible analizarla con mucha más precisión gracias a las técnicas estereoscópicas. La presente tesis recoge varios desarrollos técnicos realizados como aportación al sistema de trigger de los telescopios medianos y grandes de CTA.
Al ser las cascadas Cherenkov tan breves, los sistemas que digitalizan y leen los datos de cada píxel tienen que funcionar a frecuencias muy altas (≈ 1 GHz), lo que hace inviable que funcionen de forma continua, ya que la cantidad de datos guardada sería inmanejable. En su lugar, las señales analógicas se muestrean, guardando las muestras analógicas en un buffer circular de unos pocos µs. Mientras las señales se mantienen en el buffer, el sistema de trigger hace un análisis rápido de las señales recibidas y decide si la imagen que hay en el buffer corresponde a una cascada Cherenkov y merece ser guardada, o si por el contrario puede ignorarse, permitiendo que el buffer se sobreescriba. La decisión de si la imagen merece ser guardada o no se basa en que las cascadas Cherenkov producen detecciones de fotones en píxeles cercanos y en tiempos muy próximos, a diferencia de los fotones de NSB (night sky background), que llegan aleatoriamente. Para detectar cascadas grandes es suficiente con comprobar que más de un cierto número de píxeles en una región hayan detectado más de un cierto número de fotones en una ventana de tiempo de algunos nanosegundos. Sin embargo, para detectar cascadas pequeñas es más conveniente tener en cuenta cuántos fotones han sido detectados en cada píxel (técnica conocida como sum-trigger). El sistema de trigger desarrollado en esta tesis pretende optimizar la sensibilidad a bajas energías, por lo que suma analógicamente las señales recibidas en cada píxel de una región de trigger y compara el resultado con un umbral directamente expresable en fotones detectados (fotoelectrones). El sistema diseñado permite utilizar regiones de trigger de tamaño seleccionable entre 14, 21 o 28 píxeles (2, 3 o 4 clusters de 7 píxeles cada uno), y con un alto grado de solapamiento entre ellas. De este modo, cualquier exceso de luz en una región compacta de 14, 21 o 28 píxeles es detectado y genera un pulso de trigger.
En la versión más básica del sistema de trigger, este pulso se distribuye por toda la cámara a través de un delicado sistema de distribución, de forma que todos los clusters sean leídos al mismo tiempo, independientemente de su posición en la cámara. De este modo, el sistema de trigger guarda una imagen completa de la cámara cada vez que se supera, en una región de trigger, el número de fotones establecido como umbral. Sin embargo, esta forma de operar tiene dos inconvenientes principales. En primer lugar, la cascada casi siempre ocupa sólo una pequeña zona de la cámara, por lo que se guardan muchos píxeles sin información alguna. Cuando se tienen muchos telescopios, como será el caso de CTA, la cantidad de información inútil almacenada por este motivo puede ser muy considerable. Por otro lado, cada trigger supone guardar unos pocos nanosegundos alrededor del instante de disparo. Sin embargo, en el caso de cascadas grandes la duración de las mismas puede ser bastante mayor, perdiéndose parte de la información debido al truncamiento temporal. Para resolver ambos problemas se ha propuesto un esquema de trigger y lectura basado en dos umbrales. El umbral alto decide si hay un evento en la cámara y, en caso positivo, sólo las regiones de trigger que superan el nivel bajo son leídas, durante un tiempo más largo. De este modo se evita guardar información de píxeles vacíos y las imágenes fijas de las cascadas se pueden convertir en pequeños «vídeos» que representan el desarrollo temporal de la cascada. Este nuevo esquema recibe el nombre de COLIBRI (Concept for an Optimized Local Image Building and Readout Infrastructure), y se ha descrito detalladamente en el capítulo 5. Un problema importante que afecta a los esquemas de sum-trigger como el que se presenta en esta tesis es que, para sumar adecuadamente las señales provenientes de cada píxel, estas deben tardar lo mismo en llegar al sumador.
Los fotomultiplicadores utilizados en cada píxel introducen diferentes retardos que deben compensarse para realizar las sumas adecuadamente. El efecto de estos retardos ha sido estudiado y se ha desarrollado un sistema para compensarlos. Por último, el siguiente nivel de los sistemas de trigger para distinguir efectivamente las cascadas Cherenkov del NSB consiste en buscar triggers simultáneos (o en tiempos muy próximos) en telescopios vecinos. Con esta función, junto con otras de interfaz entre sistemas, se ha desarrollado un sistema denominado Trigger Interface Board (TIB). Este sistema consta de un módulo que irá montado en la cámara de cada LST o MST, y que estará conectado mediante fibras ópticas a los telescopios vecinos. Cuando un telescopio tiene un trigger local, este se envía a todos los vecinos conectados y viceversa, de modo que cada telescopio sabe si sus vecinos han dado trigger. Una vez compensadas las diferencias de retardo debidas a la propagación en las fibras ópticas y a la de los propios fotones Cherenkov en el aire en función de la dirección de apuntamiento, se buscan coincidencias y, en el caso de que la condición de trigger se cumpla, se lee la cámara en cuestión de forma sincronizada con el trigger local. Aunque todo el sistema de trigger es fruto de la colaboración entre varios grupos, fundamentalmente IFAE, CIEMAT, ICC-UB y UCM en España, con la ayuda de grupos franceses y japoneses, el núcleo de esta tesis son el Level 1 y la Trigger Interface Board, que son los dos sistemas en los que el autor ha sido el ingeniero principal. Por este motivo, en la presente tesis se ha incluido abundante información técnica relativa a estos sistemas.
Existen actualmente importantes líneas de desarrollo futuras relativas tanto al trigger de la cámara (implementación en ASICs) como al trigger entre telescopios (trigger topológico), que darán lugar a interesantes mejoras sobre los diseños actuales durante los próximos años y que, con suerte, serán de provecho para toda la comunidad científica participante en CTA. ABSTRACT γ-ray astronomy studies the most energetic particles arriving at the Earth from outer space. These γ rays are not generated by thermal processes in mere stars, but by means of particle acceleration mechanisms in astronomical objects such as active galactic nuclei, pulsars, supernovas, or as a result of dark matter annihilation processes. The γ rays coming from these objects and their characteristics provide valuable information with which scientists try to understand the underlying physical fundamentals of these objects, as well as to develop theoretical models able to describe them accurately. The problem when observing γ rays is that they are absorbed in the highest layers of the atmosphere, so they do not reach the Earth's surface (otherwise the planet would be uninhabitable). Therefore, there are only two possible ways to observe γ rays: by using detectors on board satellites, or by observing their secondary effects in the atmosphere. When a γ ray reaches the atmosphere, it interacts with the particles in the air, generating a highly energetic electron-positron pair. These secondary particles generate in turn more particles, with less energy each time. While these particles are still energetic enough to travel faster than the speed of light in the air, they produce a bluish radiation known as Cherenkov light during a few nanoseconds. From the Earth's surface, some special telescopes known as Cherenkov telescopes or IACTs (Imaging Atmospheric Cherenkov Telescopes) are able to detect the Cherenkov light and even to take images of the Cherenkov showers.
From these images it is possible to know the main parameters of the original γ ray, and with enough γ rays it is possible to deduce important characteristics of the emitting object, hundreds of light-years away. However, detecting Cherenkov showers generated by γ rays is not a simple task. The showers generated by low-energy γ rays contain few photons and last a few nanoseconds, while the ones corresponding to high-energy γ rays, although having more photons and lasting longer, become more unlikely the higher their energy. This results in two clearly differentiated development lines for IACTs: in order to detect low-energy showers, big reflectors are required to collect as many photons as possible from the few that these showers have. On the contrary, small telescopes are able to detect high-energy showers, but a large area on the ground should be covered to increase the number of detected events. With the aim of improving the sensitivity of current Cherenkov telescopes in the high (> 10 TeV), medium (100 GeV - 10 TeV) and low (10 GeV - 100 GeV) energy ranges, the CTA (Cherenkov Telescope Array) project was created. This project, with more than 27 participating countries, intends to build an observatory in each hemisphere, each one equipped with 4 large size telescopes (LSTs), around 30 middle size telescopes (MSTs) and up to 70 small size telescopes (SSTs). With such an array, two targets would be achieved. First, the drastic increment in the collection area with respect to current IACTs will lead to the detection of more γ rays in all energy ranges. Secondly, when a Cherenkov shower is observed by several telescopes at the same time, it is possible to analyze it much more accurately thanks to stereoscopic techniques. The present thesis gathers several technical developments for the trigger system of the medium and large size telescopes of CTA. As the Cherenkov showers are so short, the digitization and readout systems corresponding to each pixel must work at very high frequencies (≈ 1 GHz).
This makes it unfeasible to read data continuously, because the amount of data would be unmanageable. Instead, the analog signals are sampled, storing the analog samples in a temporal ring buffer able to store up to a few µs. While the signals remain in the buffer, the trigger system performs a fast analysis of the signals and decides if the image in the buffer corresponds to a Cherenkov shower and deserves to be stored, or on the contrary it can be ignored, allowing the buffer to be overwritten. The decision of whether to save the image or not is based on the fact that Cherenkov showers produce photon detections in nearby pixels at very close times, in contrast to the random arrival of the NSB photons. Checking whether more than a certain number of pixels in a trigger region have detected more than a certain number of photons during a certain time window is enough to detect large showers. However, taking also into account how many photons have been detected in each pixel (the sum-trigger technique) is more convenient to optimize the sensitivity to low-energy showers. The trigger system presented in this thesis intends to optimize the sensitivity to low-energy showers, so it performs the analog addition of the signals received in each pixel in the trigger region and compares the sum with a threshold which can be directly expressed as a number of detected photons (photoelectrons). The trigger system allows the selection of trigger regions of 14, 21, or 28 pixels (2, 3 or 4 clusters with 7 pixels each), with extensive overlapping. In this way, any light increment inside a compact region of 14, 21 or 28 pixels is detected, and a trigger pulse is generated. In the most basic version of the trigger system, this pulse is simply distributed throughout the camera in such a way that all the clusters are read at the same time, independently of their position in the camera, by means of a complex distribution system.
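The sum-trigger decision described above can be sketched digitally as follows. This is only an illustration: the real Level 1 trigger performs the addition in analog electronics over a 2D camera geometry, whereas here a 1D row of 7-pixel cluster sums, the function name, and the threshold value are all simplifying assumptions.

```python
import numpy as np

# Digital sketch of a sum-trigger: a trigger fires when the summed signal of
# any overlapping region of 2, 3 or 4 adjacent clusters (14, 21 or 28 pixels)
# exceeds a threshold expressed in photoelectrons.
def sum_trigger(cluster_signals, region_size=3, threshold=20.0):
    """cluster_signals: per-cluster sums (7 pixels each), ordered so that
    adjacent entries are neighbouring clusters (1D simplification)."""
    s = np.asarray(cluster_signals, dtype=float)
    # sliding sums over `region_size` adjacent clusters = overlapping regions
    region_sums = np.convolve(s, np.ones(region_size), mode="valid")
    fired = region_sums > threshold
    return fired.any(), region_sums
```

For example, `sum_trigger([2, 3, 15, 9, 1], region_size=3, threshold=20)` produces the region sums [20, 27, 25] and fires, because two of the overlapping regions exceed the 20-photoelectron threshold even though no single cluster does.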
Thus, the readout saves a complete camera image whenever the number of photoelectrons set as threshold is exceeded in a trigger region. However, this way of operating has two important drawbacks. First, the shower usually covers only a small part of the camera, so many pixels without relevant information are stored. When there are many telescopes, as will be the case of CTA, the amount of useless stored information can be very high. On the other hand, with every trigger only some nanoseconds of information around the trigger time are stored. In the case of large showers, the duration of the shower can be considerably longer, losing information due to the temporal cut. With the aim of solving both limitations, a trigger and readout scheme based on two thresholds has been proposed. The high threshold decides if there is a relevant event in the camera, and in the positive case, only the trigger regions exceeding the low threshold are read, during a longer time. In this way, the information from empty pixels is not stored, and the fixed images of the showers become little "videos" containing the temporal development of the shower. This new scheme is named COLIBRI (Concept for an Optimized Local Image Building and Readout Infrastructure), and it has been described in depth in chapter 5. An important problem affecting sum-trigger schemes like the one presented in this thesis is that, in order to add the signals from each pixel properly, they must arrive at the same time. The photomultipliers used in each pixel introduce different delays which must be compensated to perform the additions properly. The effect of these delays has been analyzed, and a delay compensation system has been developed. The next trigger level consists of looking for simultaneous (or very near in time) triggers in neighbouring telescopes. This function, together with others related to interfacing different systems, has been implemented in a system named the Trigger Interface Board (TIB).
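The two-threshold COLIBRI selection described above can be sketched as a simple filter over region sums. The function name and the threshold values are illustrative assumptions; the real system operates on analog signals and also extends the readout window of the selected regions.

```python
import numpy as np

# Sketch of the two-threshold COLIBRI readout idea: the high threshold decides
# whether there is a camera-level event at all; if so, only the trigger
# regions above the low threshold are read out, so empty pixels are skipped.
def colibri_readout(region_sums, high=50.0, low=10.0):
    """region_sums: summed signal per trigger region, in photoelectrons."""
    region_sums = np.asarray(region_sums, dtype=float)
    if region_sums.max() < high:
        return None                         # no event: let the buffer overwrite
    # event detected: indices of the regions worth storing
    return np.flatnonzero(region_sums >= low)
```

With region sums [3, 12, 80, 25, 4], the high threshold is exceeded by region 2, so regions 1, 2 and 3 are read out while the two nearly empty regions are dropped.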
This system comprises one module which will be placed inside the LST and MST cameras, and which will be connected to the neighbouring telescopes through optical fibers. When a telescope produces a local trigger, it is sent to all the connected neighbours and vice versa, so every telescope knows if its neighbours have been triggered. Once the delay differences due to propagation in the optical fibers, and of the Cherenkov photons themselves in the air depending on the pointing direction, have been compensated, the TIB looks for coincidences, and if the trigger condition is fulfilled, the camera is read a fixed time after the local trigger arrived. Although the whole trigger system is the result of the cooperation of several groups, especially IFAE, CIEMAT, ICC-UB and UCM in Spain, with some help from French and Japanese groups, the Level 1 trigger and the Trigger Interface Board constitute the core of this thesis, as they are the two systems designed by the author. For this reason, a large amount of technical information about these systems has been included. There are important future development lines regarding both the camera trigger (implementation in ASICs) and the stereo trigger (topological trigger), which will produce interesting improvements on the current designs during the following years, being useful for the whole scientific community participating in CTA.
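The neighbour-coincidence search performed by the TIB, as described above, can be sketched as follows. The function name, the coincidence window and the delay values are illustrative assumptions; the real TIB works on hardware timestamps and known fibre plus pointing-dependent propagation delays.

```python
# Sketch of a stereo coincidence check: neighbour trigger times are first
# corrected by their known propagation delays (optical fibre plus Cherenkov
# light path, which depends on the pointing direction), then compared with
# the local trigger time within a coincidence window.
def stereo_coincidence(local_t, neighbour_ts, delays, window_ns=100.0):
    """local_t: local trigger time (ns); neighbour_ts/delays: per-neighbour
    trigger arrival times and known propagation delays (ns)."""
    for t, d in zip(neighbour_ts, delays):
        if abs((t - d) - local_t) <= window_ns:
            return True                 # coincidence found: read out the camera
    return False
```

For instance, a neighbour trigger arriving at 1250 ns through a link with a 200 ns known delay coincides (within 100 ns) with a local trigger at 1000 ns, so the camera would be read out.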

Relevância:

30.00% 30.00%

Publicador:

Resumo:

The inbound logistics of feeding the workstations inside the factory is a critical issue in the car manufacturing industry. Nowadays, this issue is even more critical than in the past, since more types of car are being produced on the assembly lines. Consequently, as workstations have to install many types of components, they also need to keep an inventory of different types of components in a compact space. Replenishment is a critical issue, since a lack of inventory could cause line stoppages or rework. On the other hand, an excess of inventory could increase the holding cost or even block the replenishment paths. The decision on the replenishment routes cannot be made without taking into consideration the inventory needed by each station during the production time, which depends on the production sequence. The contribution of this paper is a MILP model for the replenishment and inventory of the components in a car assembly line; the problem is solved for medium-sized instances using online solvers.
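As a toy illustration of the two failure modes described above (the paper's actual MILP variables and constraints are not reproduced here), the core inventory-balance logic for a single station and component can be written as a feasibility check of a candidate replenishment plan. All names and numbers are hypothetical.

```python
# Sketch of the inventory-balance logic behind a replenishment MILP: a plan
# is feasible for a workstation if inventory never goes negative (line
# stoppage / rework) nor above capacity (blocked space at the station),
# given the per-period demand induced by the production sequence.
def plan_feasible(deliveries, demand, capacity, initial=0):
    """deliveries/demand: units delivered and consumed per period."""
    inv = initial
    for d_in, d_out in zip(deliveries, demand):
        inv += d_in            # replenishment arrives before consumption
        if inv > capacity:     # excess inventory: no space at the station
            return False
        inv -= d_out           # components consumed by the sequenced cars
        if inv < 0:            # stock-out: line stoppage
            return False
    return True
```

In the MILP, these same bounds become linear constraints on integer delivery variables, and the objective trades off holding cost against routing cost.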

Relevância:

30.00% 30.00%

Publicador:

Resumo:

The appearance of large geolocated communication datasets has recently increased our understanding of how social networks relate to their physical space. However, many recurrently reported properties, such as the spatial clustering of network communities, have not yet been systematically tested at different scales. In this work we analyze the social network structure of over 25 million phone users from three countries at three different scales: country, province and city. We consistently find that the last, urban scenario presents significant differences from common knowledge about social networks. First, the emergence of a giant component in the network seems to be controlled by whether or not the network spans the entire urban border, almost independently of the population or geographic extension of the city. Second, urban communities are much less geographically clustered than expected. These two findings shed new light on the widely studied searchability of self-organized networks. By exhaustive simulation of decentralized search strategies we conclude that urban networks are searchable not through geographical proximity, as their country-wide counterparts are, but through a homophily-driven community structure.
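A minimal sketch of the kind of decentralized (greedy) search simulated in such studies: each node forwards the message to a neighbour most similar to the target, here by community label (homophily) rather than geographic distance. The graph, the labels, the function name and the crude tie-breaking rule are all illustrative assumptions, not the study's actual protocol.

```python
# Greedy homophily-driven search: at each hop, prefer a neighbour that
# belongs to the target's community; fall back to an arbitrary neighbour.
# Returns the hop count on success, or None if the target is not reached
# within max_hops (greedy search can fail or loop on general graphs).
def greedy_search(adj, community, source, target, max_hops=20):
    """adj: node -> list of neighbours; community: node -> community id."""
    current, hops = source, 0
    while current != target and hops < max_hops:
        nbrs = adj[current]
        same = [n for n in nbrs if community[n] == community[target]]
        current = same[0] if same else nbrs[0]
        hops += 1
    return hops if current == target else None
```

On a toy four-node graph where nodes 1 and 3 share a community, a search from node 0 to node 3 is routed through node 1 in two hops, even though node 0 has no geographic information about node 3.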