954 results for Non-ideal system


Relevance:

80.00%

Publisher:

Abstract:

The present work develops a model to simulate the dynamics of a quadcopter controlled by a PD fuzzy controller. First, a brief history of quadcopters and an introduction to fuzzy logic and fuzzy control systems are presented. An overview of the quadcopter dynamics follows, together with the mathematical model derived using the Newton-Euler method. The model is then implemented in Simulink together with a PD fuzzy controller. A prototype is proposed, describing each component necessary to build a quadcopter. Finally, the simulation results are discussed and compared, focusing on the discrepancy between the model using ideal sensors and the model using non-ideal sensors.
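
As a rough illustration of the comparison described above, the following Python sketch simulates only the altitude channel of a quadcopter under a plain PD law (not the thesis's fuzzy controller or full Newton-Euler model) and contrasts an ideal altitude sensor with a noisy, biased one; the mass, gains, noise level, and time step are assumed values.

```python
# Minimal sketch (not from the thesis): altitude-only quadcopter model with a PD
# controller, comparing an ideal sensor against a noisy, biased (non-ideal) one.
# Mass, gains, noise level and time step are illustrative assumptions.
import numpy as np

m, g, dt = 1.0, 9.81, 0.01           # mass [kg], gravity [m/s^2], step [s]
kp, kd = 8.0, 4.0                    # PD gains (assumed)
z_ref = 1.0                          # target altitude [m]

def simulate(noise_std=0.0, bias=0.0, steps=2000):
    z, vz = 0.0, 0.0
    history = []
    for _ in range(steps):
        z_meas = z + bias + np.random.normal(0.0, noise_std)  # sensor model
        thrust = m * (g + kp * (z_ref - z_meas) - kd * vz)    # PD control law
        az = thrust / m - g                                   # vertical dynamics
        vz += az * dt
        z += vz * dt
        history.append(z)
    return np.array(history)

ideal = simulate()                            # ideal sensor
real = simulate(noise_std=0.02, bias=0.01)    # non-ideal sensor
print("steady-state gap:", abs(ideal[-1] - real[-1]))
```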

Relevance:

80.00%

Publisher:

Abstract:

Graduate Program in Physics - FEG

Relevance:

80.00%

Publisher:

Abstract:

Information on solvation in mixtures of water, W, and the ionic liquids, ILs, 1-allyl-3-R-imidazolium chlorides (R = methyl, 1-butyl, and 1-hexyl) has been obtained from the responses of the following solvatochromic probes: 2,6-dibromo-4-[(E)-2-(1-R-pyridinium-4-yl)ethenyl]phenolate, R = methyl, MePMBr2; 1-octyl, OcPMBr2; and the corresponding quinolinium derivative, MeQMBr2. A model developed for solvation in binary mixtures of W and molecular solvents has been extended to the present mixtures. Our objective is to assess the relevance to solvation of hydrogen bonding and of the hydrophobic character of the IL and the solvatochromic probe. Plots of the empirical medium polarity, E_T(probe), versus mixture composition revealed non-ideal behavior, attributed to preferential solvation by the IL and, more efficiently, by the IL-W hydrogen-bonded complex. The deviation from linearity increases with the number of carbon atoms in the alkyl group of the IL, and is larger than that observed for solvation by W plus molecular solvents (1-propanol and 2-(1-butoxy)ethanol) that are more hydrophobic than the ILs investigated. This enhanced deviation is attributed to the more organized structure of the ILs themselves, which persists in their aqueous solutions. MeQMBr2 is more susceptible to solvent lipophilicity than OcPMBr2, although the former probe is less lipophilic. This enhanced susceptibility agrees with the important effect of annelation on the contributions of the quinonoid and zwitterionic limiting structures to the ground and excited states of the probe, and hence on its response to both medium composition and IL lipophilicity.
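
To make the notion of non-ideal solvation behavior concrete, the sketch below computes the deviation of a measured polarity E_T(probe) from the ideal, mole-fraction-weighted mixing line for a W/IL mixture; the composition grid and E_T values are invented for illustration and are not data from the paper.

```python
# Illustrative sketch (not from the paper): quantify non-ideal solvation as the
# deviation of the measured polarity E_T(probe) from the ideal, mole-fraction-
# weighted value in a water (W) / ionic liquid (IL) mixture. Data are made up.
import numpy as np

x_il = np.array([0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0])            # IL mole fraction
e_obs = np.array([63.1, 58.0, 54.2, 52.1, 51.0, 50.3, 50.1])    # hypothetical E_T values
e_w, e_il = e_obs[0], e_obs[-1]

e_ideal = x_il * e_il + (1.0 - x_il) * e_w    # linear (ideal) mixing line
deviation = e_obs - e_ideal                   # negative values: preferential solvation by the IL

for x, d in zip(x_il, deviation):
    print(f"x_IL = {x:.1f}  deviation = {d:+.2f} kcal/mol")
```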

Relevance:

80.00%

Publisher:

Abstract:

Granite slabs are widely used as floor coverings in the form of tiles laid with industrial mortar. Recently, interest has grown in the use of non-adherent (mortar-free) systems with stone slabs, but very little has been published on the subject. The present work evaluated the physical-mechanical properties of three commercially popular granites and determined the thicknesses required for the slabs to be used as panels 200 cm wide by 300 cm long, supported on all four edges by concrete beams. The properties tested for the structural design of the granite slabs were compressive strength, three-point flexural strength, modulus of elasticity, and Poisson's ratio. The coefficient of friction and the resistance to deep abrasion were also determined in order to evaluate floor performance according to use and exposure environment. The results indicated that the Verde Labrador charnockite should be used with a thickness of 30 mm, restricted to indoor environments. The syenogranite and the monzogranite can be used both indoors and outdoors: Cinza Castelo monzogranite slabs should be 20 mm thick, and Vermelho Brasília syenogranite slabs should be 30 mm thick.
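
For readers unfamiliar with how slab thickness follows from flexural strength, the following sketch applies a classic simply-supported rectangular-plate bending formula, sigma = beta * q * b^2 / t^2; it is not the sizing procedure used in this work, and the coefficient beta, design load, strength, and safety factor are all assumed values.

```python
# Rough sizing sketch (not the paper's method): minimum thickness of a rectangular
# stone slab, simply supported on four edges under uniform load, using the classic
# plate-bending relation sigma_max = beta * q * b^2 / t^2 (Roark-type coefficient).
# Flexural strength, design load, safety factor and beta are illustrative assumptions.
import math

b = 2.0                  # short span [m] (200 cm)
a = 3.0                  # long span [m]  (300 cm)
beta = 0.49              # coefficient for a/b = 1.5, simply supported edges (assumed)
q = 2.5e3                # design uniform load [Pa] (assumed)
sigma_flex = 10.0e6      # three-point flexural strength [Pa] (assumed granite value)
sf = 2.0                 # safety factor (assumed)

sigma_allow = sigma_flex / sf
t_min = math.sqrt(beta * q * b**2 / sigma_allow)   # required thickness [m]
print(f"minimum thickness ~ {t_min * 1000:.0f} mm")
```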

Relevance:

80.00%

Publisher:

Abstract:

The ongoing innovation in the microwave transistor technologies used to implement microwave circuits has to be supported by the study and development of suitable design methodologies which, depending on the application, fully exploit the potential of the technology. After the technology for a particular application has been chosen, the circuit designer has few degrees of freedom when carrying out the design; in most cases, owing to technological constraints, foundries develop and provide customized processes optimized for a specific performance such as power, low noise, linearity, or broadband operation. For these reasons circuit design is always a compromise, a search for the best trade-off between the desired performances. This approach becomes crucial in the design of microwave systems for satellite applications; the tight space constraints demand the best performance under electrical and thermal conditions de-rated with respect to the maximum ratings of the chosen technology, in order to ensure adequate levels of reliability. In particular, this work concerns one of the most critical components in the front-end of a satellite antenna, the High Power Amplifier (HPA). The HPA is the main source of power dissipation and thus the element that weighs most heavily on the size, weight, and cost of the telecommunication apparatus; it is therefore clear that design strategies addressing the optimization of power density, efficiency, and reliability are of major concern. Many transactions and publications present methods for the design of power amplifiers, demonstrating that very good levels of output power, efficiency, and gain can be obtained. Starting from this existing knowledge, the goal of the research activities summarized in this dissertation was to develop a design methodology capable of optimizing power amplifier performance while complying with all the constraints imposed by space applications, taking the thermal behaviour into account on the same footing as output power and efficiency. After a review of existing power amplifier design theory, the first section of this work describes the effectiveness of a methodology based on the accurate control and shaping of the dynamic load line, explaining all the steps in the design of two different kinds of high power amplifiers. Taking the trade-off between the main performances and reliability as the target of the design activity, we demonstrate that the expected results can be obtained by working on the characteristics of the load line at the intrinsic terminals of the selected active device. The methodology proposed in this first part assumes that the designer has an accurate electrical model of the device available; the variety of publications on this subject shows how difficult it is to build a CAD model capable of taking into account all the non-ideal phenomena that occur when the amplifier operates at such high frequency and power levels. For this reason, and especially for the emerging Gallium Nitride (GaN) technology, the second section describes a new approach to power amplifier design based on the experimental characterization of the intrinsic load line by means of a low-frequency, high-power measurement bench. Thanks to the possibility of carrying out my Ph.D. in an academic spin-off, MEC – Microwave Electronics for Communications, the results of this activity have been applied to important research programmes requested by space agencies, with the aim of supporting technology transfer from universities to industry and of promoting science-based entrepreneurship. For these reasons the proposed design methodology is explained on the basis of extensive experimental results.
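
As background to the load-line discussion, the sketch below evaluates the standard first-order load-line (Cripps) estimates of optimum load resistance and maximum output power; the supply voltage, knee voltage, and maximum current are assumed GaN-like values, not data from this dissertation.

```python
# Illustrative sketch (standard load-line estimates, not the thesis methodology):
# first-order optimum load resistance and output power for a Class-A/AB stage
# from the device supply voltage, knee voltage and maximum drain current.
import math

v_dd = 28.0    # drain supply voltage [V] (assumed)
v_knee = 4.0   # knee voltage [V] (assumed)
i_max = 2.0    # maximum drain current [A] (assumed)

r_opt = 2.0 * (v_dd - v_knee) / i_max       # load-line (Cripps) optimum load [ohm]
p_max = (v_dd - v_knee) * i_max / 4.0       # ideal maximum output power [W]
p_dbm = 10.0 * math.log10(p_max * 1000.0)   # same power expressed in dBm

print(f"R_opt ~ {r_opt:.0f} ohm, P_max ~ {p_max:.0f} W ({p_dbm:.1f} dBm)")
```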

Relevance:

80.00%

Publisher:

Abstract:

In this thesis, we investigate mixtures of quantum degenerate Bose and Fermi gases of neutral atoms in three-dimensional optical lattices. Feshbach resonances allow interspecies interactions in these systems to be controlled precisely by preparing suitable combinations of internal atomic states and applying external magnetic fields. In this way, the system behaviour can be tuned continuously from mutual transparency to strongly interacting correlated phases, up to the stability boundary. The starting point for these investigations is the spin-polarized fermionic band insulator. The properties of this non-interacting system are fully determined by the Pauli exclusion principle for the occupation of states in the lattice. A striking demonstration of the latter is found in the antibunching of the density-density correlations of atoms released from the lattice. If bosonic atoms are added to this system, isolated heteronuclear molecules can be formed on the lattice sites via radio-frequency stimulation. The efficiency of this process hints at a modification of the atom number distribution over the lattice caused by the interspecies interaction. We then investigate systems with tunable interspecies interaction. To this end, a method is developed which allows the various contributions to the system Hamiltonian to be assessed both qualitatively and quantitatively by following the quantum phase diffusion of the bosonic matter wave. Besides a modification of the occupation number statistics, these measurements show a significant renormalization of the bosonic Hubbard parameters. The final part of the thesis considers the implications of this renormalization effect for the many-particle physics of the mixture. Here, we demonstrate how the quantum phase transition from a bosonic superfluid to a Mott insulator state is shifted towards considerably shallower lattices due to renormalization.
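
To illustrate how a renormalized tunnelling amplitude shifts the superfluid-Mott transition to shallower lattices, the sketch below combines standard deep-lattice estimates for the Hubbard parameters with the mean-field criterion U/(zJ) ≈ 5.83 at unit filling; the scattering-length parameter and the assumed 20% reduction of J are illustrative, not the measured renormalization.

```python
# Toy sketch (standard textbook estimates, not the thesis data): locate the
# mean-field superfluid-to-Mott-insulator transition of a Bose-Hubbard system
# and show how a reduced (renormalized) tunnelling J shifts it to shallower
# lattices. Deep-lattice formulas and the 20% reduction are assumptions.
import numpy as np

z = 6                      # nearest neighbours in a 3D cubic lattice
uc_over_zj = 5.83          # mean-field critical (U / z J) at unit filling
ka_s = 0.02                # (k * a_s), scattering length in units of 1/k (assumed)

s = np.linspace(5, 25, 2001)                                    # lattice depth in recoil energies
J = (4 / np.sqrt(np.pi)) * s**0.75 * np.exp(-2 * np.sqrt(s))    # J / E_R (deep-lattice estimate)
U = np.sqrt(8 / np.pi) * ka_s * s**0.75                         # U / E_R (deep-lattice estimate)

def critical_depth(j_scale):
    ratio = U / (z * j_scale * J)
    return s[np.argmin(np.abs(ratio - uc_over_zj))]

print("bare transition depth  ~", critical_depth(1.0), "E_R")
print("with J reduced by 20%  ~", critical_depth(0.8), "E_R")
```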

Relevance:

80.00%

Publisher:

Abstract:

Until now, the tracing of the explosives used in an explosion has been limited in forensic investigations, since the material is usually destroyed in the blast. Tracing of explosives is to be facilitated with the aid of identification taggants. These constitute a unique code that can be recovered and identified even after a detonation. The unambiguous information assigned to the code can thus be read out and provides the police with additional leads during an investigation. The aim of the present work is to study the behaviour of selected rare-earth elements (REE) during an explosion. An identification taggant based on lanthanide phosphates offers the possibility of combining different lanthanides within a single particle, so that a large number of codes can be generated. A change in the original composition of the code can therefore be traced very well after an explosion by analysing a single particle, allowing the suitability of the taggant to be assessed. A further objective is to verify the applicability of inductively coupled plasma mass spectrometry (ICP-MS) and of particle analysis by scanning electron microscopy (SEM) to the analysis of the detonated identification taggants. Taken together, the results of the ICP-MS analysis and the SEM particle analysis indicate a fractionation of the investigated lanthanides, or of their reaction products, after the explosion as a function of their thermal stability. The findings show an enrichment of the lanthanides with higher temperature resistance in larger particles, which implies an enrichment of the lanthanides with lower temperature resistance in smaller particles. This can be partly explained by a fractionation process that depends on the temperature stability of the lanthanides or their reaction products. The mechanisms underlying this fractionation and their mutual interplay during an explosion could not be conclusively clarified within the scope of this work. The general applicability, and where necessary the complementary use, of the two methods ICP-MS and SEM particle analysis is demonstrated in this work. With its large sampled area and high accuracy, ICP-MS is a good method for characterizing the concentration ratios of the investigated lanthanides. SEM particle analysis, on the other hand, allows an unambiguous differentiation of the element association per particle in the case of samples contaminated with other lanthanide-containing particles. In contrast to ICP-MS, it can therefore provide information on the nature and composition of the contamination. Within the investigations carried out, the sampling technique used for ICP-MS represented an ideal type of sampling. On other surfaces, however, it could lead to systematically biased results as a consequence of the fractionation into different particle sizes. To ensure the general applicability of ICP-MS to the analysis of detonated lanthanides, further detonations should be carried out on different sample surfaces and, if necessary, further sampling, digestion, and enrichment procedures should be evaluated.

Relevance:

80.00%

Publisher:

Abstract:

Sustainable yields from water wells in hard-rock aquifers are achieved when the well bore intersects fracture networks. Fracture networks are often not readily discernible at the surface. Lineament analysis using remotely sensed satellite imagery has been employed to identify surface expressions of fracturing, and a variety of image-analysis techniques have been applied successfully in “ideal” settings. An ideal setting for lineament detection is one where the influences of human development, vegetation, and climate are minimal and the hydrogeological conditions and geologic structure are known. There is not yet a well-accepted protocol for mapping lineaments, nor have different approaches been compared in non-ideal settings. A new image-processing/synthesis approach was developed to identify satellite imagery types suited to lineament analysis in non-ideal terrain. Four satellite sensors (ASTER, Landsat 7 ETM+, QuickBird, RADARSAT-1) and a digital elevation model were evaluated for lineament analysis in Boaco, Nicaragua, where the landscape is subject to varied vegetative cover, a plethora of anthropogenic features, and frequent cloud cover that limits the availability of optical satellite data. A variety of digital image processing techniques were employed and lineament interpretations were performed to obtain 12 complementary image products, which were evaluated subjectively to identify lineaments. The 12 lineament interpretations were synthesized to create a raster image of lineament zone coincidence that shows the level of agreement among the 12 interpretations. A composite lineament interpretation was then made using the coincidence raster to restrict lineament observations to areas where multiple interpretations (at least 4) agree. Nine of the 11 previously mapped faults were identified from the coincidence raster. An additional 26 lineaments were identified from the coincidence raster, and the locations of 10 were confirmed by field observation. Four manual pumping tests suggest that well productivity is higher for wells proximal to lineament features. Interpretations from RADARSAT-1 products were superior to interpretations from the other sensor products, suggesting that quality lineament interpretation in this region requires anthropogenic features to be minimized and topographic expressions to be maximized. The approach developed in this study has the potential to improve the siting of wells in non-ideal regions.
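
A minimal sketch of the coincidence-raster step described above: several binary lineament interpretations are summed cell by cell and the composite keeps only cells where at least four interpretations agree; the random rasters below merely stand in for the 12 image products.

```python
# Minimal sketch (not the study's GIS workflow): build a lineament "coincidence"
# raster by summing binary interpretation rasters and keep only cells where at
# least 4 interpretations agree.
import numpy as np

rng = np.random.default_rng(0)
interpretations = rng.random((12, 200, 200)) > 0.9   # 12 binary lineament rasters (placeholder)

coincidence = interpretations.sum(axis=0)            # agreement count per cell, 0..12
composite = coincidence >= 4                         # composite interpretation mask

print("cells with >= 4 agreeing interpretations:", int(composite.sum()))
```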

Relevance:

80.00%

Publisher:

Abstract:

The single electron transistor (SET) is a charge-based device that may complement the dominant metal-oxide-semiconductor field-effect transistor (MOSFET) technology. As the cost of scaling the MOSFET to smaller dimensions rises and its basic functionality encounters numerous challenges at dimensions smaller than 10 nm, the SET has shown the potential to become the next-generation device, operating on the basis of electron tunneling. Since the electron transfer mechanism of a SET device relies on the non-dissipative electron tunneling effect, the power consumption of a SET device is extremely low, estimated to be on the order of 10^-18 J. The objectives of this research are to demonstrate technologies that would enable the mass production of SET devices operational at room temperature and to integrate these devices on top of an active complementary-MOSFET (CMOS) substrate. To achieve these goals, two fabrication techniques are considered in this work. The Focused Ion Beam (FIB) technique is used to fabricate the islands and the tunnel junctions of the SET device. An ultraviolet (UV) light-based Nano-Imprint Lithography (NIL) process called Step-and-Flash Imprint Lithography (SFIL) is used to fabricate the interconnections of the SET devices. Combining these two techniques, a full array of SET devices is fabricated on a planar substrate. Testing and characterization of the SET devices have shown a consistent Coulomb blockade effect, an important single-electron characteristic. To realize a room-temperature SET device that functions as a logic element alongside CMOS, it is important to know the device behavior at different temperatures. Based on the theory developed for a single-island SET device, a thermal analysis is carried out on the multi-island SET device, and the observed changes in the Coulomb blockade effect are presented. The results show that multi-island SET operation depends strongly on temperature. The important parameters that determine SET operation are the effective capacitance Ceff and the tunneling resistance Rt. These two parameters determine the tunneling rate of an electron in the SET device, Γ. To obtain an accurate model of SET operation, the effects of dimensional deviations, trap states in the insulator, and background charges have to be taken into consideration. The theoretical and experimental evidence for these non-ideal effects is presented in this work.
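
As a back-of-the-envelope illustration of why room-temperature operation demands a very small effective capacitance, the sketch below uses the standard charging-energy expression E_C = e^2/(2 C_eff) and the orthodox-theory tunneling rate; the values of C_eff, R_t, and the free-energy gain are assumptions, not the measured device parameters.

```python
# Back-of-the-envelope sketch (standard orthodox-theory formulas, not the thesis
# model): charging energy of a SET island and the tunneling rate through a
# junction. C_eff, R_t and the free-energy gain below are assumed values.
import math

e = 1.602e-19        # elementary charge [C]
k_B = 1.381e-23      # Boltzmann constant [J/K]

def charging_energy(c_eff):
    """E_C = e^2 / (2 C_eff) in joules."""
    return e**2 / (2.0 * c_eff)

def tunneling_rate(delta_f, r_t, temperature):
    """Orthodox-theory rate; delta_f > 0 is the free-energy gain of the tunnel event."""
    return delta_f / (e**2 * r_t * (1.0 - math.exp(-delta_f / (k_B * temperature))))

c_eff = 0.5e-18      # effective island capacitance [F] (assumed)
r_t = 1.0e6          # tunnel resistance [ohm], well above h/e^2 (assumed)
E_C = charging_energy(c_eff)

print(f"E_C   = {E_C / e * 1000:.0f} meV  (k_B T at 300 K ~ 26 meV)")
print(f"Gamma = {tunneling_rate(E_C, r_t, 300.0):.3e} events/s")
```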

Relevance:

80.00%

Publisher:

Abstract:

Plant cell expansion is controlled by a fine-tuned balance between intracellular turgor pressure, cell wall loosening and cell wall biosynthesis. To understand these processes, it is important to gain in-depth knowledge of cell wall mechanics. Pollen tubes are tip-growing cells that provide an ideal system to study mechanical properties at the single cell level. With the available approaches it was not easy to measure important mechanical parameters of pollen tubes, such as the elasticity of the cell wall. We used a cellular force microscope (CFM) to measure the apparent stiffness of lily pollen tubes. In combination with a mechanical model based on the finite element method (FEM), this allowed us to calculate turgor pressure and cell wall elasticity, which we found to be around 0.3 MPa and 20–90 MPa, respectively. Furthermore, and in contrast to previous reports, we showed that the difference in stiffness between the pollen tube tip and the shank can be explained solely by the geometry of the pollen tube. CFM, in combination with an FEM-based model, provides a powerful method to evaluate important mechanical parameters of single, growing cells. Our findings indicate that the cell wall of growing pollen tubes has mechanical properties similar to rubber. This suggests that a fully turgid pollen tube is a relatively stiff, yet flexible cell that can react very quickly to obstacles or attractants by adjusting the direction of growth on its way through the female transmitting tissue.
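
As a rough plausibility check on these numbers (not an analysis from the paper), the thin-walled pressure-vessel formulas below estimate the wall stress in the shank and at the hemispherical tip from the reported turgor; the radius and wall thickness are assumed values, and the tip/shank difference shown arises from geometry alone.

```python
# Rough thin-shell estimate (not from the paper): wall stress and elastic strain
# in a turgid pollen tube. Turgor and wall modulus are taken from the abstract;
# radius and wall thickness are assumed values.
turgor = 0.3e6       # turgor pressure [Pa] (from the abstract)
modulus = 50.0e6     # cell wall elastic modulus [Pa] (mid-range of 20-90 MPa)
radius = 10.0e-6     # tube radius [m] (assumed)
thick = 0.2e-6       # wall thickness [m] (assumed)

hoop_stress_shank = turgor * radius / thick          # cylindrical shank
hoop_stress_tip = turgor * radius / (2.0 * thick)    # hemispherical tip: half the stress

print(f"shank wall stress ~ {hoop_stress_shank / 1e6:.1f} MPa, "
      f"elastic strain ~ {hoop_stress_shank / modulus:.0%}")
print(f"tip wall stress   ~ {hoop_stress_tip / 1e6:.1f} MPa (geometry alone halves it)")
```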

Relevance:

80.00%

Publisher:

Abstract:

We investigate parallel algorithms for the solution of the Navier–Stokes equations in space-time. For periodic solutions, the discretized problem can be written as a large non-linear system of equations. This system of equations is solved by a Newton iteration. The Newton correction is computed using a preconditioned GMRES solver. The parallel performance of the algorithm is illustrated.
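
A structural sketch of such a Newton-Krylov solver is given below: the Newton correction is obtained from a matrix-free, preconditioned GMRES solve. The small nonlinear residual and the identity preconditioner are placeholders, not the space-time Navier-Stokes discretization or preconditioner of the paper.

```python
# Structural sketch (not the paper's solver): a Newton iteration whose correction
# is computed with preconditioned GMRES. F below is a small stand-in nonlinear
# system, not the space-time Navier-Stokes discretization.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n = 100

def F(u):                                   # placeholder nonlinear residual, F(u) = 0
    return u**3 + u - 1.0

def jacobian_vector(u, v, eps=1e-7):        # matrix-free J(u) v via finite differences
    return (F(u + eps * v) - F(u)) / eps

u = np.zeros(n)                             # initial guess
precond = LinearOperator((n, n), matvec=lambda v: v, dtype=float)   # identity preconditioner (placeholder)

for it in range(20):
    r = F(u)
    if np.linalg.norm(r) < 1e-10:
        break
    J = LinearOperator((n, n), matvec=lambda v: jacobian_vector(u, v), dtype=float)
    du, info = gmres(J, -r, M=precond)      # Newton correction from preconditioned GMRES
    u += du

print(f"converged in {it} iterations, residual {np.linalg.norm(F(u)):.2e}")
```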

Relevance:

80.00%

Publisher:

Abstract:

This paper examines epistemic frameworks and their modus operandi with respect to explanatory models in developmental psychology. First, it considers the features of the splitting epistemic framework and its role in the 'legitimate' way of explaining psychological competences, abilities, and functions. Second, it shows the difficulties this model, still predominant among researchers, faces when trying to account for the emergence of behaviours, conceptual systems, and functions that are strictly novel, that is, that are given neither within the mental apparatus nor outside the process itself. Third, it explores the characteristics of a systemic explanation for such novelties, framing it within a relational ontology and epistemology. The characteristics of the systemic model are identified and its effectiveness is assessed with respect to the emergence of new conceptual systems and psychological functions in development. Finally, two versions of this explanation are identified: the complex system of constructivism and the Vygotskian perspective on explanation, establishing a sharp difference from a non-dialectical system of interactions.

Relevance:

80.00%

Publisher:

Abstract:

A generic bio-inspired adaptive architecture for image compression, suitable for implementation in embedded systems, is presented. The architecture allows the system to be tuned during its calibration phase, with an evolutionary algorithm responsible for making the system evolve towards the required performance. A prototype has been implemented in a Xilinx Virtex-5 FPGA featuring an adaptive wavelet transform core aimed at improving image compression for specific types of images. An Evolution Strategy has been chosen as the search algorithm, and its typical genetic operators have been adapted to allow a hardware-friendly implementation. HW/SW partitioning issues are also considered: a high-level description of the algorithm is profiled, which validates the proposed resource allocation in the device fabric. To check the robustness of the system and its adaptation capabilities, different types of images have been selected as validation patterns. A direct application of such a system is its deployment in an environment unknown at design time, letting the calibration phase adjust the system parameters so that it performs efficient image compression. This prototype implementation may also serve as an accelerator for the automatic design of evolved transform coefficients, which are later synthesized and implemented in a non-adaptive system on the final implementation device, whether it is a HW- or SW-based computing platform. The architecture has been built in a modular way so that it can easily be extended to adapt other types of image processing cores. Details of this pluggable-component approach are also given in the paper.
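
A minimal sketch of the kind of search loop involved: a (1+λ) Evolution Strategy mutating a small coefficient vector and keeping the best offspring. The fitness proxy, mutation strength, and offspring count are illustrative assumptions and bear no relation to the hardware implementation described in the paper.

```python
# Minimal (1+lambda) Evolution Strategy sketch (not the paper's hardware design):
# evolve a small set of transform coefficients to minimise a reconstruction-error
# proxy on a training image. Fitness, mutation scale and lambda are assumptions.
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((64, 64))                 # placeholder training image
target = np.array([0.5, 0.5])                # "ideal" 2-tap filter (placeholder)

def fitness(coeffs):
    # Proxy cost: mismatch between the evolved 2-tap filter and the target filter.
    filtered = coeffs[0] * image[:, :-1] + coeffs[1] * image[:, 1:]
    reference = target[0] * image[:, :-1] + target[1] * image[:, 1:]
    return np.mean((filtered - reference) ** 2)

parent = rng.normal(0.0, 1.0, size=2)        # initial coefficient guess
sigma, lam = 0.2, 8                          # mutation strength and offspring count

for gen in range(100):
    offspring = parent + rng.normal(0.0, sigma, size=(lam, 2))   # mutate
    costs = np.array([fitness(o) for o in offspring])
    best = offspring[costs.argmin()]
    if fitness(best) < fitness(parent):      # (1+lambda) selection
        parent = best

print("evolved coefficients:", np.round(parent, 3))
```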