944 results for non-ideal system
Abstract:
To date, the forensic investigation of explosions has offered only limited means of tracing the explosives used, since the material is as a rule destroyed in the explosion. The tracing of explosives is to be facilitated with the aid of identification taggants. These represent a unique code that can be recovered and identified even after a detonation. The unique information assigned to the code can thus be read out, providing the police with additional leads during an investigation. The aim of the present work is to investigate the behaviour of selected rare earth elements (REE) during an explosion. An identification taggant based on lanthanide phosphates offers the possibility of combining different lanthanides within a single particle, whereby a large number of codes can be generated. A change in the initial composition of the code can therefore be traced very well even after an explosion by analysing a single particle, and the suitability of the taggant can thus be assessed. A further objective is to verify the applicability of inductively coupled plasma mass spectrometry (ICP-MS) and of particle analysis by scanning electron microscopy (SEM) for the analysis of the detonated identification taggants. Taken together, the results of the ICP-MS analysis and of the SEM particle analysis point to a fractionation of the investigated lanthanides, or of their reaction products, after the explosion as a function of their thermal stability. The findings show an enrichment of the lanthanides with higher temperature resistance in larger particles, which implies an enrichment of lanthanides with lower temperature resistance in smaller particles.
This can be partly explained by a fractionation process depending on the temperature stability of the lanthanides or their reaction products. The mechanisms underlying the fractionation, and their mutual interaction during an explosion, could not be conclusively clarified within the scope of this work. The general applicability, and where necessary the complementary use, of the two methods ICP-MS and SEM particle analysis is demonstrated in this work. With its large sampled area and high accuracy, ICP-MS is a good method for characterising the concentration ratios of the investigated lanthanides. SEM particle analysis, in contrast, allows an unambiguous differentiation of the element association per particle in the case of contamination of the samples with other lanthanide-containing particles. Unlike ICP-MS, it can therefore provide information on the nature and composition of the contamination. Within the investigations carried out, the sampling technique applied for ICP-MS represented an ideal type of sampling. On other surfaces, however, it could lead to systematically biased results as a consequence of the fractionation into different particle sizes. To ensure the general applicability of ICP-MS for the analysis of detonated lanthanides, further detonations should be carried out on different sample surfaces, and further sampling, digestion and preconcentration procedures should be evaluated where appropriate.
Abstract:
Sustainable yields from water wells in hard-rock aquifers are achieved when the well bore intersects fracture networks. Fracture networks are often not readily discernable at the surface. Lineament analysis using remotely sensed satellite imagery has been employed to identify surface expressions of fracturing, and a variety of image-analysis techniques have been successfully applied in “ideal” settings. An ideal setting for lineament detection is where the influences of human development, vegetation, and climatic situations are minimal and hydrogeological conditions and geologic structure are known. There is not yet a well-accepted protocol for mapping lineaments nor have different approaches been compared in non-ideal settings. A new approach for image-processing/synthesis was developed to identify successful satellite imagery types for lineament analysis in non-ideal terrain. Four satellite sensors (ASTER, Landsat7 ETM+, QuickBird, RADARSAT-1) and a digital elevation model were evaluated for lineament analysis in Boaco, Nicaragua, where the landscape is subject to varied vegetative cover, a plethora of anthropogenic features, and frequent cloud cover that limit the availability of optical satellite data. A variety of digital image processing techniques were employed and lineament interpretations were performed to obtain 12 complementary image products that were evaluated subjectively to identify lineaments. The 12 lineament interpretations were synthesized to create a raster image of lineament zone coincidence that shows the level of agreement among the 12 interpretations. A composite lineament interpretation was made using the coincidence raster to restrict lineament observations to areas where multiple interpretations (at least 4) agree. Nine of the 11 previously mapped faults were identified from the coincidence raster. An additional 26 lineaments were identified from the coincidence raster, and the locations of 10 were confirmed by field observation. 
Four manual pumping tests suggest that well productivity is higher for wells proximal to lineament features. Interpretations from RADARSAT-1 products were superior to interpretations from other sensor products, suggesting that quality lineament interpretation in this region requires anthropogenic features to be minimized and topographic expressions to be maximized. The approach developed in this study has the potential to improve the siting of wells in non-ideal regions.
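The coincidence-raster synthesis described above can be sketched in a few lines. This is a minimal illustration: the count of 12 interpretations and the agreement threshold of 4 come from the abstract, while the grid size and the synthetic binary rasters are hypothetical stand-ins for the real interpreted imagery.

```python
import numpy as np

def coincidence_raster(interpretations):
    """Stack binary lineament rasters and count, per cell, how many agree."""
    return np.sum(np.stack(interpretations).astype(int), axis=0)

def composite(coincidence, min_agree=4):
    """Keep only cells where at least `min_agree` interpretations coincide."""
    return coincidence >= min_agree

# Twelve hypothetical 5x5 binary interpretations; a lineament along row 2
# is marked by 9 of them, plus scattered random disagreement elsewhere.
rng = np.random.default_rng(0)
interps = []
for k in range(12):
    r = rng.random((5, 5)) < 0.1        # random clutter
    if k < 9:
        r[2, :] = True                  # 9 of 12 interpretations agree here
    interps.append(r)

coin = coincidence_raster(interps)      # agreement level per cell, 0..12
mask = composite(coin, min_agree=4)     # composite lineament interpretation
```

The composite mask retains the consistently mapped lineament while suppressing cells marked by only a few interpretations.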
Abstract:
The single-electron transistor (SET) is a charge-based device that may complement the dominant metal-oxide-semiconductor field-effect transistor (MOSFET) technology. As the cost of scaling MOSFETs to smaller dimensions rises and the basic functionality of the MOSFET encounters numerous challenges at dimensions below 10 nm, the SET has shown the potential to become the next-generation device, operating on the tunneling of electrons. Since the electron transfer mechanism of a SET is based on the non-dissipative electron tunneling effect, the power consumption of a SET is extremely low, estimated to be on the order of 10^-18 J. The objectives of this research are to demonstrate technologies that would enable the mass production of SET devices operational at room temperature, and to integrate these devices on top of an active complementary-MOSFET (CMOS) substrate. To achieve these goals, two fabrication techniques are considered in this work. The focused ion beam (FIB) technique is used to fabricate the islands and the tunnel junctions of the SET device. An ultraviolet (UV) light-based nanoimprint lithography (NIL) process called Step-and-Flash Imprint Lithography (SFIL) is used to fabricate the interconnections of the SET devices. Combining these two techniques, a full array of SET devices is fabricated on a planar substrate. Testing and characterization of the SET devices have shown a consistent Coulomb blockade effect, an important single-electron characteristic. To realize a room-temperature SET device that functions as a logic device alongside CMOS, it is important to know the device behavior at different temperatures. Based on the theory developed for a single-island SET device, a thermal analysis is carried out on the multi-island SET device, and the observed changes in the Coulomb blockade effect are presented. The results show that multi-island SET device operation depends strongly on temperature.
The important parameters that determine SET operation are the effective capacitance Ceff and the tunneling resistance Rt. These two parameters determine the tunneling rate of an electron in the SET device, Γ. To obtain an accurate model of SET operation, the effects of deviations in dimensions, trap states in the insulation, and the background charge effect have to be taken into consideration. The theoretical and experimental evidence for these non-ideal effects is presented in this work.
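As a rough illustration of how Ceff and Rt enter the picture, the sketch below computes the single-electron charging energy and the orthodox-theory tunneling rate. The formulas are the standard textbook ones; the device values (1 aF, 1 MΩ) are hypothetical and not taken from this work.

```python
import math

E = 1.602176634e-19   # elementary charge [C]
K_B = 1.380649e-23    # Boltzmann constant [J/K]

def charging_energy(c_eff):
    """Single-electron charging energy E_C = e^2 / (2 C_eff), in joules.

    Room-temperature operation requires E_C to dominate the thermal
    energy k_B * T, which pushes C_eff towards attofarads and below.
    """
    return E**2 / (2.0 * c_eff)

def tunneling_rate(delta_f, r_t, temp):
    """Orthodox-theory rate: Gamma = (dF / e^2 R_t) / (1 - exp(-dF / kT))."""
    if delta_f == 0.0:
        return K_B * temp / (E**2 * r_t)   # continuous limit as dF -> 0
    return (delta_f / (E**2 * r_t)) / (1.0 - math.exp(-delta_f / (K_B * temp)))

# Hypothetical device: 1 aF effective capacitance, 1 MOhm tunnel resistance.
c_eff, r_t = 1e-18, 1e6
e_c = charging_energy(c_eff)
kT_room = K_B * 300.0   # thermal energy at 300 K, for comparison with E_C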
Abstract:
Plant cell expansion is controlled by a fine-tuned balance between intracellular turgor pressure, cell wall loosening and cell wall biosynthesis. To understand these processes, it is important to gain in-depth knowledge of cell wall mechanics. Pollen tubes are tip-growing cells that provide an ideal system to study mechanical properties at the single cell level. With the available approaches it was not easy to measure important mechanical parameters of pollen tubes, such as the elasticity of the cell wall. We used a cellular force microscope (CFM) to measure the apparent stiffness of lily pollen tubes. In combination with a mechanical model based on the finite element method (FEM), this allowed us to calculate turgor pressure and cell wall elasticity, which we found to be around 0.3 MPa and 20–90 MPa, respectively. Furthermore, and in contrast to previous reports, we showed that the difference in stiffness between the pollen tube tip and the shank can be explained solely by the geometry of the pollen tube. CFM, in combination with an FEM-based model, provides a powerful method to evaluate important mechanical parameters of single, growing cells. Our findings indicate that the cell wall of growing pollen tubes has mechanical properties similar to rubber. This suggests that a fully turgid pollen tube is a relatively stiff, yet flexible cell that can react very quickly to obstacles or attractants by adjusting the direction of growth on its way through the female transmitting tissue.
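As a back-of-the-envelope companion to the numbers above, a thin-walled pressure vessel estimate relates the reported turgor pressure to the stress carried by the cell wall. The hoop-stress formula is the standard one; the radius and wall thickness below are assumed illustrative values, not measurements from this work.

```python
def hoop_stress(p, r, t):
    """Thin-walled cylinder hoop stress, sigma = p * r / t, in pascals."""
    return p * r / t

p = 0.3e6    # turgor pressure from the abstract [Pa]
r = 8e-6     # ~8 um tube radius (assumed illustrative value)
t = 0.2e-6   # ~200 nm wall thickness (assumed illustrative value)

sigma = hoop_stress(p, r, t)   # wall stress implied by the turgor pressure
```

Dividing such a stress by a wall modulus in the reported 20-90 MPa range gives strains of order tens of percent, consistent with the picture of a turgid but rubber-like, flexible cell.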
Abstract:
We investigate parallel algorithms for the solution of the Navier–Stokes equations in space-time. For periodic solutions, the discretized problem can be written as a large non-linear system of equations. This system of equations is solved by a Newton iteration. The Newton correction is computed using a preconditioned GMRES solver. The parallel performance of the algorithm is illustrated.
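The solver structure described above, a Newton iteration whose corrections are computed by GMRES, can be sketched for a generic non-linear system. This is an illustrative Jacobian-free Newton-Krylov loop on a toy problem, not the actual space-time Navier-Stokes discretization, and it omits the preconditioner.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_gmres(residual, u0, tol=1e-8, max_newton=20, fd_eps=1e-7):
    """Solve F(u) = 0 by Newton iteration with a GMRES inner solver.

    Each Newton correction J(u) du = -F(u) is computed matrix-free: the
    Jacobian-vector product is approximated by a finite difference.
    """
    u = u0.copy()
    for _ in range(max_newton):
        f = residual(u)
        if np.linalg.norm(f) < tol:
            break
        def jv(v):  # J(u) v ~ (F(u + eps*v) - F(u)) / eps
            return (residual(u + fd_eps * v) - f) / fd_eps
        J = LinearOperator((u.size, u.size), matvec=jv)
        du, _ = gmres(J, -f, atol=1e-10)
        u = u + du
    return u

# Toy non-linear system standing in for the discretized space-time equations:
F = lambda u: np.array([u[0]**2 + u[1] - 3.0, u[0] + u[1]**2 - 5.0])
root = newton_gmres(F, np.array([1.0, 1.0]))   # converges to (1, 2)
```

In the parallel setting of the paper, the residual evaluation and the GMRES matrix-vector products are the operations distributed across processors.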
Abstract:
This paper examines epistemic frameworks and their modus operandi with respect to explanatory models in developmental psychology. First, it considers the features of the splitting epistemic framework and its role in the "legitimate" way of explaining psychological competences, skills and functions. Second, it shows the difficulties of this model, still predominant among researchers, when it attempts to account for the emergence of behaviours, conceptual systems and functions that are strictly novel, that is, that are given neither within the mental apparatus nor outside the process itself. Third, it explores the characteristics of a systemic explanation for such novelties, framing it within a relational ontology and epistemology. The characteristics of the systemic model are identified and its effectiveness is evaluated with respect to the emergence of new conceptual systems and psychological functions in development. Finally, two versions of this explanation are identified: the complex system of constructivism and the Vygotskian perspective on explanation, establishing a sharp difference from a non-dialectical system of interactions.
Abstract:
A generic bio-inspired adaptive architecture for image compression, suitable for implementation in embedded systems, is presented. The architecture allows the system to be tuned during its calibration phase. An evolutionary algorithm is responsible for making the system evolve towards the required performance. A prototype has been implemented in a Xilinx Virtex-5 FPGA featuring an adaptive wavelet transform core directed at improving image compression for specific types of images. An Evolution Strategy has been chosen as the search algorithm, and its typical genetic operators have been adapted to allow for a hardware-friendly implementation. HW/SW partitioning issues are also considered after a high-level description of the algorithm is profiled, which validates the proposed resource allocation in the device fabric. To check the robustness of the system and its adaptation capabilities, different types of images have been selected as validation patterns. A direct application of such a system is its deployment in an environment unknown at design time, letting the calibration phase adjust the system parameters so that it performs efficient image compression. This prototype implementation may also serve as an accelerator for the automatic design of evolved transform coefficients, which are later synthesized and implemented in a non-adaptive system in the final implementation device, whether a HW- or SW-based computing device. The architecture has been built in a modular way so that it can easily be extended to adapt other types of image processing cores. Details on this pluggable-component point of view are also given in the paper.
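The search loop of an Evolution Strategy of the kind mentioned above can be sketched as follows. The (1+1) scheme, the 1/5 success rule for step-size adaptation, and the toy fitness function are illustrative assumptions, not the operators actually implemented on the FPGA.

```python
import random

def evolve(fitness, n_coeffs, sigma=0.1, generations=200, seed=1):
    """Minimal (1+1) Evolution Strategy with a 1/5-success-rule step size.

    The genome is a vector of real-valued transform coefficients, mutated
    by Gaussian perturbation; `fitness` is maximized greedily.
    """
    rng = random.Random(seed)
    parent = [rng.uniform(-1.0, 1.0) for _ in range(n_coeffs)]
    best = fitness(parent)
    successes = 0
    for g in range(1, generations + 1):
        child = [c + rng.gauss(0.0, sigma) for c in parent]
        score = fitness(child)
        if score >= best:               # (1+1) selection: keep the better one
            parent, best = child, score
            successes += 1
        if g % 10 == 0:                 # adapt mutation strength (1/5 rule)
            sigma *= 1.5 if successes > 2 else 0.7
            successes = 0
    return parent, best

# Toy fitness standing in for compression quality of a wavelet filter:
target = [0.5, -0.25, 0.125, 0.0625]
fit = lambda c: -sum((a - b) ** 2 for a, b in zip(c, target))
coeffs, score = evolve(fit, len(target))
```

In the hardware version described in the paper, the fitness evaluation (compressing validation images with the candidate coefficients) is the step mapped onto the FPGA fabric.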
Abstract:
To date, big data applications have focused on the store-and-process paradigm. In this paper we describe an initiative to deal with big data applications for continuous streams of events. In many emerging applications, the volume of data being streamed is so large that the traditional 'store-then-process' paradigm is either not suitable or too inefficient. Moreover, soft real-time requirements might severely limit the engineering solutions. Many scenarios fit this description. In network security for cloud data centres, for instance, very high volumes of IP packets and events from sensors at firewalls, network switches, routers and servers need to be analyzed so that attacks are detected in minimal time, in order to limit the effect of the malicious activity on the IT infrastructure. Similarly, in the fraud department of a credit card company, payment requests should be processed online, as quickly as possible, in order to provide meaningful results in real time. An ideal system would detect fraud during the authorization process, which lasts hundreds of milliseconds, and deny the payment authorization, minimizing the damage to the user and the credit card company.
Abstract:
Fiber optic sensors offer advantages for electrical current and magnetic field measurement. Despite the advantages of using optical fiber, undesirable effects present in real, non-ideal optical fibers have to be taken into account. In both telecommunication and sensing applications the presence of inherent and induced birefringence is crucial, since birefringence may cause an undesirable change in the polarization state. To compensate for the linear birefringence, a promising method has been chosen that employs orthogonal polarization conjugation in the back-propagation direction of the light wave in the fiber. A study and a simulation of an experimental setup are carried out, showing a significant improvement in sensitivity.
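The compensation principle can be checked with Jones calculus. For any 2x2 matrix J, the identity J^T Ω J = det(J) Ω holds with Ω = [[0, 1], [-1, 0]], so a round trip through the fiber and an orthogonal-polarization-conjugating (Faraday) mirror is independent of the fiber's linear birefringence. A minimal numerical sketch (the specific retardance and axis angle are arbitrary):

```python
import numpy as np

def linear_retarder(delta, theta):
    """Jones matrix of a linear retarder: retardance delta, fast axis at theta."""
    r = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    d = np.array([[np.exp(-1j * delta / 2), 0],
                  [0, np.exp(1j * delta / 2)]])
    return r @ d @ r.T

# Faraday mirror: reflection plus 90-degree polarization rotation.
FM = np.array([[0.0, 1.0], [-1.0, 0.0]])

def round_trip(J):
    """Forward through fiber J, Faraday mirror, back through the same fiber.

    For a reciprocal element the backward pass is the transpose of J, and
    J.T @ FM @ J = det(J) * FM for any 2x2 J, so the result does not depend
    on the fiber's birefringence parameters at all.
    """
    return J.T @ FM @ J

# Arbitrary (unknown) fiber birefringence:
J = linear_retarder(1.234, 0.77)
M = round_trip(J)   # proportional to FM: returned polarization is always
                    # orthogonal to the launched one, whatever J is
```

This is exactly why the method compensates the linear birefringence: the accumulated retardance cancels on the return pass regardless of its magnitude or orientation.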
Abstract:
Previous publications (Miñano et al., 2011) have shown that, using a Spherical Geodesic Waveguide (SGW), super-resolution up to λ/500 can be achieved close to a set of discrete frequencies. These frequencies are directly connected with the well-known Schumann resonance frequencies of spherically symmetric systems. However, the SGW has so far been presented as an ideal system, in which technological obstacles, manufacturing feasibility and their influence on the final results were not taken into account. In order to prove the concept of super-resolution experimentally, the SGW is modified according to the manufacturing requirements and technological limitations. Each manufacturing process imposes imperfections that can affect the experimental results. Here, we analyze the influence of the manufacturing limitations on the super-resolution properties of the SGW. In addition to the theoretical work, experimental results are presented as well.
Abstract:
The main problem of pedestrian dead-reckoning (PDR) using only a body-attached inertial measurement unit is the accumulation of heading errors. The heading provided by magnetometers inside buildings is in general not reliable and is therefore commonly not used. Recently, a new method called heuristic drift elimination (HDE) was proposed that minimises the heading error when navigating in buildings. It assumes that most buildings have their corridors parallel to each other or intersecting at right angles, so that most of the time the person walks along a straight path with a heading constrained to one of four possible directions. In this article we study the performance of HDE-based methods in complex buildings, i.e. with pathways also oriented at 45°, long curved corridors, and wide areas where non-oriented motion is possible. We explain how the performance of the original HDE method can deteriorate in complex buildings, and how severe errors can appear in the case of false matches with the building's dominant directions. Although magnetic compassing indoors has a chaotic behaviour, in this article we analyse large data sets in order to study the potential of magnetic compassing to estimate the absolute yaw angle of a walking person. Apart from these analyses, this article also proposes an improved HDE method called Magnetically-aided Improved Heuristic Drift Elimination (MiHDE), implemented over a PDR framework that uses foot-mounted inertial navigation with an extended Kalman filter (EKF). The EKF is fed with the MiHDE-estimated orientation error and gyro bias corrections, as well as the confidence in those corrections. We experimentally evaluated the performance of the proposed MiHDE-based PDR method, comparing it with the original HDE implementation. Results show that both methods perform very well in ideal orthogonal narrow-corridor buildings, while MiHDE outperforms HDE on non-ideal trajectories (e.g. curved paths) and is also robust against potential false dominant-direction matches.
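The core heuristic of HDE, feeding back a heading correction towards the nearest dominant building direction only when the deviation is small, can be sketched as follows. The 10° threshold and the feedback gain are illustrative assumptions, not the values used in the paper.

```python
import math

# The four dominant directions of an orthogonal-corridor building [rad].
DOMINANT = [math.radians(a) for a in (0.0, 90.0, 180.0, -90.0)]

def hde_correction(heading, threshold=math.radians(10.0), gain=0.2):
    """Minimal heuristic-drift-elimination step.

    If the current heading lies within `threshold` of a dominant direction,
    return a correction proportional to the deviation (the walker is assumed
    to be following that corridor); otherwise, e.g. on a curved path or in a
    wide open area, apply no correction at all.
    """
    def wrap(a):  # wrap an angle to (-pi, pi]
        return math.atan2(math.sin(a), math.cos(a))
    e = min((wrap(heading - d) for d in DOMINANT), key=abs)
    return -gain * e if abs(e) <= threshold else 0.0

# Walking roughly along the 90-degree corridor with 4 degrees of drift:
corr = hde_correction(math.radians(94.0))   # small corrective feedback
```

A false dominant-direction match, the failure mode discussed above, corresponds to this function pulling the heading towards the wrong member of `DOMINANT`, e.g. on a long 45°-oriented pathway.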
Abstract:
A non-local gradient-based damage formulation within a geometrically non-linear setting is presented. The hyperelastic constitutive response at the local material point level is governed by a strain energy that is additively composed of contributions from an isotropic matrix and an anisotropic fibre-reinforced material. The inelastic constitutive response is governed by a scalar [1-d]-type damage formulation, where only the anisotropic elastic part is assumed to be affected by the damage. Following the concept in Dimitrijević and Hackl [28], the local free energy function is enhanced by a gradient term. This term essentially contains the gradient of the non-local damage variable, which is itself introduced as an additional independent variable. In order to guarantee the equivalence between the local and non-local damage variables, a penalisation term is incorporated within the free energy function. Based on the principle of minimum total potential energy, a coupled system of Euler–Lagrange equations, i.e., the balance of linear momentum and the balance of the non-local damage field, is obtained and solved in weak form. The resulting coupled, highly non-linear system of equations is symmetric and can conveniently be solved by a standard incremental-iterative Newton–Raphson-type solution scheme. Several three-dimensional displacement- and force-driven boundary value problems, partially motivated by biomechanical applications, highlight the mesh-objective characteristics and constitutive properties of the model and illustratively underline the capabilities of the proposed formulation.
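A plausible form of the enhanced free energy described above, sketched in the style of the Dimitrijević-Hackl gradient enhancement: the symbols c_d and β_d for the regularisation and penalty parameters, and φ for the non-local damage variable, are assumptions for illustration, not necessarily the paper's exact notation.

```latex
\psi
= \underbrace{\psi_{\mathrm{iso}}(\boldsymbol{C})}_{\text{isotropic matrix}}
+ \underbrace{[1-d]\,\psi_{\mathrm{aniso}}(\boldsymbol{C},\boldsymbol{a}_0)}_{\text{damaged fibre part}}
+ \underbrace{\tfrac{c_d}{2}\,\lVert \nabla \phi \rVert^{2}}_{\text{gradient term}}
+ \underbrace{\tfrac{\beta_d}{2}\,[\phi - d]^{2}}_{\text{penalty term}}
```

Here only the anisotropic fibre contribution is degraded by the local damage d, the gradient term regularises the non-local field φ, and the penalty term enforces φ ≈ d, matching the equivalence requirement stated in the abstract.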