957 results for Non ideal dynamic system


Relevance:

100.00%

Publisher:

Abstract:

A lightweight Java application suite has been developed and deployed allowing collaborative learning between students and tutors at remote locations. Students can engage in group activities online and also collaborate with tutors. A generic Java framework has been developed and applied to electronics, computing and mathematics education. The applications are respectively: (a) a digital circuit simulator, which allows students to collaborate in building simple or complex electronic circuits; (b) a Java programming environment whose paradigm is behavioural-based robotics; and (c) a differential equation solver useful in modelling complex and nonlinear dynamic systems. Each student sees a common shared window to which text or graphical objects may be added and which can then be shared online. A built-in chat room supports collaborative dialogue. Students can work either in collaborative groups or in teams as directed by the tutor. This paper summarises the technical architecture of the system as well as the pedagogical implications of the suite. A report of student evaluation, distilled from twelve months of use, is also presented. We intend this suite to facilitate learning between groups at one or many institutions and to facilitate international collaboration. We also intend to use the suite as a tool to research the establishment and behaviour of collaborative learning groups. We shall make our software freely available to interested researchers.
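At its core, the differential-equation-solver application described above is a numerical integrator for nonlinear dynamic systems. The abstract does not specify the integration scheme, so the Python fragment below is only an illustrative sketch of a classical fourth-order Runge-Kutta step applied to a standard nonlinear system (the suite itself is implemented in Java, and all names here are invented for illustration).

    import numpy as np

    def rk4_step(f, t, y, h):
        # One classical Runge-Kutta step for y' = f(t, y).
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    # Example: Van der Pol oscillator, a textbook nonlinear dynamic system.
    def van_der_pol(t, y, mu=2.0):
        return np.array([y[1], mu * (1 - y[0] ** 2) * y[1] - y[0]])

    y, t, h = np.array([1.0, 0.0]), 0.0, 0.01
    for _ in range(1000):
        y = rk4_step(van_der_pol, t, y, h)
        t += h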

Relevance:

100.00%

Publisher:

Abstract:

We present an analytical solution of a mixed boundary value problem for an unbounded 2D doubly periodic domain, which is a model of a composite material with mixed imperfect interface conditions. We find the effective conductivity of the composite material with these interface conditions, and we also give a numerical analysis of several properties of the solution, such as the temperature and flux fields.
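For orientation only, the classical first-order (Maxwell-type) estimate for the effective conductivity of a 2D composite with circular inclusions of conductivity σ_i and area fraction f in a matrix of conductivity σ_m, assuming perfect interface contact, is

    \sigma_{\mathrm{eff}} \approx \sigma_m \,\frac{1 + f\beta}{1 - f\beta},
    \qquad
    \beta = \frac{\sigma_i - \sigma_m}{\sigma_i + \sigma_m}.

This is standard background, not the closed-form solution with mixed imperfect interfaces derived in the paper.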

Relevance:

100.00%

Publisher:

Abstract:

Abstract: The structural build-up of fresh cement-based materials has a great impact on their structural performance after casting. Accordingly, the mixture design should be tailored to adapt the kinetics of build-up to the application at hand. The rate of structural build-up of cement-based suspensions at rest is a complex phenomenon affected by both physical and chemical structuration processes. The structuration kinetics depend strongly on the mixture composition and testing parameters, as well as on the shear history. Accurate measurements of build-up rely on the efficiency of the applied pre-shear regime in achieving an initial well-dispersed state, as well as on the stress applied during the liquid-solid transition. Studying the physical and chemical mechanisms of build-up of cement suspensions at rest can enhance the fundamental understanding of this phenomenon and therefore allow better control of the rheological and time-dependent properties of cement-based materials. This research focused on the use of dynamic rheology to investigate the kinetics of structural build-up of fresh cement pastes. The research program was conducted in three phases. The first phase was devoted to evaluating the dispersing efficiency of various disruptive shear techniques. The investigated shearing profiles included rotational, oscillatory, and combinations of both. The initial and final states of the suspension's structure, before and after disruption, were determined by applying a small-amplitude oscillatory shear (SAOS). The difference between the viscoelastic values before and after disruption was used to express the degree of dispersion. An efficient technique to disperse concentrated cement suspensions was developed. The second phase aimed to establish a rheometric approach to dissociate and monitor the individual physical and chemical mechanisms of build-up of cement paste. In this regard, non-destructive dynamic rheometry was used to investigate the evolution of both the storage modulus and the phase angle of inert calcium carbonate and cement suspensions. Two independent build-up indices were proposed. The structural build-up of various cement suspensions made with different cement contents, silica fume replacement percentages, and high-range water-reducer dosages was evaluated using the proposed indices. These indices were then compared to the well-known thixotropic index (Athix). Furthermore, the proposed indices were correlated to the decay in lateral pressure determined for various cement pastes cast in a pressure column. The proposed pre-shearing protocol and build-up indices (phases 1 and 2) were then used in phase 3 to investigate the effect of mixture parameters on the kinetics of structural build-up. The investigated parameters included cement content and fineness, alkali sulfate content, and temperature of the cement suspension. Zeta potential, calorimetric, and spectrometric measurements were performed to explore the corresponding microstructural changes in cement suspensions, such as inter-particle cohesion, rate of Brownian flocculation, and nucleation rate. A model linking the build-up indices and the microstructural characteristics was developed to predict the build-up behaviour of cement-based suspensions. The obtained results showed that oscillatory shear may have a greater effect on dispersing concentrated cement suspensions than rotational shear.
Furthermore, increasing the induced shear strain was found to enhance the breakdown of the suspension's structure up to a critical point, after which thickening effects dominate. An effective dispersing method is then proposed. It consists of applying a rotational shear around the transitional value between the linear and non-linear variations of the apparent viscosity with shear rate, followed by an oscillatory shear at the crossover shear strain and a high angular frequency of 100 rad/s. Investigating the evolution of the viscoelastic properties of inert calcite-based and cement suspensions allowed two independent build-up indices to be established. The first (the percolation time) represents the rest time needed to form the elastic network, while the second (the rigidification rate) describes the increase in the stress-bearing capacity of the formed network due to cement hydration. Results showed that combining the percolation time and the rigidification rate can provide deeper insight into the structuration process of cement suspensions. Furthermore, these indices were found to be well correlated with the decay in the lateral pressure of cement suspensions. The variation of the proposed build-up indices with mixture parameters showed that the percolation time is most likely controlled by the frequency of Brownian collisions, the distance between dispersed particles, and the intensity of cohesion between cement particles, whereas a higher rigidification rate can be secured by increasing the number of contact points per unit volume of paste, the nucleation rate of cement hydrates, and the intensity of inter-particle cohesion.
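As a rough illustration of how two such indices could be extracted from small-amplitude oscillatory data, the Python sketch below fits the time evolution of the storage modulus G'(t); the threshold value, units and data are placeholders chosen for illustration, not the calibration used in the thesis.

    import numpy as np

    def build_up_indices(t_min, g_prime_pa, percolation_threshold=10.0):
        # Percolation time: rest time at which G' first exceeds a threshold,
        # i.e. an elastic network has formed. Rigidification rate: slope of
        # G' versus time after percolation (Pa/min). Illustrative only.
        t = np.asarray(t_min, dtype=float)
        g = np.asarray(g_prime_pa, dtype=float)
        above = np.where(g >= percolation_threshold)[0]
        if above.size == 0:
            return None, None          # network never percolates in the window
        t_perc = t[above[0]]
        slope, _ = np.polyfit(t[above], g[above], 1)
        return t_perc, slope

    # Hypothetical SAOS rest-time sweep (minutes, Pa).
    t_perc, rig_rate = build_up_indices([0, 5, 10, 15, 20, 30],
                                        [2, 6, 15, 40, 80, 160])
    print(t_perc, rig_rate)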

Relevance:

100.00%

Publisher:

Abstract:

Leafy greens are an essential part of a healthy diet. Because of their health benefits, production and consumption of leafy greens have increased considerably in the U.S. over the last few decades. However, leafy greens have also been associated with a large number of foodborne disease outbreaks in recent years. The overall goal of this dissertation was to use the current knowledge of predictive models and available data to understand the growth, survival, and death of enteric pathogens in leafy greens at pre- and post-harvest levels. Temperature plays a major role in the growth and death of bacteria in foods. A growth-death model was developed for Salmonella and Listeria monocytogenes in leafy greens for the varying temperature conditions typically encountered in the supply chain. The developed growth-death models were validated using experimental dynamic time-temperature profiles available in the literature. Furthermore, these growth-death models for Salmonella and Listeria monocytogenes, together with a similar model for E. coli O157:H7, were used to predict the growth of these pathogens in leafy greens during transportation without temperature control. Refrigeration of leafy greens serves to increase shelf-life and mitigate bacterial growth, but storage of foods at lower temperatures also increases the storage cost. Nonlinear programming was used to optimize the storage temperature of leafy greens in the supply chain, minimizing the storage cost while maintaining the desired levels of sensory quality and microbial safety. Most of the outbreaks associated with consumption of leafy greens contaminated with E. coli O157:H7 have occurred during July-November in the U.S. A dynamic system model consisting of subsystems and inputs (soil, irrigation, cattle, wildlife, and rainfall) simulating a farm in a major leafy-greens-producing area in California was developed. The model was simulated incorporating the events of planting, irrigation, harvesting, ground preparation for the new crop, contamination of soil and plants, and survival of E. coli O157:H7. The predictions of this system model are in agreement with the seasonality of outbreaks. Overall, this dissertation used growth, survival, and death models of enteric pathogens in leafy greens during production and through the supply chain.
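To illustrate the kind of growth modelling under dynamic temperature referred to above, the sketch below integrates a simple log-linear growth model whose rate follows a Ratkowsky-type square-root dependence on temperature over a time-temperature profile. The parameter values, profile and function names are placeholders, not those fitted in the dissertation.

    def ratkowsky_mu(temp_c, b=0.03, t_min=5.0):
        # Square-root (Ratkowsky) secondary model: sqrt(mu) = b*(T - Tmin);
        # mu in log10 CFU per hour, zero below the minimum growth temperature.
        return (b * max(temp_c - t_min, 0.0)) ** 2

    def predict_growth(times_h, temps_c, n0_log=2.0, n_max_log=8.0):
        # Integrate log10 counts over a dynamic temperature profile,
        # capped at a maximum population density.
        n = n0_log
        for i in range(1, len(times_h)):
            dt = times_h[i] - times_h[i - 1]
            mu = ratkowsky_mu(temps_c[i - 1])
            n = min(n + mu * dt, n_max_log)
        return n

    # Hypothetical profile: 4 h at 25 degC without temperature control, then 20 h at 7 degC.
    times = [0, 1, 2, 3, 4, 8, 12, 16, 20, 24]
    temps = [25, 25, 25, 25, 7, 7, 7, 7, 7, 7]
    print(predict_growth(times, temps))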

Relevance:

100.00%

Publisher:

Abstract:

Since the end of the Cold War, recurring civil conflicts have been the dominant form of violent armed conflict in the world, accounting for 70% of conflicts active between 2000 and 2013. The duration and intensity of episodes within recurring conflicts in Africa exhibit four behaviors characteristic of archetypal dynamic system structures. The overarching questions asked in this study are whether these patterns are robustly correlated with fundamental concepts of resiliency in dynamic systems that scale from micro to macro levels, whether they are consistent with theoretical risk factors and causal mechanisms, and what the policy implications are. Econometric analysis and dynamic systems modeling of 36 conflicts in Africa between 1989 and 2014 are combined with process tracing in a case study of Somalia to evaluate correlations between state characteristics, peace operations, and foreign aid and the likelihood of observed conflict patterns, to test hypothesized causal mechanisms across scales, and to develop policy recommendations for increasing human security while decreasing the resiliency of belligerents. The findings are that the observed conflict patterns scale from micro to macro levels; are strongly correlated with state characteristics that proxy a mix of cooperative (e.g., gender equality) and coercive (e.g., security forces) conflict-balancing mechanisms; and are weakly correlated with UN and regional peace operations and humanitarian aid. Interactions between peace operations and aid interventions that affect conflict persistence at micro levels are not seen in macro-level analysis, due to interdependent micro-level feedback mechanisms, sequencing, and lagged effects. This study finds that the dynamic system structures associated with observed conflict patterns contain tipping points between balancing mechanisms at the interface of micro-macro level interactions that are determined as much by how intervention policies are designed and implemented as by what they are. The policy implications are that reducing the risk of conflict persistence requires that peace operations and aid interventions (1) simultaneously increase transparency, promote inclusivity (with emphasis on gender equality), and empower local civilian involvement in accountability measures at the local level; (2) build bridges to integrate horizontally and vertically across levels; and (3) pave pathways towards conflict transformation mechanisms and justice that scale from the individual to the community, regional, and national levels.

Relevance:

100.00%

Publisher:

Abstract:

This thesis reports on an investigation of the feasibility and usefulness of incorporating dynamic management facilities for managing sensed context data in a distributed context-aware mobile application. The investigation focuses on reducing the work required to integrate new sensed context streams into an existing context-aware architecture. Current architectures require integration work for each new stream and each new context that is encountered. This mode of operation is acceptable for current fixed architectures. However, as systems become more mobile, the number of discoverable streams increases. Without the ability to discover and use these new streams, the functionality of any given device will be limited to the streams it knows how to decode. The integration of new streams requires that the sensed context data be understood by the current application. If a new source provides data of a type that an application currently requires, then the new source should be connected to the application without any prior knowledge of the new source. If the type is similar and can be converted, then this stream too should be appropriated by the application. Such applications are based on portable devices (phones, PDAs) running semi-autonomous services that use data from sensors connected to the devices, plus data exchanged with other such devices and remote servers. Such applications must handle input from a variety of sensors, refining the data locally and managing its communication from the device under volatile and unpredictable network conditions. The choice to focus on locally connected sensory input allows for the introduction of privacy and access controls; this local control can determine how the information is communicated to others. This investigation focuses on the evaluation of three approaches to sensor data management. The first system is characterised by static management based on prepended metadata. This was the reference system: developed for a mobile system, it processed data according to the attached metadata, and the code that performed the processing was static. The second system was developed to move away from static processing and introduce greater freedom in handling the data stream, which resulted in a heavyweight approach. This approach focused on pushing the processing of the data into a number of networked nodes rather than the monolithic design of the previous system. By creating a separate communication channel for the metadata, it is possible to be more flexible with the amount and type of data transmitted. The final system pulled the benefits of the other systems together: by providing a small management class that loads a separate handler based on the incoming data, dynamism was maximised while maintaining ease of code understanding. The three systems were then compared to highlight their ability to dynamically manage new sensed context. The evaluation took two approaches. The first is a quantitative analysis of the code to understand the complexity of the three systems; this was done by evaluating what changes to each system were required for a new context. The second takes a qualitative view of the work required by the software engineer to reconfigure the systems to support a new data stream. The evaluation highlights the scenarios in which each of the three systems is most suited. There is always a trade-off in the development of a system, and the three approaches highlight this fact.
The creation of a statically bound system can be quick, but the system may need to be completely rewritten if the requirements move too far. Alternatively, a highly dynamic system may be able to cope with new requirements, but the developer time needed to create such a system may be greater than that needed to create several simpler systems.
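The "small management class that loads a separate handler based on the incoming data" described for the third system could look roughly like the Python sketch below; the class, method and stream-type names are invented for illustration and do not correspond to the thesis implementation.

    class StreamManager:
        # Loads a handler for each incoming sensed-context stream based on its
        # declared type, so a new stream type is supported by registering a new
        # handler rather than rewriting the manager. Names are illustrative only.

        def __init__(self):
            self._handlers = {}

        def register(self, stream_type, handler_factory):
            self._handlers[stream_type] = handler_factory

        def dispatch(self, metadata, payload):
            factory = self._handlers.get(metadata.get("type"))
            if factory is None:
                raise ValueError("no handler registered for stream type %r"
                                 % metadata.get("type"))
            return factory().handle(payload)

    class GpsHandler:
        def handle(self, payload):
            # Refine raw sensor data locally before it is communicated onwards.
            return {"lat": payload[0], "lon": payload[1]}

    manager = StreamManager()
    manager.register("gps", GpsHandler)
    print(manager.dispatch({"type": "gps"}, (55.9, -3.2)))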

Relevance:

100.00%

Publisher:

Abstract:

This article evaluates the causal relationship between knowledge management and technological innovation capabilities, and the effect of this relationship on the operational results of the textile sector in the city of Medellín. The system dynamics methodology was used, with scenario simulation, to assess the current conditions of the sector's organisations in terms of accumulation of knowledge and capabilities. The information was obtained through interviews with experts and access to specialised information on the sector. The results show that an improvement in the relationship between knowledge management and technological innovation generates an increase of approximately 15% in the sector's operating revenues. Likewise, it was found that as the common variables of interest (organisational strategies, communication channels, training, culture, and R&D strengthening actions) approach their desired values, the accumulation of knowledge and of technological innovation capabilities reaches the target values.
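Numerically, a system dynamics model of this kind amounts to a set of stocks integrated over time. The fragment below sketches a two-stock Euler integration (a knowledge stock feeding an innovation-capability stock) purely to illustrate the technique; the parameters and variable names are made up and are unrelated to the Medellín study.

    # Minimal system-dynamics style simulation: two coupled stocks, Euler steps.
    def simulate(years=10, dt=0.25,
                 knowledge=1.0, capability=1.0,
                 acquisition=0.4, decay=0.05, conversion=0.3):
        steps = int(years / dt)
        for _ in range(steps):
            d_knowledge = acquisition - decay * knowledge
            d_capability = conversion * knowledge - decay * capability
            knowledge += d_knowledge * dt
            capability += d_capability * dt
        return knowledge, capability

    print(simulate())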

Relevance:

100.00%

Publisher:

Abstract:

The Solar Intensity X-ray and particle Spectrometer (SIXS) on board BepiColombo's Mercury Planetary Orbiter (MPO) will study solar energetic particles moving towards Mercury and solar X-rays on the dayside of Mercury. The SIXS instrument consists of two detector sub-systems: the X-ray detector SIXS-X and the particle detector SIXS-P. The SIXS-P sub-detector will detect solar energetic electrons and protons over a broad energy range using a particle telescope approach, with five outer Si detectors around a central CsI(Tl) scintillator. The measurements made by the SIXS instrument are also needed by other instruments on board the spacecraft. SIXS data will be used to study the solar X-ray corona, solar flares, solar energetic particles, the Hermean magnetosphere, and solar eruptions. The SIXS-P detector was calibrated by comparing experimental measurement data from the instrument with Geant4 simulation data. Calibration curves were produced for the different side detectors and for the core scintillator, for electrons and protons respectively. The side-detector energy response was found to be linear for both electrons and protons, whereas the core-scintillator energy response to protons was found to be non-linear. The core-scintillator calibration for electrons was omitted due to insufficient experimental data. The electron and proton acceptance of the SIXS-P detector was determined with Geant4 simulations. The electron and proton energy channels are clean in the main energy range of the instrument; at higher energies, protons and electrons produce a non-ideal response in the energy channels. Due to the limited bandwidth of the spacecraft's telemetry, the particle measurements made by SIXS-P have to be pre-processed in the data processing unit of the SIXS instrument. A lookup table for this pre-processing was created with Geant4 simulations, and the ability of the lookup table to provide spectral information from a simulated electron event was analysed. The lookup table produces clean electron and proton channels and is able to separate protons from electrons. However, based on a simulated solar energetic electron event, the incident electron spectrum cannot be determined from the channel particle counts with a standard analysis method.
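The on-board lookup table mentioned above maps energy deposits in the telescope to particle channels. The sketch below shows the generic dE-E telescope idea (thin Si detector versus CsI(Tl) core): the bin edges, thresholds and function names are invented placeholders, not the actual SIXS-P table derived from Geant4.

    import bisect

    # Hypothetical channel edges in MeV of total deposited energy (dE in Si + E in CsI).
    ELECTRON_EDGES = [0.1, 0.3, 1.0, 3.0]
    PROTON_EDGES   = [1.0, 2.0, 8.0, 30.0]

    def classify(de_si_mev, e_csi_mev, proton_band=0.5):
        # Very rough dE-E classification: protons lose much more energy in the
        # thin Si detector than electrons of the same total energy.
        total = de_si_mev + e_csi_mev
        if de_si_mev > proton_band:
            return "p", bisect.bisect_left(PROTON_EDGES, total)
        return "e", bisect.bisect_left(ELECTRON_EDGES, total)

    print(classify(0.05, 0.8))   # falls in an electron channel
    print(classify(2.0, 10.0))   # falls in a proton channel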

Relevance:

100.00%

Publisher:

Abstract:

Conventional Si complementary-metal-oxide-semiconductor (CMOS) scaling is fast approaching its limits. The extension of the logic device roadmap for future enhancements in transistor performance requires non-Si materials and new device architectures. III-V materials, due to their superior electron transport properties, are well poised to replace Si as the channel material beyond the 10 nm technology node to mitigate the performance loss of Si transistors from further reductions in supply voltage to minimise power dissipation in logic circuits. However, several key challenges, including a high-quality dielectric/III-V gate stack, a low-resistance source/drain (S/D) technology, heterointegration onto a Si platform and a viable III-V p-metal-oxide-semiconductor field-effect-transistor (MOSFET), need to be addressed before III-Vs can be employed in CMOS. This thesis specifically addressed the development and demonstration of planar III-V p-MOSFETs, to complement the n-MOSFET, thereby enabling an all III-V CMOS technology to be realised. This work explored the application of InGaAs and InGaSb material systems as the channel, in conjunction with Al2O3/metal gate stacks, for p-MOSFET development based on the buried-channel flatband device architecture. The body of work undertaken comprised material development, process module development and integration into a robust fabrication flow for the demonstration of p-channel devices. The parameter space in the design of the device layer structure, based around the III-V channel/barrier material options of Inx≥0.53Ga1-xAs/In0.52Al0.48As and Inx≥0.1Ga1-xSb/AlSb, was systematically examined to improve hole channel transport. A mobility of 433 cm2/Vs, the highest room-temperature hole mobility of any InGaAs quantum-well channel reported to date, was obtained for the In0.85Ga0.15As (2.1% strain) structure. S/D ohmic contacts were developed based on thermally annealed Au/Zn/Au metallisation and validated using transmission line model test structures. The effects of metallisation thickness, diffusion barriers and de-oxidation conditions were examined. Contacts to InGaSb-channel structures were found to be sensitive to de-oxidation conditions. A fabrication process, based on a lithographically-aligned double ohmic patterning approach, was realised for deep submicron gate-to-source/drain gap (Lside) scaling to minimise the access resistance, thereby mitigating the effects of parasitic S/D series resistance on transistor performance. The developed process yielded gaps as small as 20 nm. For high-k integration on GaSb, ex-situ ammonium sulphide ((NH4)2S) treatments, in the range 1%-22% for 10 min at 295 K, were systematically explored for improving the electrical properties of the Al2O3/GaSb interface. Electrical and physical characterisation indicated the 1% treatment to be most effective, with interface trap densities in the range of 4-10×10^12 cm^-2 eV^-1 in the lower half of the bandgap. An extended study, comprising additional immersion times at each sulphide concentration, was further undertaken to determine the surface roughness and the etching nature of the treatments on GaSb. A number of p-MOSFETs based on the III-V channels with the most promising hole transport, integrating the developed process modules, were successfully demonstrated in this work.
Although the non-inverted InGaAs-channel devices showed good current modulation and switch-off characteristics, several aspects of performance were non-ideal: depletion-mode operation, modest drive current (Id,sat=1.14mA/mm), double-peaked transconductance (gm=1.06mS/mm), high subthreshold swing (SS=301mV/dec) and high on-resistance (Ron=845kΩ.μm). Despite demonstrating substantial improvement in the on-state metrics of Id,sat (11×), gm (5.5×) and Ron (5.6×), inverted devices did not switch off. Scaling the gate-to-source/drain gap (Lside) from 1μm down to 70nm improved Id,sat (72.4mA/mm) by a factor of 3.6 and gm (25.8mS/mm) by a factor of 4.1 in inverted InGaAs-channel devices. Well-controlled current modulation and good saturation behaviour were observed for InGaSb-channel devices. In the on-state, In0.3Ga0.7Sb-channel (Id,sat=49.4mA/mm, gm=12.3mS/mm, Ron=31.7kΩ.μm) and In0.4Ga0.6Sb-channel (Id,sat=38mA/mm, gm=11.9mS/mm, Ron=73.5kΩ.μm) devices outperformed the InGaAs-channel devices. However, the devices could not be switched off. These findings indicate that III-V p-MOSFETs based on InGaSb rather than InGaAs channels are better suited as the p-channel option for post-Si CMOS.
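The transmission line model (TLM) validation of the S/D contacts mentioned above amounts to a straight-line fit of total resistance against pad spacing. The sketch below shows that standard extraction; the measurement values and names are hypothetical, not data from this thesis.

    import numpy as np

    def tlm_extract(gaps_um, resistances_ohm, width_um):
        # Standard TLM analysis: R_total = 2*Rc + Rsh*(L/W), so a linear fit of
        # R against gap length L gives the sheet resistance (slope*W) and the
        # contact resistance (half the intercept).
        slope, intercept = np.polyfit(gaps_um, resistances_ohm, 1)
        r_sheet = slope * width_um          # ohm per square
        r_contact = intercept / 2.0         # ohm per contact
        return r_sheet, r_contact

    # Hypothetical TLM data: pad gaps in um and measured two-terminal resistances in ohm.
    print(tlm_extract([5, 10, 20, 40], [120, 170, 270, 470], width_um=100))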

Relevance:

100.00%

Publisher:

Abstract:

This article presents a new identification method for non-minimum-phase systems based on the step response. The proposed approach provides an approximate second-order model while avoiding complex experimental designs. The method is a closed-form identification algorithm based on characteristic points of the step response of second-order non-minimum-phase systems. It is validated using different linear models, which exhibit an inverse response of between 3.5% and 38% of the steady-state response. Simulations show that satisfactory results can be obtained with the proposed identification procedure, with the identified parameters presenting lower mean relative errors than those obtained with Balaguer's method.
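For reference, a second-order non-minimum-phase model of the kind identified here can be written G(s) = K(1 - a·s)/((τ1·s + 1)(τ2·s + 1)), whose step response shows the inverse response exploited by the characteristic-point approach. The Python sketch below only simulates such a model with arbitrary parameters; it does not implement the proposed identification algorithm or Balaguer's method.

    import numpy as np
    from scipy import signal

    # G(s) = K*(1 - a*s) / ((tau1*s + 1)*(tau2*s + 1)) -> right-half-plane zero at 1/a.
    K, a, tau1, tau2 = 1.0, 2.0, 5.0, 1.0
    num = [-K * a, K]
    den = np.polymul([tau1, 1.0], [tau2, 1.0])
    t, y = signal.step(signal.TransferFunction(num, den), T=np.linspace(0, 40, 400))

    undershoot = abs(y.min())              # depth of the inverse response
    print(undershoot / y[-1] * 100, "% of the steady-state value")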

Relevance:

100.00%

Publisher:

Abstract:

The electricity sector is undergoing major changes at both the management and market levels. One of the key drivers accelerating this change is the ever-greater penetration of Distributed Energy Resources (DER), which are giving the user a more prominent role in the management of the electrical system. The complexity of the scenario expected in the near future requires grid equipment capable of interacting in a much more dynamic system than at present, where the connection interface must be endowed with the necessary intelligence and communication capability so that the whole system can be managed effectively as a whole. We are currently witnessing the transition from the traditional electrical system model to a new, active and intelligent system known as the Smart Grid. This thesis presents the study of an Intelligent Electronic Device (IED) aimed at providing solutions for the needs arising from the evolution of the electrical system, capable of being integrated into current and future grid equipment and thereby adding functionality and value to these systems. To frame the requirements of such IEDs, an extensive background study was carried out, starting with the historical evolution of these systems, the characteristics of the electrical interconnection they must control, and the various functions and solutions they must provide, ending with a review of the current state of the art. This background also includes a review of international and national standards, necessary to understand the different requirements these devices must meet. The specifications and considerations needed for the design are then presented, along with the multifunctional architecture. At this point, some original design approaches are proposed, related to the architecture of the IED and to how data should be synchronised depending on the nature of the events and the different functionalities. The development of the system continues with the design of its subsystems, where some novel algorithms are presented, such as an anti-islanding approach based on weighted multiple detection. Once the architecture and functions of the IED were designed, a prototype based on a hardware platform was developed. The necessary requirements are analysed, and the choice of a high-performance embedded platform including a processor and an FPGA is justified. The developed prototype was subjected to a Class A test protocol, according to the IEC 61000-4-30 and IEC 62586-2 standards, to verify parameter monitoring. Several tests are also presented in which the delays involved in the protection-related algorithms were estimated. Finally, a real test scenario is described, within the context of a project of the Plan Nacional de Investigación, in which this prototype was integrated into an inverter, providing it with the intelligence required for a future Smart Grid context.
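The "weighted multiple detection" anti-islanding idea mentioned above can be pictured as several passive criteria voting with weights. The sketch below is a generic illustration of such a scheme; the thresholds, weights and nominal 50 Hz values are invented for illustration and are not the algorithm developed in the thesis.

    def islanding_score(freq_hz, voltage_pu, rocof_hz_s,
                        weights=(0.4, 0.3, 0.3), threshold=0.5):
        # Combine several passive criteria (frequency window, voltage window,
        # rate of change of frequency) into one weighted score. Illustrative only.
        flags = (
            not (49.5 <= freq_hz <= 50.5),     # out-of-range frequency
            not (0.85 <= voltage_pu <= 1.10),  # out-of-range voltage
            abs(rocof_hz_s) > 1.0,             # excessive ROCOF
        )
        score = sum(w for w, f in zip(weights, flags) if f)
        return score >= threshold, score

    print(islanding_score(50.0, 1.0, 0.1))   # (False, 0.0): grid-connected
    print(islanding_score(50.8, 0.8, 1.5))   # (True, 1.0): likely islanded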

Relevance:

100.00%

Publisher:

Abstract:

In Part 1 of this thesis, we propose that biochemical cooperativity is a fundamentally non-ideal process. We show quantal effects underlying biochemical cooperativity and highlight apparent ergodic breaking at small volumes. The apparent ergodic breaking manifests itself in a divergence of deterministic and stochastic models. We further predict that this divergence of deterministic and stochastic results is a failure of the deterministic methods rather than an issue of stochastic simulations.

Ergodic breaking at small volumes may allow these molecular complexes to function as switches to a greater degree than has previously been shown. We propose that this ergodic breaking is a phenomenon that the synapse might exploit to differentiate Ca2+ signaling that would lead to either the strengthening or the weakening of a synapse. Techniques such as lattice-based statistics and rule-based modeling are tools that allow us to confront this non-ideality directly. A natural next step towards understanding the chemical physics that underlies these processes is to consider in silico methods, specifically atomistic simulation methods, that might augment our modeling efforts.
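One standard way to expose the divergence between deterministic and stochastic descriptions at small volumes is an exact (Gillespie) stochastic simulation of an elementary binding step, whose trajectories can then be compared with the deterministic rate equations. The sketch below is a generic example of that technique for a single reversible binding reaction, not the cooperative models studied in the thesis; the rate constants and counts are arbitrary.

    import math
    import random

    def gillespie_binding(a0, b0, c0, kf, kr, t_end):
        # Exact stochastic simulation of A + B <-> C in a small volume, tracking
        # molecule counts rather than concentrations. Illustrative only.
        t, a, b, c = 0.0, a0, b0, c0
        while t < t_end:
            rate_f = kf * a * b
            rate_r = kr * c
            total = rate_f + rate_r
            if total == 0:
                break
            t += -math.log(random.random()) / total      # exponential waiting time
            if random.random() < rate_f / total:
                a, b, c = a - 1, b - 1, c + 1             # binding event
            else:
                a, b, c = a + 1, b + 1, c - 1             # unbinding event
        return a, b, c

    random.seed(0)
    print(gillespie_binding(a0=10, b0=10, c0=0, kf=0.01, kr=0.1, t_end=100.0))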

In the second part of this thesis, we use evolutionary algorithms to optimize in silico methods that might be used to describe biochemical processes at the subcellular and molecular levels. While we have applied evolutionary algorithms to several methods, this thesis will focus on the optimization of charge equilibration methods. Accurate charges are essential to understanding the electrostatic interactions that are involved in ligand binding, as frequently discussed in the first part of this thesis.
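As a toy illustration of pairing an evolutionary algorithm with a charge equilibration scheme, the sketch below fits per-site electronegativity and hardness parameters of a heavily simplified equilibration model (no inter-site Coulomb terms) to a set of reference charges with a (1+1) evolution strategy. All parameter values, the reference charges and the simplifications are assumptions for illustration, not the methods optimized in the thesis.

    import random

    def qeq_charges(chi, hardness):
        # Simplified charge equilibration: minimise sum(chi_i*q_i + 0.5*J_i*q_i^2)
        # subject to total charge zero, giving q_i = (lam - chi_i)/J_i with lam
        # fixed by charge neutrality. Inter-site Coulomb terms are neglected.
        lam = sum(c / j for c, j in zip(chi, hardness)) / sum(1.0 / j for j in hardness)
        return [(lam - c) / j for c, j in zip(chi, hardness)]

    def error(params, reference):
        chi, hardness = params[: len(reference)], params[len(reference):]
        q = qeq_charges(chi, hardness)
        return sum((qi - ri) ** 2 for qi, ri in zip(q, reference))

    # Hypothetical reference charges (e.g. from a quantum-chemical calculation).
    reference = [0.4, -0.2, -0.2]
    random.seed(1)
    best = [random.uniform(1.0, 5.0) for _ in range(6)]
    # Simple (1+1) evolution strategy: mutate, keep the better parameter set.
    for _ in range(2000):
        trial = [max(0.1, p + random.gauss(0.0, 0.1)) for p in best]
        if error(trial, reference) < error(best, reference):
            best = trial
    print(error(best, reference))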

Relevance:

100.00%

Publisher:

Abstract:

Coprime and nested sampling are well-known deterministic sampling techniques that operate at rates significantly lower than the Nyquist rate, yet allow perfect reconstruction of the spectra of wide-sense stationary (WSS) signals. However, the theoretical guarantees for these samplers assume ideal conditions, such as synchronous sampling and the ability to compute statistical expectations perfectly. This thesis studies the performance of coprime and nested samplers in the spatial and temporal domains when these assumptions are violated. In the spatial domain, the robustness of these samplers is studied by considering arrays with perturbed sensor locations (with unknown perturbations). Simplified expressions for the Fisher information matrix of perturbed coprime and nested arrays are derived, which explicitly highlight the role of the co-array. It is shown that, even in the presence of perturbations, it is possible to resolve O(M^2) sources under appropriate conditions on the size of the grid. The assumption of small perturbations leads to a novel "bi-affine" model in terms of source powers and perturbations. The redundancies in the co-array are then exploited to eliminate the nuisance perturbation variable and reduce the bi-affine problem to a linear underdetermined (sparse) problem in the source powers. This thesis also studies the robustness of coprime sampling to a finite number of samples and to sampling jitter, by analyzing their effects on the quality of the estimated autocorrelation sequence. A variety of bounds on the error introduced by such non-ideal sampling schemes are computed by considering a statistical model for the perturbation. They indicate that coprime sampling leads to stable estimation of the autocorrelation sequence in the presence of small perturbations. Under appropriate assumptions on the distribution of the WSS signals, sharp bounds on the estimation error are established which indicate that the error decays exponentially with the number of samples. The theoretical claims are supported by extensive numerical experiments.
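A minimal illustration of temporal coprime sampling: two sub-Nyquist streams with coprime decimation factors M and N whose sample-time differences cover a dense set of lags, from which the autocorrelation is estimated by averaging products. This generic Python sketch assumes ideal, jitter-free sampling and arbitrary test data; it ignores the finite-sample and perturbation effects analysed in the thesis.

    import numpy as np

    def coprime_autocorr(x, M, N, max_lag):
        # Estimate r[k] of a WSS sequence x using only samples at multiples of M
        # and of N (coprime). Differences n1*M - n2*N hit every integer lag, so
        # the autocorrelation can be filled in despite sub-Nyquist sampling.
        idx_m = np.arange(0, len(x), M)
        idx_n = np.arange(0, len(x), N)
        sums = np.zeros(max_lag + 1)
        counts = np.zeros(max_lag + 1)
        for i in idx_m:
            for j in idx_n:
                lag = abs(int(i) - int(j))
                if lag <= max_lag:
                    sums[lag] += x[i] * x[j]
                    counts[lag] += 1
        return sums / np.maximum(counts, 1)

    rng = np.random.default_rng(0)
    # WSS test signal: white noise through a short moving-average filter.
    x = np.convolve(rng.standard_normal(3000), [1.0, 0.5, 0.25], mode="same")
    print(coprime_autocorr(x, M=3, N=5, max_lag=6))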

Relevance:

100.00%

Publisher:

Abstract:

Biofilms, microbial forms of association, are responsible for generating, accelerating and/or inducing corrosion processes. The damage caused in the petroleum industry by this type of corrosion is significant, requiring major investment for its control. The aim of this study was to evaluate, through antibiogram-type tests, the effects of extracts of Jatropha curcas and of the essential oil of Lippia gracilis Schauer on microorganisms isolated from water samples, and thereafter to select the most effective natural product for further evaluation of biofilms formed in a dynamic system. Extracts of J. curcas did not completely inhibit microbial growth in the antibiogram-type tests, and the essential oil of L. gracilis Schauer, the most effective product, was selected for the other tests. A standard amount of essential oil (20 μL) was chosen and fixed for the evaluation of the biofilms and of the corrosion rate. The biocidal effect was determined by microbial counts of five types of microorganisms: aerobic bacteria, iron-precipitating bacteria, total anaerobic bacteria, sulphate-reducing bacteria (BRS) and fungi. The corrosion rate was measured by mass loss. Molecular identification and scanning electron microscopy (SEM) were also performed. The data showed a reduction to zero of the most probable number (MPN) of iron-precipitating bacteria and BRS after 115 and 113 minutes of contact, respectively. Fungi were also inhibited, with the colony-forming unit (CFU) count reduced to zero after 74 minutes of exposure. However, for aerobic and anaerobic bacteria there was no significant effect of the exposure time to the essential oil, and counts remained constant. The corrosion rate was also influenced by the presence of the oil. The essential oil of L. gracilis was shown to be potentially effective.
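The corrosion rate by mass loss referred to above is conventionally computed with the standard weight-loss expression (as in, e.g., ASTM G1); the formula below is that general expression, not a value reported in the study:

    CR = \frac{K \,\Delta m}{A \, t \, \rho}

where Δm is the mass loss, A the exposed coupon area, t the exposure time, ρ the density of the metal, and K a constant that sets the units (e.g. mm/year).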

Relevance:

100.00%

Publisher:

Abstract:

The electrical and optical coupling between subcells in a multijunction solar cell affects its external quantum efficiency (EQE) measurement. In this study, we show how a low breakdown voltage of a component subcell impacts the EQE determination of a multijunction solar cell and demands the use of a finely adjusted external voltage bias. The optimum voltage bias for the EQE measurement of a Ge subcell in two different GaInP/GaInAs/Ge triple-junction solar cells is determined both by sweeping the external voltage bias and by tracing the I–V curve under the same light bias conditions applied during the EQE measurement. It is shown that the I–V curve gives rapid and valuable information about the appropriate light and voltage bias needed, and also helps to detect problems associated with non-ideal I–V curves that might affect the EQE measurement. The results also show that, if a non-optimum voltage bias is applied, a measurement artifact can result. Only when the problems associated with a non-ideal I–V curve and/or a low breakdown voltage have been ruled out can the measurement artifacts, if any, be attributed to other effects such as luminescent coupling between subcells.
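For reference, the quantity being measured is the standard ratio of collected carriers to incident photons at each wavelength; the expression below is that textbook definition, not a result of the paper:

    EQE(\lambda) = \frac{I_{ph}(\lambda)/q}{P_{opt}(\lambda)\,\lambda/(hc)}
                 = \frac{hc}{q\,\lambda}\,\frac{I_{ph}(\lambda)}{P_{opt}(\lambda)}

where I_ph is the photocurrent produced by the subcell under test, P_opt the incident monochromatic optical power, q the elementary charge, h Planck's constant and c the speed of light.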