932 results for Simulation and Modeling


Relevance:

90.00%

Publisher:

Abstract:

When the physical separation between the local and the remote system is relatively short, the delay is not noticeable; however, when the local and remote manipulators are far apart, the time delay is no longer negligible and negatively affects task performance. A time delay in a control system introduces a phase lag that degrades system performance and can cause instability. Teleoperation systems can exploit the possibility of being present in two places simultaneously; however, the use of the Internet and other packet-switched networks, such as Internet2, imposes variable time delays, forcing established control schemes to incorporate measures against the instabilities these variable delays cause.

This thesis presents the modeling and analysis of a nonlinear bilateral teleoperation system of n degrees of freedom controlled by state convergence. Communication between the local and remote sites takes place over a channel with time delay, and the analysis considers both constant and variable delays. The main objectives of this work are: 1) to develop a nonlinear control architecture that guarantees the stability of the teleoperated system; 2) to evaluate the stability of the system taking the communication delay into account; and 3) to implement the developed algorithms and test their performance on an experimental 3-degree-of-freedom system. Using Lyapunov stability theory and a Lyapunov–Krasovskii functional, the closed-loop system is shown to be asymptotically stable; these stability conclusions are obtained by integrating the Lyapunov function and applying Barbalat's lemma. It is also shown that the positions of the local and remote manipulators synchronize when the human operator applies no force to the local manipulator and the remote manipulator moves freely, without interacting with the environment.

The proposed control scheme has been validated both in simulation and experimentally, on a real teleoperation system developed in this doctoral thesis. The system consists of a three-degree-of-freedom planar serial manipulator and, as the local manipulator, a PHANTOM Omni, a haptic device with three force-feedback degrees of freedom that provides force feedback along the x, y, and z axes. The real-time control was designed using the QuaRC real-time control software from Quanser on the local side and Simulink Real-Time Windows Target™ on the remote side.

Finally, the scientific impact of this thesis is reflected in the published results: two articles in journals with impact factor, one article in a journal indexed in Systemics, Cybernetics and Informatics, seven conference papers, and an award at the 9th Conferencia Iberoamericana en Sistemas, Cibernética e Informática, 2010.
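The thesis's specific functional is not reproduced in the abstract; as a purely illustrative sketch, a Lyapunov–Krasovskii functional for a delayed bilateral teleoperator typically combines the two manipulators' kinetic energy with an integral term that stores the delayed velocity history (every symbol here, such as the inertia matrices $M_l$, $M_r$, the weight $R$, and the delay $T(t)$, is an assumption, not taken from the thesis):

```latex
% Illustrative form only; symbols are assumed, not the thesis's own.
V(t) = \tfrac{1}{2}\,\dot q_l^{\top} M_l(q_l)\,\dot q_l
     + \tfrac{1}{2}\,\dot q_r^{\top} M_r(q_r)\,\dot q_r
     + \int_{t-T(t)}^{t} \dot q^{\top}(\sigma)\, R\, \dot q(\sigma)\, d\sigma
```

If along closed-loop trajectories $\dot V(t) \le -W(t)$ with $W \ge 0$, integrating gives $\int_0^\infty W(t)\,dt \le V(0) < \infty$, and Barbalat's lemma then yields $W(t) \to 0$ provided $W$ is uniformly continuous; this is the step the abstract alludes to when it mentions integrating the Lyapunov function and applying Barbalat's lemma.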

Relevance:

90.00%

Publisher:

Abstract:

This paper presents an approach for the detection, localization and following of dynamic terrestrial objects using a mini-UAV. The development is intended to be used for surveillance of large infrastructures. The detection algorithm is based on finding several pre-defined characteristics of the target, such as color, shape and size. Once the target is detected, it is localized by inverting the pinhole camera model. The following task, in which the UAV tracks a Summit XL ground robot, was designed to keep the target inside the field of view of the camera and was implemented as a PID controller. The system has been tested both in simulation and with real robots, showing promising results.
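The pinhole-model inversion can be sketched in a few lines. This is a hedged illustration only, since the paper's exact geometry and calibration are not given: it assumes a camera looking straight down from a known altitude onto flat ground, with focal lengths and principal point in pixels (the numeric intrinsics below are invented example values).

```python
def pixel_to_ground(u, v, fx, fy, cx, cy, altitude):
    """Invert the pinhole camera model for a downward-looking camera.

    Assumes flat ground and a camera pointing straight down from a known
    altitude (metres), with focal lengths fx, fy and principal point
    (cx, cy) in pixels. Returns the target's ground-plane offset (x, y)
    in metres relative to the point directly beneath the camera.
    """
    # Back-project the pixel to a unit-depth ray in camera coordinates,
    # then scale the ray to the ground plane at depth = altitude.
    x = (u - cx) / fx * altitude
    y = (v - cy) / fy * altitude
    return x, y

# Assumed example intrinsics: 800 px focal length, 640x480 image.
target_xy = pixel_to_ground(480.0, 240.0, 800.0, 800.0, 320.0, 240.0, 10.0)
```

A target seen 160 pixels right of the principal point from 10 m altitude thus sits 2 m to the side of the point below the camera; a real system would additionally rotate this offset by the UAV's attitude and heading.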

Relevance:

90.00%

Publisher:

Abstract:

Protein folding occurs on a time scale ranging from milliseconds to minutes for a majority of proteins. Computer simulation of protein folding, from a random configuration to the native structure, is nontrivial owing to the large disparity between the simulation and folding time scales. As an effort to overcome this limitation, simple models with idealized protein subdomains, e.g., the diffusion–collision model of Karplus and Weaver, have gained some popularity. We present here new results for the folding of a four-helix bundle within the framework of the diffusion–collision model. Even with such simplifying assumptions, a direct application of standard Brownian dynamics methods would consume 10,000 processor-years on current supercomputers. We circumvent this difficulty by invoking a special Brownian dynamics simulation. The method features the calculation of the mean passage time of an event from the flux overpopulation method and the sampling of events that lead to productive collisions even if their probability is extremely small (because of large free-energy barriers that separate them from the higher probability events). Using these developments, we demonstrate that a coarse-grained model of the four-helix bundle can be simulated in several days on current supercomputers. Furthermore, such simulations yield folding times that are in the range of time scales observed in experiments.
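The paper's accelerated scheme (flux-overpopulation passage times plus rare-event sampling) builds on ordinary Brownian dynamics propagation. As a baseline, and not as the paper's method, a single overdamped Euler–Maruyama step and its free-diffusion sanity check look like this (units chosen so that the mobility factor is 1):

```python
import math
import random

def bd_step(x, force, D, dt, rng):
    """One Euler-Maruyama step of overdamped Brownian dynamics:
    x(t+dt) = x + beta*D*F(x)*dt + sqrt(2*D*dt)*N(0, 1), with beta = 1
    absorbed into the units for brevity.

    Baseline integrator only; the paper's special accelerated scheme is
    not reproduced here.
    """
    return x + D * force(x) * dt + math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)

def zero_force(x):
    return 0.0

# Free diffusion: the mean-square displacement should grow as 2*D*t.
rng = random.Random(0)
D, dt, nsteps, ntraj = 1.0, 1e-3, 1000, 500
msd = 0.0
for _ in range(ntraj):
    x = 0.0
    for _ in range(nsteps):
        x = bd_step(x, zero_force, D, dt, rng)
    msd += x * x
msd /= ntraj
```

The 10,000-processor-year estimate in the abstract reflects how many such microscopic steps a direct simulation of folding would need, which is exactly what the mean-passage-time machinery avoids.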

Relevance:

90.00%

Publisher:

Abstract:

The effect of atmospheric aerosols and regional haze from air pollution on the yields of rice and winter wheat grown in China is assessed. The assessment is based on estimates of aerosol optical depths over China, the effect of these optical depths on the solar irradiance reaching the earth’s surface, and the response of rice and winter wheat grown in Nanjing to the change in solar irradiance. Two sets of aerosol optical depths are presented: one based on a coupled, regional climate/air quality model simulation and the other inferred from solar radiation measurements made over a 12-year period at meteorological stations in China. The model-estimated optical depths are significantly smaller than those derived from observations, perhaps because of errors in one or both sets of optical depths or because the data from the meteorological stations has been affected by local pollution. Radiative transfer calculations using the smaller, model-estimated aerosol optical depths indicate that the so-called “direct effect” of regional haze results in an ≈5–30% reduction in the solar irradiance reaching some of China’s most productive agricultural regions. Crop-response model simulations suggest an ≈1:1 relationship between a percentage increase (decrease) in total surface solar irradiance and a percentage increase (decrease) in the yields of rice and wheat. Collectively, these calculations suggest that regional haze in China is currently depressing optimal yields of ≈70% of the crops grown in China by at least 5–30%. Reducing the severity of regional haze in China through air pollution control could potentially result in a significant increase in crop yields and help the nation meet its growing food demands in the coming decades.
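The stated ≈1:1 crop response makes the arithmetic behind the headline figure explicit; the identity mapping below simply encodes that reported ratio and is illustrative, not a crop model:

```python
def expected_yield_change_pct(irradiance_change_pct):
    """Apply the paper's reported ~1:1 crop response: a percentage change
    in total surface solar irradiance maps to roughly the same percentage
    change in rice and winter-wheat yield. Identity mapping, purely to
    make the stated ratio explicit."""
    return irradiance_change_pct

# The estimated 5-30% direct-effect irradiance reduction thus implies a
# roughly 5-30% depression of optimal yield in the affected regions.
loss_low = expected_yield_change_pct(-5.0)
loss_high = expected_yield_change_pct(-30.0)
```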

Relevance:

90.00%

Publisher:

Abstract:

The vibrational energy relaxation of carbon monoxide in the heme pocket of sperm whale myoglobin was studied by using molecular dynamics simulation and normal mode analysis methods. Molecular dynamics trajectories of solvated myoglobin were run at 300 K for both the δ- and ɛ-tautomers of the distal His-64. Vibrational population relaxation times of 335 ± 115 ps for the δ-tautomer and 640 ± 185 ps for the ɛ-tautomer were estimated by using the Landau–Teller model. Normal mode analysis was used to identify those protein residues that act as the primary “doorway” modes in the vibrational relaxation of the oscillator. Although the CO relaxation rates in both the ɛ- and δ-tautomers are similar in magnitude, the simulations predict that the vibrational relaxation of the CO is faster in the δ-tautomer with the distal His playing an important role in the energy relaxation mechanism. Time-resolved mid-IR absorbance measurements were performed on photolyzed carbonmonoxy hemoglobin (Hb13CO). From these measurements, a T1 time of 600 ± 150 ps was determined. The simulation and experimental estimates are compared and discussed.
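A T1 time fixes the single-exponential decay of the excited vibrational population; the snippet below just shows what the reported values imply for the surviving excited fraction (illustrative only, not the Landau–Teller estimation procedure itself):

```python
import math

def excited_population(t_ps, T1_ps):
    """Fraction of vibrationally excited oscillators remaining after t,
    assuming single-exponential T1 relaxation: n(t)/n(0) = exp(-t/T1).

    The paper estimates T1 from simulation via the Landau-Teller model
    and reports 600 +/- 150 ps for Hb13CO from time-resolved mid-IR
    measurements; that measured value is used below.
    """
    return math.exp(-t_ps / T1_ps)

# After one T1 (~600 ps for Hb13CO), about 37% of the excited CO remains.
remaining = excited_population(600.0, 600.0)
```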

Relevance:

90.00%

Publisher:

Abstract:

Dynamic importance weighting is proposed as a Monte Carlo method that has the capability to sample relevant parts of the configuration space even in the presence of many steep energy minima. The method relies on an additional dynamic variable (the importance weight) to help the system overcome steep barriers. A non-Metropolis theory is developed for the construction of such weighted samplers. Algorithms based on this method are designed for simulation and global optimization tasks arising from multimodal sampling, neural network training, and the traveling salesman problem. Numerical tests on these problems confirm the effectiveness of the method.
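The core idea, samples drawn from an easy distribution corrected by importance weights toward a rugged target, can be illustrated with plain self-normalized importance sampling. This is a deliberately simplified stand-in: the paper's dynamic weighting additionally evolves the weight along a Markov chain with non-Metropolis moves, which is not reproduced here, and the Gaussian target/proposal below are toy assumptions.

```python
import math
import random

def snis_mean(log_target, sample_proposal, log_proposal, n, rng):
    """Self-normalized importance sampling estimate of E_target[x].

    Log-densities may be unnormalized: the normalizing constants cancel
    in the ratio of the weighted sums.
    """
    num = den = 0.0
    for _ in range(n):
        x = sample_proposal(rng)
        w = math.exp(log_target(x) - log_proposal(x))
        num += w * x
        den += w
    return num / den

# Toy example: target N(1, 1) sampled through a broader N(0, 2) proposal.
# The weighted mean should approach the target mean of 1.
rng = random.Random(42)
est = snis_mean(lambda x: -0.5 * (x - 1.0) ** 2,
                lambda r: r.gauss(0.0, 2.0),
                lambda x: -0.5 * (x / 2.0) ** 2,
                20000, rng)
```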

Relevance:

90.00%

Publisher:

Abstract:

Experimental and modeling efforts suggest that rhythms in the CA1 region of the hippocampus that are in the beta range (12–29 Hz) have a different dynamical structure than that of gamma (30–70 Hz). We use a simplified model to show that the different rhythms employ different dynamical mechanisms to synchronize, based on different ionic currents. The beta frequency is able to synchronize over long conduction delays (corresponding to signals traveling a significant distance in the brain) that apparently cannot be tolerated by gamma rhythms. The synchronization properties are consistent with data suggesting that gamma rhythms are used for relatively local computations whereas beta rhythms are used for higher level interactions involving more distant structures.

Relevance:

90.00%

Publisher:

Abstract:

Photosynthetic organisms fuel their metabolism with light energy and have developed for this purpose an efficient apparatus for harvesting sunlight. The atomic structure of the apparatus, as it evolved in purple bacteria, has been constructed through a combination of x-ray crystallography, electron microscopy, and modeling. The detailed structure and overall architecture reveals a hierarchical aggregate of pigments that utilizes, as shown through femtosecond spectroscopy and quantum physics, elegant and efficient mechanisms for primary light absorption and transfer of electronic excitation toward the photosynthetic reaction center.

Relevance:

90.00%

Publisher:

Abstract:

High-quality software, delivered on time and budget, constitutes a critical part of most products and services in modern society. Our government has invested billions of dollars to develop software assets, often to redevelop the same capability many times. Recognizing the waste involved in redeveloping these assets, in 1992 the Department of Defense issued the Software Reuse Initiative. The vision of the Software Reuse Initiative was "To drive the DoD software community from its current "re-invent the software" cycle to a process-driven, domain-specific, architecture-centric, library-based way of constructing software.'' Twenty years after issuing this initiative, there is evidence of this vision beginning to be realized in nonembedded systems. However, virtually every large embedded system undertaken has incurred large cost and schedule overruns. Investigations into the root cause of these overruns implicates reuse. Why are we seeing improvements in the outcomes of these large scale nonembedded systems and worse outcomes in embedded systems? This question is the foundation for this research. The experiences of the Aerospace industry have led to a number of questions about reuse and how the industry is employing reuse in embedded systems. For example, does reuse in embedded systems yield the same outcomes as in nonembedded systems? Are the outcomes positive? If the outcomes are different, it may indicate that embedded systems should not use data from nonembedded systems for estimation. Are embedded systems using the same development approaches as nonembedded systems? Does the development approach make a difference? If embedded systems develop software differently from nonembedded systems, it may mean that the same processes do not apply to both types of systems. What about the reuse of different artifacts? Perhaps there are certain artifacts that, when reused, contribute more or are more difficult to use in embedded systems. 
Finally, what are the success factors and obstacles to reuse? Are they the same in embedded systems as in nonembedded systems? The research in this dissertation comprises a series of empirical studies using professionals in the aerospace and defense industry as its subjects. The main focus has been to investigate the reuse practices of embedded systems professionals and nonembedded systems professionals and compare the methods and artifacts used against the outcomes. The research has followed a combined qualitative and quantitative design approach. The qualitative data were collected by surveying software and systems engineers, interviewing senior developers, and reading numerous documents and other studies. Quantitative data were derived by converting survey and interview respondents' answers into coding that could be counted and measured. From the search of existing empirical literature, we learned that reuse in embedded systems is in fact significantly different from that in nonembedded systems, particularly in effort under a model-based development approach and in quality where the development approach was not specified. The questionnaire showed differences in the development approach used in embedded projects versus nonembedded projects; in particular, embedded systems were significantly more likely to use a heritage/legacy development approach. There was also a difference in the artifacts used, with embedded systems more likely to reuse hardware, test products, and test clusters. Nearly all the projects reported using code, but the questionnaire showed that the reuse of code brought mixed results. One of the differences expressed by the respondents to the questionnaire was the difficulty of reusing code in embedded systems when the platform changed. The semistructured interviews were performed to tell us why the phenomena in the review of literature and the questionnaire were observed.
We asked respected industry professionals, such as senior fellows, fellows, and distinguished members of technical staff, about their experiences with reuse. We learned that many embedded systems used heritage/legacy development approaches because their systems had been around for many years, before models and modeling tools became available. We learned that reuse of code is beneficial primarily when the code does not require modification; especially in embedded systems, once it has to be changed, reuse of code yields few benefits. Finally, while platform independence is a goal for many in nonembedded systems, it is certainly not a goal for embedded systems professionals, and in many cases it is a detriment. However, both embedded and nonembedded systems professionals endorsed the idea of platform standardization. Overall, we conclude that while reuse in embedded and nonembedded systems is different today, the two are converging. As heritage embedded systems are phased out, models become more robust, and platforms are standardized, reuse in embedded systems will become more like that in nonembedded systems.

Relevance:

90.00%

Publisher:

Abstract:

To increase the profit of chemical plants, Real-Time Optimization (RTO) is a tool that seeks to determine the optimal steady-state operating conditions of a process while respecting the established operational constraints. In this work, a practical implementation of an RTO cycle was carried out for a propylene–propane vapor-recompression distillation (VRD) process at the Paulínia Refinery (Petrobras S.A.), using historical plant data. The main stages of a classical RTO cycle were considered: steady-state identification, data reconciliation, parameter estimation, and economic optimization. The unit was modeled, simulated, and optimized in EMSO (Environment for Modeling, Simulation and Optimization), an equation-oriented process simulator developed in Brazil. Two steady-state identification methods were analyzed and compared, one based on the statistical F-test and the other on wavelets. Both methods produced similar results and proved capable of identifying steady states satisfactorily, although their implementation requires parameter tuning. Several steady-state points were identified for submission to the RTO cycle, and it was possible to verify the importance of starting from a steady state for the continuity of the cycle, since this is a premise of the method. For the points analyzed, the results of this study show that RTO can increase the economic gain by 2.5–24%, depending on the initial conditions considered, which can represent gains of up to 18 million dollars per year. In addition, for this unit, the compressor was found to be the limiting equipment for increasing the economic gain of the process.
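The steady-state identification stage can be sketched with a variance-ratio statistic. This is a hedged illustration of an F-test-style check, not the thesis's exact statistic (which the abstract does not give): the variance estimated from successive differences is insensitive to a drifting mean, so for a steady white-noise signal the ratio is near 1, while a trend inflates the mean-based variance and drives the ratio toward 0.

```python
import random
import statistics

def steady_state_ratio(window):
    """Variance-ratio steady-state statistic for one data window.

    Compares the variance estimated from successive differences (robust
    to a drifting mean) with the variance about the window mean. Values
    near 1 suggest steady state; values near 0 suggest a trend.
    """
    mean_var = statistics.pvariance(window)
    diff_var = statistics.pvariance(
        [b - a for a, b in zip(window, window[1:])]) / 2.0
    return diff_var / mean_var

# A flat noisy window scores near 1; a ramp scores near 0.
rng = random.Random(1)
r_flat = steady_state_ratio([5.0 + rng.gauss(0.0, 0.1) for _ in range(200)])
r_ramp = steady_state_ratio([0.05 * i + rng.gauss(0.0, 0.1)
                             for i in range(200)])
```

In practice the ratio is compared against an F-distribution critical value to decide, at a chosen significance level, whether the window may enter the RTO cycle.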

Relevance:

90.00%

Publisher:

Abstract:

Phase equilibrium data regression is an unavoidable task necessary to obtain the appropriate values for any model to be used in separation equipment design for chemical process simulation and optimization. The accuracy of this process depends on different factors such as the experimental data quality, the selected model and the calculation algorithm. The present paper summarizes the results and conclusions achieved in our research on the capabilities and limitations of the existing GE models and about strategies that can be included in the correlation algorithms to improve the convergence and avoid inconsistencies. The NRTL model has been selected as a representative local composition model. New capabilities of this model, but also several relevant limitations, have been identified and some examples of the application of a modified NRTL equation have been discussed. Furthermore, a regression algorithm has been developed that allows for the advisable simultaneous regression of all the condensed phase equilibrium regions that are present in ternary systems at constant T and P. It includes specific strategies designed to avoid some of the pitfalls frequently found in commercial regression tools for phase equilibrium calculations. Most of the proposed strategies are based on the geometrical interpretation of the lowest common tangent plane equilibrium criterion, which allows an unambiguous comprehension of the behavior of the mixtures. The paper aims to show all the work as a whole in order to reveal the necessary efforts that must be devoted to overcome the difficulties that still exist in the phase equilibrium data regression problem.
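The binary form of the NRTL model the paper analyses is standard and compact enough to sketch; the parameter values used below are illustrative placeholders, not regressed values, and the modified-NRTL variants the paper discusses are not reproduced:

```python
import math

def nrtl_binary(x1, tau12, tau21, alpha):
    """Activity coefficients (gamma1, gamma2) from the binary NRTL model.

    Standard textbook form of the local-composition model, with
    G_ij = exp(-alpha * tau_ij) and nonrandomness parameter alpha.
    """
    x2 = 1.0 - x1
    G12 = math.exp(-alpha * tau12)
    G21 = math.exp(-alpha * tau21)
    ln_g1 = x2**2 * (tau21 * (G21 / (x1 + x2 * G21))**2
                     + tau12 * G12 / (x2 + x1 * G12)**2)
    ln_g2 = x1**2 * (tau12 * (G12 / (x2 + x1 * G12))**2
                     + tau21 * G21 / (x1 + x2 * G21)**2)
    return math.exp(ln_g1), math.exp(ln_g2)

# Illustrative (not regressed) parameters for a hypothetical binary pair.
g1, g2 = nrtl_binary(0.3, 1.0, 0.5, 0.3)
```

Useful consistency checks follow from the limits: each activity coefficient goes to 1 for the pure component, and at infinite dilution ln γ1∞ = τ21 + τ12·exp(−α·τ12).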

Relevance:

90.00%

Publisher:

Abstract:

Three sets of laboratory column experimental results concerning the hydrogeochemistry of seawater intrusion have been modelled using two codes: ACUAINTRUSION (Chemical Engineering Department, University of Alicante) and PHREEQC (U.S.G.S.). These reactive models utilise the hydrodynamic parameters determined using the ACUAINTRUSION TRANSPORT software and fit the chloride breakthrough curves perfectly. The ACUAINTRUSION code was improved, and its instabilities were studied relative to the discretisation. The relative square errors were obtained using different combinations of the spatial and temporal steps: the global error for the total experimental data and the partial error for each element. Good simulations for the three experiments were obtained using the ACUAINTRUSION software with slight variations in the selectivity coefficients for both sediments determined in batch experiments with fresh water. The cation exchange parameters included in ACUAINTRUSION follow the Gapon convention with modified exponents for the Ca/Mg exchange. PHREEQC simulations performed using the Gaines–Thomas convention were unsatisfactory when the exchange coefficients were taken from the PHREEQC database (or range), and those determined with fresh water and natural sediment allowed only an approximation to be obtained. For the treated sediment, adjusted exchange coefficients were determined to improve the simulation; these are vastly different from the PHREEQC database and batch-experiment values, but they are of a similar order to the others determined under dynamic conditions. The two software packages simulated different cation concentrations; this disparity can be attributed to the defined selectivity coefficients, which affect the gypsum equilibrium. Consequently, each package yields different calculated sulphate concentrations, with ACUAINTRUSION predicting the smaller mismatch.
In general, the presented simulations by ACUAINTRUSION and PHREEQC produced similar results, making predictions consistent with the experimental data. However, the simulated results are not identical to the experimental data; sulphate (total S) is overpredicted by both models, most likely due to such factors as the kinetics of gypsum, the possible variations in the exchange coefficients due to salinity and the neglect of other processes.

Relevance:

90.00%

Publisher:

Abstract:

N.B.: reproduced with permission of Peter Lang Verlag. For citation, please use the original reference, that is: Campos Pardillos, M.A. and Balteiro Fernández, I. 2009. “Building bridges… and properties aplenty: cultural problems in Spanish real estate marketing for prospective British buyers”. In: Guillén-Nieto, V., C. Marimón-Llorca and C. Vargas-Sierra. Eds. Intercultural Business Communication and Simulation and Gaming Methodology. Bern: Peter Lang. Pp. 155-174.

Relevance:

90.00%

Publisher:

Abstract:

A MATLAB-based computer code has been developed for the simultaneous wavelet analysis and filtering of several environmental time series, particularly focused on the analysis of cave monitoring data. The continuous wavelet transform, the discrete wavelet transform and the discrete wavelet packet transform have been implemented to provide a fast and precise time–period examination of the time series at different period bands. Moreover, statistical methods to examine the relation between two signals have been included. Finally, entropy-of-curves and spline-based methods have also been developed for segmenting and modeling the analyzed time series. Together, these methods provide a user-friendly and fast program for environmental signal analysis, with useful, practical and understandable results.
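As a self-contained stand-in for the discrete-wavelet stage (the actual tool is MATLAB-based and also implements the continuous and wavelet-packet transforms, none of which is reproduced here), one level of the Haar DWT splits an even-length series into an approximation band and a detail band, with exact reconstruction:

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.

    Splits an even-length series into a smooth approximation band and a
    detail (fluctuation) band, each half the original length.
    """
    s = math.sqrt(2.0)
    pairs = list(zip(signal[0::2], signal[1::2]))
    approx = [(a + b) / s for a, b in pairs]
    detail = [(a - b) / s for a, b in pairs]
    return approx, detail

def haar_idwt(approx, detail):
    """Exact inverse of haar_dwt (perfect reconstruction)."""
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / s, (a - d) / s]
    return out
```

Filtering a period band then amounts to zeroing the corresponding detail coefficients before inverting, which is the basic mechanism behind the code's band-by-band examination.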

Relevance:

90.00%

Publisher:

Abstract:

Chlorides induce local corrosion in steel reinforcements when they reach the bar surface. The rate of ingress of these ions is measured by mathematically fitting the so-called “error function equation” to the chloride concentration profile, thus obtaining the diffusion coefficient and the chloride concentration at the concrete surface. However, chloride profiles do not always follow Fick’s law with the maximum concentration at the concrete surface; often the profile shows a maximum concentration further in the interior, which indicates a different composition and performance of the outermost concrete layer with respect to the internal zones. The paper presents a procedure prepared during the work of RILEM TC 178-TMC, “Testing and modeling chloride penetration in concrete”, which suggests neglecting the external layer where the chloride concentration increases, taking the maximum as an “apparent” surface concentration, called Cmax, and fitting the error function equation to the decreasing concentration profile towards the interior. The prediction of the profile's evolution should also be made from this maximum.
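The fitting step can be sketched directly from Fick's second law, whose error-function solution is the equation the procedure fits. The grid-search fit and the units below are illustrative assumptions, not the RILEM procedure's prescribed numerics; per the procedure, the profile passed in should start at the depth of maximum concentration (taken as the apparent surface concentration Cmax), with the shallower points discarded.

```python
import math

def chloride_profile(x_mm, Cs, D, t_s):
    """Error-function solution of Fick's second law for chloride ingress:
    C(x, t) = Cs * (1 - erf(x / (2*sqrt(D*t)))).

    Illustrative units: depth in mm, D in mm^2/s, t in seconds, Cs the
    (apparent) surface concentration.
    """
    return Cs * (1.0 - math.erf(x_mm / (2.0 * math.sqrt(D * t_s))))

def fit_D(depths, concs, Cs, t_s, D_grid):
    """Least-squares grid search for the apparent diffusion coefficient."""
    def sse(D):
        return sum((chloride_profile(x, Cs, D, t_s) - c) ** 2
                   for x, c in zip(depths, concs))
    return min(D_grid, key=sse)

# Synthetic check: a noiseless profile generated with D = 2e-6 mm^2/s
# after roughly one year of exposure should be fitted back exactly.
t_year = 3.15e7  # ~ one year, in seconds
depths = [0.0, 5.0, 10.0, 15.0, 20.0, 30.0, 40.0]  # mm
concs = [chloride_profile(x, 0.6, 2e-6, t_year) for x in depths]
D_fit = fit_D(depths, concs, 0.6, t_year, [1e-6, 2e-6, 3e-6, 4e-6])
```

Production tools use nonlinear least squares over both D and Cs rather than a grid, but the objective function is the same.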