97 results for 3D Modeling
Abstract:
The disintegration of recovered paper is the first operation in the preparation of recycled pulp. It is known that the defibering process follows first-order kinetics, from which the disintegration kinetic constant (KD) can be obtained in different ways. The disintegration constant can be obtained from the Somerville index results (%ISv) and from the dissipated energy per volume unit (Ss). The %ISv is related to the quantity of non-defibered paper, as a measure of the residual non-disintegrated fiber (percentage of flakes), and is expressed in disintegration time units. In this work, the disintegration kinetics of recycled coated paper has been evaluated, working at a rotor speed of 20 rev/s and at different fiber consistencies (6, 8, 10, 12 and 14%). The experimental disintegration kinetic constant, KD, was obtained through the analysis of the Somerville index as a function of time; as consistency increased, the disintegration time was drastically reduced. The disintegration kinetic constant calculated from Rayleigh's dissipation function (modelled KD) showed a good correlation with the experimental values obtained from either the evolution of the Somerville index or the dissipated energy.
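The first-order kinetics mentioned above can be written as S(t) = S0·exp(−KD·t), where S is the residual flake content measured by the Somerville index. A minimal sketch of how KD could be estimated from two measurements (the data values here are hypothetical, not taken from the study):

```python
import math

def kd_from_somerville(s0, s_t, t):
    """Estimate the disintegration kinetic constant KD assuming
    first-order kinetics S(t) = S0 * exp(-KD * t).
    s0, s_t: Somerville index (% flakes) at time 0 and at time t (min)."""
    return math.log(s0 / s_t) / t

def somerville_at(s0, kd, t):
    """Predicted residual flake content after t minutes."""
    return s0 * math.exp(-kd * t)

# Hypothetical data: flakes drop from 40% to 5% in 10 minutes.
kd = kd_from_somerville(40.0, 5.0, 10.0)
print(round(kd, 3))                              # ~0.208 per minute
print(round(somerville_at(40.0, kd, 20.0), 3))   # flake content at 20 min
```

Higher consistency would show up as a larger fitted KD, i.e. a shorter time to reach a given flake content.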
Abstract:
The material presented in these notes covers the sessions Modelling of electromechanical systems, Passive control theory I and Passive control theory II of the II EURON/GEOPLEX Summer School on Modelling and Control of Complex Dynamical Systems. We start with a general description of what an electromechanical system is from a network modelling point of view. Next, a general formulation in terms of PHDS is introduced, and some of the previous electromechanical systems are rewritten in this formalism. Power converters, which are variable structure systems (VSS), can also be given a PHDS form. We conclude the modelling part of these lectures with a rather complex example showing the interconnection of subsystems from several domains, namely an arrangement to temporarily store the surplus energy in a section of a metropolitan transportation system based on DC motor vehicles, using either arrays of supercapacitors or an electrically powered flywheel. The second part of the lectures addresses control of PHD systems. We first present the idea of control as power interconnection of a plant and a controller, which runs into the so-called dissipation obstacle. Next we discuss how to circumvent this obstacle and present the basic ideas of Interconnection and Damping Assignment (IDA) passivity-based control of PHD systems.
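For reference, the explicit input-state-output port-Hamiltonian (PHDS) form referred to throughout these notes is

```latex
\dot{x} = \bigl[\, J(x) - R(x) \,\bigr] \frac{\partial H}{\partial x}(x) + g(x)\, u,
\qquad
y = g^{\top}(x)\, \frac{\partial H}{\partial x}(x),
```

where \(H(x)\) is the total stored energy, \(J(x) = -J^{\top}(x)\) encodes the power-preserving interconnection structure, \(R(x) = R^{\top}(x) \succeq 0\) the dissipation, and \((u, y)\) are the power-conjugated port variables, so that \(\dot{H} \le u^{\top} y\).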
Abstract:
Based on previous work (Hemelrijk 1998; Puga-González, Hildenbrant & Hemelrijk 2009), we have developed an agent-based model and software, called A-KinGDom, which allows us to simulate the emergence of social structure in a group of non-human primates. The model includes dominance and affiliative interactions and incorporates two main innovations (preliminary dominance interactions and a kinship factor), which allow us to define four different attack and affiliative strategies. In accordance with these strategies, we compared the data obtained under four simulation conditions with the results obtained in a previous study (Dolado & Beltran 2012) involving empirical observations of a captive group of mangabeys (Cercocebus torquatus).
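The self-reinforcing dominance interactions of DomWorld-style models (Hemelrijk 1998) can be sketched as below. This is a generic illustration of the mechanism, not the actual A-KinGDom code; the step size and starting values are hypothetical:

```python
import random

def contest(dom, i, j, step=0.8, rng=random):
    """One dominance interaction between agents i and j, in the style of
    DomWorld-like models: the winning probability follows the relative
    dominance values, and updates are self-reinforcing -- an unexpected
    win or loss changes dominance more than an expected one."""
    w_i = dom[i] / (dom[i] + dom[j])       # expected chance that i wins
    outcome = 1.0 if rng.random() < w_i else 0.0
    delta = step * (outcome - w_i)
    dom[i] += delta                         # winner rises, loser falls
    dom[j] -= delta
    dom[i] = max(dom[i], 0.01)              # keep dominance values positive
    dom[j] = max(dom[j], 0.01)
    return outcome

rng = random.Random(42)
dom = [16.0, 16.0, 16.0, 16.0]              # four agents, equal start
for _ in range(200):
    i, j = rng.sample(range(4), 2)
    contest(dom, i, j, rng=rng)
print(sorted(round(d, 1) for d in dom))     # a hierarchy has emerged
```

The kinship factor and the preliminary dominance interactions described in the abstract would modulate which pairs meet and how the outcome probability is computed.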
Abstract:
High-energy charged particles in the Van Allen radiation belts and in solar energetic particle events can damage satellites on orbit, leading to malfunctions and loss of satellite service. Here we describe some recent results from the SPACECAST project on modelling and forecasting the radiation belts, and on modelling solar energetic particle events. We describe the SPACECAST forecasting system, which uses physical models that include wave-particle interactions to forecast the electron radiation belts up to 3 h ahead. We show that the forecasts were able to reproduce the >2 MeV electron flux at GOES 13 during the moderate storm of 7-8 October 2012, and during the period following a fast solar wind stream on 25-26 October 2012, to within a factor of about 5. At lower energies, from 10 keV to a few hundred keV, we show that the electron flux at geostationary orbit depends sensitively on the high-energy tail of the source distribution near 10 RE on the nightside of the Earth, and that the source is best represented by a kappa distribution. We present a new model of whistler mode chorus determined from multiple satellite measurements, which shows that the effects of wave-particle interactions beyond geostationary orbit are likely to be very significant. We also present radial diffusion coefficients calculated from satellite data at geostationary orbit, which vary with Kp by over four orders of magnitude. We describe a new automated method, taking entropy into account, to determine the position on the shock that is magnetically connected to the Earth for modelling solar energetic particle events, and we predict from analytical theory the form of the mean free path in the foreshock and the particle injection efficiency at the shock, both of which can be tested in simulations.
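The difference between a kappa and a Maxwellian source population is easy to see numerically: the kappa distribution has a power-law tail, so it keeps far more high-energy electrons. A sketch with hypothetical parameters (characteristic energy E0 = 10 keV, kappa = 4):

```python
import math

def kappa_tail(E, E0, kappa):
    """Unnormalised kappa-distribution energy dependence: power-law tail,
    approaching E**(-(kappa + 1)) at high energy."""
    return (1.0 + E / (kappa * E0)) ** (-(kappa + 1.0))

def maxwell_tail(E, E0):
    """Unnormalised Maxwellian energy dependence: exponential tail."""
    return math.exp(-E / E0)

# At high energy the kappa tail dominates by many orders of magnitude,
# which is why the source-tail representation matters for the flux.
for E in (10.0, 100.0, 500.0):
    ratio = kappa_tail(E, 10.0, 4.0) / maxwell_tail(E, 10.0)
    print(f"E = {E:5.0f} keV  kappa/Maxwellian = {ratio:.3g}")
```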
Abstract:
In this paper we propose a method for computing JPEG quantization matrices for a given mean square error (MSE) or PSNR. We then employ our method to compute JPEG standard progressive operation mode definition scripts using a quantization approach. It is therefore no longer necessary to use a trial-and-error procedure to obtain a desired PSNR and/or definition script, which reduces cost. First, we establish a relationship between a Laplacian source and its uniform quantization error. We apply this model to the coefficients obtained in the discrete cosine transform stage of the JPEG standard. An image may then be compressed using the JPEG standard under a global MSE (or PSNR) constraint and a set of local constraints determined by the JPEG standard and visual criteria. Second, we study the JPEG standard progressive operation mode from a quantization-based approach. A relationship between the measured image quality at a given stage of the coding process and a quantization matrix is found. Thus, the definition-script construction problem can be reduced to a quantization problem. Simulations show that our method generates better quantization matrices than the classical method based on scaling the JPEG default quantization matrix. The PSNR estimate usually has an error smaller than 1 dB, and this error decreases for high PSNR values. Definition scripts may be generated while avoiding an excessive number of stages and removing small stages that do not contribute a noticeable image-quality improvement during decoding.
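The global PSNR/MSE bookkeeping follows directly from the definitions; a small sketch (the q²/12 step-size relation is the classical high-rate uniform-quantizer approximation, which the paper's Laplacian-source model refines):

```python
import math

def psnr_from_mse(mse, peak=255.0):
    """PSNR in dB for 8-bit images: PSNR = 10 * log10(peak^2 / MSE)."""
    return 10.0 * math.log10(peak * peak / mse)

def mse_from_psnr(psnr, peak=255.0):
    """Invert the relation: target MSE for a requested PSNR."""
    return peak * peak / (10.0 ** (psnr / 10.0))

def quant_step_for_mse(mse):
    """High-rate uniform-quantizer approximation MSE ~ q^2 / 12, so
    q ~ sqrt(12 * MSE). Only the classical approximation, not the
    paper's Laplacian refinement."""
    return math.sqrt(12.0 * mse)

target_mse = mse_from_psnr(38.0)            # MSE needed for 38 dB
print(round(target_mse, 2))
print(round(psnr_from_mse(target_mse), 6))  # round-trips to 38.0
```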
Abstract:
In recent years the Institut Català d'Arqueologia Clàssica and the Museu d'Història de Tarragona, with the collaboration of the Generalitat de Catalunya, have carried out the project Planimetría Arqueológica de Tárraco, aimed at producing a global archaeological plan of the city gathering the interventions and reports concerning the known archaeological finds. This work was published using a GIS built for that purpose as the working platform (Macias et al. 2007). However, a problem with no easy archaeological solution arose from the urban transformations of the city, most of which took place over the 19th and 20th centuries. These had caused the irretrievable loss of much of the hill on which the Roman city stood, substantially changing its original appearance. Faced with this situation, and as a project parallel to the Planimetría Arqueológica de Tárraco, ways of filling this gap were considered. This paper presents a methodological proposal for the reconstruction of the large "topographic voids" created by the urban evolution of Tarragona, through the acquisition of various types of documentary information and their integration into a GIS. In these lowered areas it is not possible to obtain stratigraphic or archaeological information, so it is essential to define alternative methodological approaches based on the extrapolation of data extracted from historical cartography, 16th-century panoramic views, and photographs taken in the 19th and 20th centuries. This technique allows the results to be applied in new interpretative analyses, thus complementing the archaeological interpretation of the urban topography of the Roman city. From this information, and applying the interpolation functions and techniques of a GIS, a relief model of the city of Tarraco is proposed here.
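The kind of GIS interpolation such a proposal relies on can be illustrated with inverse-distance weighting, one of the standard gridding techniques (a sketch only; the project could equally use kriging or splines, and the control points below are hypothetical):

```python
def idw(points, x, y, power=2.0):
    """Inverse-distance-weighted interpolation of scattered elevation
    data. points: iterable of (px, py, elevation) control points, e.g.
    spot heights digitised from historical maps and photographs."""
    num = den = 0.0
    for px, py, z in points:
        d2 = (x - px) ** 2 + (y - py) ** 2
        if d2 == 0.0:
            return z                      # exact hit on a data point
        w = 1.0 / d2 ** (power / 2.0)
        num += w * z
        den += w
    return num / den

# Hypothetical control points (coordinates in metres, elevation in m a.s.l.)
pts = [(0, 0, 60.0), (100, 0, 55.0), (0, 100, 70.0), (100, 100, 65.0)]
print(round(idw(pts, 50, 50), 2))   # centre of the square: mean, 62.5
```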
Abstract:
Panel data can be arranged into a matrix in two ways, called "long" and "wide" formats (LF and WF). The two formats suggest two alternative model approaches for analyzing panel data: (i) univariate regression with varying intercept; and (ii) multivariate regression with latent variables (a particular case of structural equation model, SEM). The present paper compares the two approaches, showing in which circumstances they yield equivalent (in some cases, even numerically equal) results. We show that the univariate approach gives results equivalent to the multivariate approach when restrictions of time invariance (in the paper, the TI assumption) are imposed on the parameters of the multivariate model. It is shown that the restrictions implicit in the univariate approach can be assessed by chi-square difference testing of two nested multivariate models. In addition, common tests encountered in the econometric analysis of panel data, such as the Hausman test, are shown to have an equivalent representation as chi-square difference tests. Commonalities and differences between the univariate and multivariate approaches are illustrated using an empirical panel data set of firms' profitability as well as simulated panel data.
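The long/wide distinction can be made concrete with a small reshape; the firm names and values below are hypothetical:

```python
# Long format: one row per (unit, time) observation -- the layout behind
# the univariate varying-intercept regression.
long_rows = [
    ("firm_A", 2010, 5.1), ("firm_A", 2011, 5.6),
    ("firm_B", 2010, 3.2), ("firm_B", 2011, 3.9),
]

def long_to_wide(rows):
    """Pivot long-format panel data into wide format: one row per unit,
    one column per time period -- the layout behind the multivariate
    (SEM) approach, where each period is a separate variable."""
    wide = {}
    for unit, t, value in rows:
        wide.setdefault(unit, {})[t] = value
    return wide

wide = long_to_wide(long_rows)
print(wide["firm_A"])   # {2010: 5.1, 2011: 5.6}
```

In practice a library reshape (e.g. a pivot/melt pair) does the same thing; the point is that the two matrix layouts carry identical information, which is why the two model approaches can be made equivalent under time-invariance restrictions.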
Abstract:
The widespread implementation of GIS-based 3D topographical models has been a great aid in the development and testing of archaeological hypotheses. In this paper, a topographical reconstruction of the ancient city of Tarraco, the Roman capital of the Tarraconensis province, is presented. The model is based on topographical data obtained through archaeological excavations, old photographic documentation, georeferenced archive maps depicting the pre-modern city topography, modern detailed topographical maps and differential GPS measurements. The addition of the Roman urban architectural features to the model makes it possible to test hypotheses concerning the ideological background manifested in the city's shape, mainly through the use of 3D views from the main city accesses. These techniques ultimately demonstrate the "theatre-shaped" layout of the city (to quote Vitruvius) as well as its southwest-oriented architecture, whose monumental character was conceived to present a striking aspect to visitors, particularly those arriving from the sea.
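The 3D views used to test such hypotheses rest on line-of-sight computations over the terrain model. A minimal sketch with a hypothetical elevation profile (real viewshed tools do this for every cell of the DEM):

```python
def visible(profile, observer_h=1.7):
    """Simple line-of-sight test along a terrain profile.
    profile: list of (distance, ground_elevation) pairs from the
    observer to the target, ordered by distance; observer_h is the
    eye height above the ground at the first point."""
    x0, z0 = profile[0][0], profile[0][1] + observer_h
    xt, zt = profile[-1]
    for x, z in profile[1:-1]:
        # elevation of the straight sight line at distance x
        line_z = z0 + (zt - z0) * (x - x0) / (xt - x0)
        if z > line_z:
            return False                   # terrain blocks the view
    return True

# Hypothetical profiles: a 40 m ridge hides a target at 20 m; a low
# rise at 12 m does not.
print(visible([(0, 10), (500, 40), (1000, 20)]))   # False
print(visible([(0, 10), (500, 12), (1000, 20)]))   # True
```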
Abstract:
Forecasting coal resources and reserves is critical for coal mine development. Thickness maps are commonly used for assessing coal resources and reserves; however, they are limited in their ability to capture coal-splitting effects in thick and heterogeneous coal zones. As an alternative, three-dimensional geostatistical methods are used to populate the facies distribution within a densely drilled heterogeneous coal zone in the As Pontes Basin (NW Spain). Coal distribution in this zone is mainly characterized by coal-dominated areas in the central parts of the basin interfingering with terrigenous-dominated alluvial fan zones at the margins. The three-dimensional models obtained are applied to forecast coal resources and reserves. Predictions using subsets of the entire dataset are also generated to understand the performance of the methods under limited data constraints. Three-dimensional facies interpolation methods tend to overestimate coal resources and reserves due to interpolation smoothing. Facies simulation methods yield resource predictions similar to those of conventional thickness-map approximations. Reserves predicted by facies simulation methods are mainly influenced by: a) the specific coal-proportion threshold used to determine whether a block can be recovered, and b) the capability of the modelling strategy to reproduce areal trends in coal proportions and the splitting between coal-dominated and terrigenous-dominated areas of the basin. Reserve predictions differ between the simulation methods, even with dense conditioning datasets. The simulation methods can be ranked according to the correlation of their outputs with predictions from the directly interpolated coal-proportion maps: a) with low-density datasets, sequential indicator simulation with trends yields the best correlation; b) with high-density datasets, sequential indicator simulation with post-processing yields the best correlation, because the areal trends are provided implicitly by the dense conditioning data.
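The role of the coal-proportion threshold in reserve estimates can be sketched as follows; the block size, density, and simulated proportions below are hypothetical, not values from the study:

```python
def recoverable_tonnage(blocks, threshold=0.5,
                        block_volume=50 * 50 * 5, coal_density=1.3):
    """Tally reserves from a simulated facies model: a block contributes
    coal tonnage only if its simulated coal proportion reaches the
    recoverability threshold -- the parameter the abstract identifies
    as a main control on reserve estimates. block_volume in m^3,
    coal_density in t/m^3 (hypothetical values)."""
    tonnes = 0.0
    for coal_fraction in blocks:
        if coal_fraction >= threshold:
            tonnes += coal_fraction * block_volume * coal_density
    return tonnes

blocks = [0.9, 0.7, 0.45, 0.2, 0.65]     # simulated coal proportions
reserves_50 = recoverable_tonnage(blocks, threshold=0.5)
reserves_40 = recoverable_tonnage(blocks, threshold=0.4)
print(reserves_50, reserves_40)          # a lower bar admits more blocks
```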
Abstract:
The goal of this project is the integration of a set of technologies (graphics, physical simulation, input) with the aim of assembling an application framework in Python. In this research, a set of key introductory concepts is presented, together with a deep study of the state of the art of 3D applications. Python is selected and justified as the programming language due to the features and advantages it offers over other languages. Finally, the design and implementation of the framework are presented in the last chapter, with some client application examples.
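The framework idea described (pluggable graphics/physics/input subsystems driven by one main loop) can be sketched as below; the class and method names are hypothetical, not the project's actual API:

```python
import time

class Framework:
    """Minimal sketch of an application framework: subsystems register
    themselves and are stepped once per frame by a single main loop."""
    def __init__(self):
        self.subsystems = []
        self.running = False

    def register(self, subsystem):
        """Add a subsystem; order matters (e.g. input -> physics -> graphics)."""
        self.subsystems.append(subsystem)

    def run(self, max_frames=None):
        """Drive all subsystems with the elapsed time per frame (dt)."""
        self.running = True
        frames = 0
        last = time.monotonic()
        while self.running:
            now = time.monotonic()
            dt, last = now - last, now
            for s in self.subsystems:
                s.update(dt)
            frames += 1
            if max_frames is not None and frames >= max_frames:
                self.running = False
        return frames

class CountingSystem:
    """Stand-in subsystem that just counts its updates."""
    def __init__(self):
        self.updates = 0

    def update(self, dt):
        self.updates += 1

fw = Framework()
counter = CountingSystem()
fw.register(counter)
frames = fw.run(max_frames=3)
print(frames, counter.updates)   # 3 3
```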
Abstract:
Both the intermolecular interaction energies and the geometries for M···thiophene, M···pyrrole, M^n+···thiophene, and M^n+···pyrrole complexes (with M = Li, Na, K, Ca, and Mg; and M^n+ = Li+, Na+, K+, Ca2+, and Mg2+) have been estimated using four commonly used density functional theory (DFT) methods: B3LYP, B3PW91, PBE, and MPW1PW91. Results have been compared to those provided by the HF, MP2, and MP4 conventional ab initio methods. PBE and MPW1PW91 are the only DFT methods able to provide a reasonable description of the M··· complexes. Regarding the M^n+··· complexes, the four DFT methods have proven to be adequate in the prediction of these electrostatically stabilized systems, even though they tend to overestimate the interaction energies.
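For reference, the intermolecular interaction energies compared across these methods are the usual supermolecular differences,

```latex
E_{\mathrm{int}} = E_{\mathrm{M \cdots L}} - E_{\mathrm{M}} - E_{\mathrm{L}},
```

evaluated with the complex and the isolated metal (M or M^n+) and ligand (L = thiophene or pyrrole) computed at the same level of theory; "overestimating the interaction energy" then means predicting an E_int of larger magnitude (more negative) than the MP2/MP4 reference values.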