966 results for ORDER ACCURACY APPROXIMATIONS
Abstract:
Soil tillage mechanization is, owing to its energy consumption and its direct impact on the environment, the factor that most contributes to soil degradation and loss of productivity. Compaction, erosion, crusting and loss of structure should be considered among the factors that decrease productivity. All of this makes careful agricultural soil management necessary, seeking to improve soil conditions and raise yields without compromising economic, ecological and environmental aspects. In the present work, the constitutive parameters of the Extended Drucker-Prager (DPE) model that define soil friction and dilatancy in the plastic deformation phase are adjusted in order to minimize prediction errors when simulating the mechanical response of a Vertisol with the Finite Element Method. To this end, the theoretical foundations of the model were first analyzed; the soil properties and physical-mechanical parameters required as model inputs were determined; the accuracy of the model in predicting the mechanical response of the soil was assessed; and the constitutive parameters that define the path of the plastic stress-strain curve were estimated with the Levenberg-Marquardt function-approximation method. Finally, the accuracy of the predictions obtained with the adjusted model was verified. The results made it possible to determine the soil properties and parameters required as model inputs, showing that their magnitudes depend on the soil's moisture content and density; empirical models of these relations were obtained with R² > 94%. The variables responsible for the inaccuracies of the constitutive model (friction angle and dilatancy angle) were identified and shown to be associated with the failure and plastic-deformation stage. Finally, the optimal values of these angles were estimated, reducing the prediction errors of the DPE model to below 4.35% and making it suitable for simulating the mechanical response of the soil under study.
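As an illustration of the parameter-estimation step described in the abstract, the following is a minimal sketch of fitting two constitutive angles to a stress-strain curve with the Levenberg-Marquardt algorithm (SciPy's least_squares with method="lm"). The hardening law, the synthetic data and the parameter names beta and psi are invented placeholders, not the thesis' DPE formulation.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Synthetic "measured" plastic stress-strain data (strain [-], stress [kPa]).
strain = np.linspace(0.0, 0.12, 25)

def stress_model(params, eps):
    """Toy hardening law standing in for the DPE yield response.

    params[0] ~ friction angle beta [deg], params[1] ~ dilatancy angle psi [deg].
    The functional form is illustrative only, not the actual DPE formulation.
    """
    beta, psi = params
    strength = 100.0 * np.tan(np.radians(beta))              # frictional strength contribution
    eps_ref = 0.03 / (1.0 + 0.5 * np.tan(np.radians(psi)))   # dilatancy-dependent hardening rate
    return strength * (1.0 - np.exp(-eps / eps_ref))

true_params = (38.0, 12.0)
stress_meas = stress_model(true_params, strain) + rng.normal(0.0, 0.5, strain.size)

def residuals(params):
    return stress_model(params, strain) - stress_meas

# Levenberg-Marquardt (method="lm") minimizes the sum of squared residuals.
fit = least_squares(residuals, x0=[30.0, 5.0], method="lm")
print("estimated (beta, psi):", np.round(fit.x, 2), "  true:", true_params)
```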
Abstract:
On reading various articles in the Revista de Obras Públicas (Jiménez Salas, 1945), one recalls such leading figures as Coulomb (1773), Poncelet (1840), Rankine (1856), Culmann (1866), Mohr (1871) and Boussinesq (1876), among many others, who built the basis of a body of knowledge that would gradually ease the complicated task of construction. However, their advances were approximations that showed considerable discrepancies with the behaviour of nature, and there came a time when those discrepancies became all too evident. Substantial settlements during the construction of modern buildings, embankment dam failures and large landslides, for example during the construction of the Panama Canal, led the American Society of Civil Engineers (ASCE) to create a committee to analyse the construction practices of the time. Similar events occurred in Europe, for example in railway cuttings, which in the case of Sweden caused heavy material and human losses. In his professional practice, the Austrian-American engineer Karl Terzaghi (1883) had seen the lack of knowledge available to face many of the challenges posed by nature. He first sought an answer in geology, only to find that it lacked the precision required for engineering practice, so he threw himself into tireless research based on the experimental method. He began in 1917 with limited means, but soon developed tests that allowed him to establish the first concepts of a new science, Soil Mechanics, which saw the light in 1925 with the publication of his book Erdbaumechanik auf bodenphysikalischer Grundlage. Other figures quickly began to make their own scientific and outreach contributions, such as the Austrian-American engineer Arthur Casagrande (1902), whose initiative to organize the first International Congress of Soil Mechanics and Foundation Engineering gave the new science the platform it needed for its diffusion. At the same time, more international figures joined this period of great advances and innovative points of view; the likes of Alec Skempton (1914) in the United Kingdom, Ralph Peck (1912) in the United States and Laurits Bjerrum (1918) in Norway stood out among the greatest of their time. This thesis investigates the lives of these geotechnical engineers, authors of many of the scientific advances of the new science called Soil Mechanics. All of these great figures held the presidency, in different periods, of the International Society of Soil Mechanics and Foundation Engineering, a fact recorded in biographies drawn from sources of varied provenance and from cross-checked data on these extraordinary geotechnical engineers. Thus, the biographies of Terzaghi, Casagrande, Skempton, Peck and Bjerrum not only contribute to knowledge of each individual, but together offer a privileged vantage point for understanding the developments experienced by Soil Mechanics in the second third of the 20th century and, in some cases, up to the dawn of the 21st. The scientific contributions of these geotechnical engineers also find their place in the technical part of this thesis, in which their initial individual contributions, which make up the various chapters, retain their original points of view, allowing a vision of the principles of Soil Mechanics from its very origin.
Abstract:
In this paper three p-adaptation strategies based on the minimization of the truncation error are presented for high-order discontinuous Galerkin methods. The truncation error is approximated by means of a τ-estimation procedure and enables the identification of mesh regions that require adaptation. Three adaptation strategies are developed and termed a posteriori, quasi-a priori and quasi-a priori corrected. All strategies require fine solutions, which are obtained by enriching the polynomial order, but while the former needs time-converged solutions, the last two rely on non-converged solutions, which leads to faster computations. In addition, the high-order method permits the spatial decoupling of the estimated errors and enables anisotropic p-adaptation. These strategies are verified and compared in terms of accuracy and computational cost for the Euler and the compressible Navier-Stokes equations. It is shown that the two quasi-a priori methods achieve a significant reduction in computational cost when compared to a uniform polynomial enrichment. Namely, for a viscous boundary layer flow, we obtain speedups of 6.6 and 7.6 for the quasi-a priori and quasi-a priori corrected approaches, respectively.
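The τ-estimation idea summarized above can be stated compactly; the sketch below uses the standard definition of the truncation error with generic symbols (discrete residual operator R_p, transfer operator I_P^p) and is not transcribed from the paper.

```latex
% Truncation error of the order-p discretization: the exact solution u inserted
% into the discrete residual operator R_p (which satisfies R_p(u_p) = 0 for the
% order-p discrete solution u_p):
\tau_p \;=\; \mathcal{R}_p(u).
% tau-estimation: replace the unknown u by a "fine" solution u_P of higher order
% P > p, transferred to the order-p space; in the quasi-a priori variants u_P
% need not be fully time-converged:
\hat{\tau}_p \;\approx\; \mathcal{R}_p\!\left(I_P^{\,p}\, u_P\right), \qquad P > p.
```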
Abstract:
The mathematical underpinning of the pulse width modulation (PWM) technique lies in the attempt to represent "accurately" harmonic waveforms using only square forms of a fixed height. The accuracy can be measured using many norms, but the quality of the approximation of the analog signal (a harmonic form) by a digital one (simple pulses of a fixed high voltage level) requires the elimination of high-order harmonics in the error term. The most important practical problem is the "accurate" reproduction of a sine wave using the same number of pulses as the number of high harmonics eliminated. We describe in this paper a complete solution of the PWM problem using Padé approximations, orthogonal polynomials, and solitons. The main result of the paper is the characterization of discrete pulses answering the general PWM problem in terms of the manifold of all rational solutions to the Korteweg-de Vries equation.
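To make concrete the idea that approximation quality is judged by the low-order harmonic content of the error, here is a small numerical illustration that is independent of the paper's Padé/soliton machinery: it builds a naive two-level waveform of fixed height from a sine wave and inspects the Fourier coefficients of the error; all sampling choices are arbitrary.

```python
import numpy as np

N = 4096                                   # samples per fundamental period
t = np.arange(N) / N                       # normalized time over one period
reference = np.sin(2 * np.pi * t)          # analog target: a single harmonic

# Crude two-level "digital" approximation of fixed height +/-1
# (real PWM schemes choose the switching instants far more carefully).
pwm = np.where(reference >= 0.0, 1.0, -1.0)

error = pwm - reference
spectrum = np.fft.rfft(error) / (N / 2)    # harmonic amplitudes of the error

for n in range(1, 10):
    print(f"harmonic {n}: |error amplitude| = {abs(spectrum[n]):.4f}")
```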
Abstract:
An electronic phase with coexisting magnetic and ferroelectric order is predicted for graphene ribbons with zigzag edges. The electronic structure of the system is described with a mean-field Hubbard model that yields results very similar to those of density functional calculations. Without further approximations, the mean-field theory is recast in terms of a BCS wave function for electron-hole pairs in the edge bands. The BCS coherence present in each spin channel is related to a spin-resolved electric polarization. Although the total electric polarization vanishes, due to an internal phase locking of the BCS state, strong magnetoelectric effects are expected in this system. The formulation naturally accounts for the two gaps in the quasiparticle spectrum, Δ0 and Δ1, and relates them to the intraband and interband self-energies.
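As a schematic illustration of the mean-field Hubbard treatment mentioned above, the sketch below runs a self-consistent collinear mean-field loop for a small one-dimensional tight-binding chain; this toy chain, the hopping t, the interaction U and the filling are stand-ins chosen for brevity, not the zigzag-ribbon geometry of the paper.

```python
import numpy as np

L, t, U = 12, 1.0, 2.0          # sites, hopping, on-site interaction
n_up = n_dn = L // 2            # electrons per spin (half filling)

# One-dimensional tight-binding Hamiltonian with open boundaries.
H0 = np.zeros((L, L))
for i in range(L - 1):
    H0[i, i + 1] = H0[i + 1, i] = -t

# Staggered (antiferromagnetic) seed to break spin symmetry.
stagger = 0.1 * (-1.0) ** np.arange(L)
dens = {"up": 0.5 + stagger, "dn": 0.5 - stagger}

for _ in range(200):            # self-consistency loop
    new = {}
    for s, sbar in (("up", "dn"), ("dn", "up")):
        # Mean-field decoupling: spin-s electrons feel U * <n_sbar> on each site.
        H = H0 + np.diag(U * dens[sbar])
        _, vecs = np.linalg.eigh(H)
        nocc = n_up if s == "up" else n_dn
        new[s] = (vecs[:, :nocc] ** 2).sum(axis=1)   # site densities of occupied orbitals
    if max(np.abs(new[s] - dens[s]).max() for s in new) < 1e-8:
        dens = new
        break
    dens = {s: 0.5 * dens[s] + 0.5 * new[s] for s in new}  # simple linear mixing

print("local magnetization <n_up - n_dn>:")
print(np.round(dens["up"] - dens["dn"], 3))
```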
Abstract:
Subpixel techniques are commonly used to increase the spatial resolution in tracking tasks. Object tracking with targets of known shape permits obtaining information about object position and orientation in three-dimensional space. A proper selection of the target shape allows us to determine its position inside a plane and its angular and azimuthal orientation within certain limits. Our proposal is demonstrated both numerically and experimentally and provides an increase in accuracy of more than one order of magnitude compared to the nominal resolution of the sensor. The experiment has been performed with a high-speed camera, which simultaneously provides high spatial and temporal resolution, so it may be interesting for applications where this kind of target can be attached, such as vibration monitoring and structural analysis.
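A minimal sketch of the kind of subpixel refinement referred to above: locate a target by FFT cross-correlation and refine the integer peak with a parabolic fit of its neighbours. This is a generic estimator, not necessarily the authors' method, and the synthetic spot images and the imposed shift are invented for the example.

```python
import numpy as np

def gaussian_spot(shape, center, sigma=3.0):
    """Synthetic image of a Gaussian spot centered at (row, col)."""
    y, x = np.indices(shape)
    return np.exp(-((x - center[1]) ** 2 + (y - center[0]) ** 2) / (2 * sigma**2))

template = gaussian_spot((64, 64), (32.0, 32.0))
image = gaussian_spot((64, 64), (31.6, 32.3))   # target displaced by (-0.4, +0.3) pixels

# Circular cross-correlation via FFT (adequate for this centered toy case).
corr = np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(template))).real
corr = np.fft.fftshift(corr)                    # zero lag now at (32, 32)
py, px = np.unravel_index(np.argmax(corr), corr.shape)

def parabolic_offset(cm, c0, cp):
    """Vertex of a parabola through three equally spaced samples."""
    denom = cm - 2 * c0 + cp
    return 0.0 if denom == 0 else 0.5 * (cm - cp) / denom

dy = parabolic_offset(corr[py - 1, px], corr[py, px], corr[py + 1, px])
dx = parabolic_offset(corr[py, px - 1], corr[py, px], corr[py, px + 1])
shift = (py + dy - 32, px + dx - 32)
print("estimated (dy, dx):", shift)             # close to (-0.4, +0.3)
```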
Abstract:
Confining light and controlling an optical field have numerous applications in telecommunications for optical signal processing. When the wavelength of the electromagnetic field is on the order of the period of a photonic microstructure, the field undergoes reflection, refraction, and coherent scattering. This produces photonic bandgaps, forbidden frequency regions or spectral stop bands where light cannot exist. Dielectric perturbations that break the perfect periodicity of these structures produce what is analogous to an impurity state in the bandgap of a semiconductor. The defect modes that exist at discrete frequencies within the photonic bandgap are spatially localized about the cavity defects in the photonic crystal. In this thesis the properties of two tight-binding approximations (TBAs) are investigated in one-dimensional and two-dimensional coupled-cavity photonic crystal structures. We require an efficient and simple approach that ensures the continuity of the electromagnetic field across dielectric interfaces in complex structures. In this thesis we develop E- and D-TBAs to calculate the modes in finite 1D and 2D two-defect coupled-cavity photonic crystal structures. In the E- and D-TBAs we expand the coupled-cavity E-modes in terms of the individual E- and D-modes, respectively. We investigate the dependence of the defect modes, their frequencies and quality factors on the relative placement of the defects in the photonic crystal structures. We then elucidate the differences between the two TBA formulations, and describe the conditions under which these formulations may be more robust when encountering a dielectric perturbation. Our 1D analysis showed that the 1D modes were sensitive to the structure geometry. The antisymmetric D-mode amplitudes show that the D-TBA did not capture the correct (tangential E-field) boundary conditions. However, the D-TBA did not yield significantly poorer results compared to the E-TBA. Our 2D analysis reveals that the E- and D-TBAs produced nearly identical mode profiles for every structure. Plots of the relative difference between the E- and D-mode amplitudes show that the D-TBA did capture the correct (normal E-field) boundary conditions. We found that the 2D TBA coupled-cavity mode calculations were 125-150 times faster than an FDTD calculation for the same two-defect photonic crystal structure. Notwithstanding this efficiency, the appropriateness of either TBA was found to depend on the geometry of the structure and the mode(s), i.e. whether or not the mode has a large normal or tangential component.
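The basic structure of the two expansions compared in the thesis can be summarized as follows; the symbols (expansion coefficients a_i, b_i, overlap matrix S) are generic and only indicate the form of a tight-binding ansatz, not the thesis' exact equations.

```latex
% E-type ansatz: the coupled-cavity field expanded in the individual-cavity E-modes
\vec{E}(\vec{r}) \;\approx\; \sum_i a_i\, \vec{E}_i(\vec{r}),
% D-type ansatz: the same expansion written for the displacement field
\vec{D}(\vec{r}) \;\approx\; \sum_i b_i\, \vec{D}_i(\vec{r})
                 \;=\; \sum_i b_i\, \varepsilon_i(\vec{r})\, \vec{E}_i(\vec{r}).
% Projecting the wave equation onto the chosen basis yields a small generalized
% eigenvalue problem for the coupled-cavity frequencies omega:
\left( H - \frac{\omega^2}{c^2}\, S \right)\mathbf{a} = 0,
\qquad
S_{ij} = \int \varepsilon(\vec{r})\, \vec{E}_i^{\,*}(\vec{r})\cdot\vec{E}_j(\vec{r})\, d^3r .
```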
Abstract:
Electrical energy storage is a very important issue nowadays. Since electricity is not easy to store directly, it can be stored in other forms and converted back to electricity when needed. As a consequence, storage technologies for electricity can be classified by the form of storage; here we focus on electrochemical energy storage systems, better known as electrochemical batteries. By far the most widespread batteries are lead-acid ones, in the two main types known as flooded and valve-regulated. Batteries are needed in many important applications, such as renewable energy systems and motor vehicles. Consequently, in order to simulate these complex electrical systems, reliable battery models are needed. Although some models developed by chemistry experts exist, they are too complex and are not expressed in terms of electrical networks. Thus, they are not convenient for practical use by electrical engineers, who need to interface these models with other electrical system models, usually described by means of electrical circuits. There are many techniques available in the literature by which a battery can be modelled. Starting from the Thévenin-based electrical model, it can be adapted to better represent the lead-acid battery type by adding a parasitic-reaction branch and a parallel network. The third-order formulation of this model can be chosen, as it is a trustworthy general-purpose model characterized by a good ratio between accuracy and complexity. Considering the equivalent circuit network, all the equations describing the battery model are discussed and then implemented one by one in Matlab/Simulink. The model was finally validated and then used to simulate the battery behaviour in different typical conditions.
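For orientation, here is a minimal sketch of the kind of equivalent-circuit simulation described above, reduced to a first-order Thévenin model (open-circuit voltage, series resistance and a single RC branch) rather than the full third-order lead-acid model with its parasitic-reaction branch; all parameter values and the linear OCV-SOC law are invented placeholders.

```python
import numpy as np

# Hypothetical parameters of a first-order Thevenin equivalent circuit.
E0, R0 = 12.6, 0.05        # open-circuit voltage [V], series resistance [ohm]
R1, C1 = 0.02, 2000.0      # RC branch: resistance [ohm], capacitance [F]
Q = 60.0 * 3600.0          # nominal capacity [C] (60 Ah)

dt, t_end = 1.0, 3600.0    # time step [s], simulated duration [s]
i_load = 10.0              # constant discharge current [A]

soc, v_rc, v_term = 1.0, 0.0, E0   # state of charge, RC voltage, terminal voltage
for _ in range(int(t_end / dt)):
    soc -= i_load * dt / Q                    # Coulomb counting for the state of charge
    v_rc += dt / C1 * (i_load - v_rc / R1)    # RC branch: C1*dv/dt = i - v/R1 (explicit Euler)
    ocv = E0 - 1.0 * (1.0 - soc)              # crude linear OCV-SOC relation (placeholder)
    v_term = ocv - R0 * i_load - v_rc         # terminal voltage under load

print(f"after {t_end/3600:.1f} h: SOC = {soc:.3f}, terminal voltage = {v_term:.2f} V")
```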
Abstract:
Objective: To improve the accuracy and completeness of reporting of studies of diagnostic accuracy, to allow readers to assess the potential for bias in a study, and to evaluate a study's generalisability. Methods: The Standards for Reporting of Diagnostic Accuracy (STARD) steering committee searched the literature to identify publications on the appropriate conduct and reporting of diagnostic studies and extracted potential items into an extensive list. Researchers, editors, and members of professional organisations shortened this list during a two-day consensus meeting, with the goal of developing a checklist and a generic flow diagram for studies of diagnostic accuracy. Results: The search for published guidelines about diagnostic research yielded 33 previously published checklists, from which we extracted a list of 75 potential items. At the consensus meeting, participants shortened the list to a 25-item checklist, by using evidence whenever available. A prototype of a flow diagram provides information about the method of patient recruitment, the order of test execution, and the numbers of patients undergoing the test under evaluation and the reference standard, or both. Conclusions: Evaluation of research depends on complete and accurate reporting. If medical journals adopt the STARD checklist and flow diagram, the quality of reporting of studies of diagnostic accuracy should improve to the advantage of clinicians, researchers, reviewers, journals, and the public.
Abstract:
In the past, the accuracy of facial approximations has been assessed by resemblance ratings (i.e., the comparison of a facial approximation directly to a target individual) and recognition tests (e.g., the comparison of a facial approximation to a photo array of faces including foils and a target individual). Recently, several research studies have indicated that recognition tests hold major strengths in contrast to resemblance ratings. However, resemblance ratings remain popularly employed and/or are given weighting when judging facial approximations, thus indicating that no consensus has been reached. This study aims to further investigate the matter by comparing the results of resemblance ratings and recognition tests for two facial approximations which clearly differed in their morphological appearance. One facial approximation was constructed by an experienced practitioner privy to the appearance of the target individual (the practitioner had direct access to an antemortem frontal photograph during face construction), while the other facial approximation was constructed by a novice under blind conditions. Both facial approximations, whilst clearly morphologically different, were given similar resemblance scores even though recognition tests produced vastly different results. One facial approximation was correctly recognized almost without exception while the other was not correctly recognized above chance rates. These results suggest that resemblance ratings are insensitive measures of the accuracy of facial approximations and lend further weight to the use of recognition tests in facial approximation assessment.
Abstract:
The performance of seven minimization algorithms is compared on five neural network problems. These include a variable-step-size algorithm, conjugate gradient, and several methods with explicit analytic or numerical approximations to the Hessian.
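In the same spirit as the comparison summarized above, the short sketch below times a few generic minimizers from SciPy (a gradient-free simplex method, conjugate gradient, and quasi-Newton methods that build approximate Hessians) on the Rosenbrock test function; both the algorithm set and the test problem are stand-ins, not those used in the paper.

```python
import time
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x0 = np.full(10, -1.5)                     # common starting point for all methods
for method in ("Nelder-Mead", "CG", "BFGS", "L-BFGS-B"):
    jac = None if method == "Nelder-Mead" else rosen_der   # gradient where the method uses one
    t0 = time.perf_counter()
    res = minimize(rosen, x0, jac=jac, method=method,
                   options={"maxiter": 20000, "xatol": 1e-8} if method == "Nelder-Mead"
                   else {"maxiter": 20000})
    elapsed = time.perf_counter() - t0
    print(f"{method:12s}  f* = {res.fun:.3e}  nfev = {res.nfev:6d}  time = {elapsed:.3f} s")
```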
Abstract:
We study the effects of temperature and strain on the spectra of the first- and second-order diffraction attenuation bands of a single long-period grating (LPG) in step-index fibre. The primary and second-order attenuation bands had comparable strength, with the second-order bands appearing in the visible and near-infrared parts of the spectrum. Using first- and second-order diffraction to the eighth cladding mode, a sensitivity matrix was obtained with a limiting accuracy given by a cross-sensitivity of ~1.19% of the measurement. The sensing scheme presented limiting temperature and strain resolutions of ±0.7 °C and ~±25 µε, respectively.
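To illustrate how a sensitivity matrix of the kind mentioned above is used, the sketch below recovers temperature and strain changes from the wavelength shifts of two attenuation bands by solving a 2x2 linear system; the coefficients and measured shifts are invented for the example, not the calibrated values of the paper.

```python
import numpy as np

# Hypothetical sensitivity matrix K: rows = attenuation bands, columns = (T, strain).
# Units: nm per degC and nm per microstrain.
K = np.array([[0.050, -0.0004],
              [0.120, -0.0011]])

dlam = np.array([0.62, 1.45])          # measured wavelength shifts [nm] (made up)

# Invert the 2x2 system  K @ [dT, deps] = dlam  to separate the two measurands.
dT, deps = np.linalg.solve(K, dlam)
print(f"temperature change ~ {dT:.2f} degC, strain change ~ {deps:.0f} microstrain")

# The condition number indicates how measurement noise is amplified by the inversion.
print("condition number of K:", np.linalg.cond(K))
```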
Abstract:
Aim: The aim of this study was to evaluate the practicality and accuracy of tonometers used in routine clinical practice for established keratoconus (KC). Methods: This was a prospective study of 118 normal and 76 keratoconic eyes in which intraocular pressure (IOP) was measured in random order using the Goldmann applanation tonometer (GAT), Pascal dynamic contour tonometer (DCT), Reichert ocular response analyser (ORA) and TonoPen XL tonometer. Corneal hysteresis (CH) and corneal resistance factor (CRF), as calculated by the ORA, were recorded. Central corneal thickness (CCT) was measured using an ultrasound pachymeter. Results: The difference in IOP values between instruments was highly significant in both study groups (p<0.001). All other IOP measures were significantly higher than those for GAT, except for the Goldmann-correlated IOP (IOPg; the average of the two applanation pressure points) as measured by the ORA in the control group and the CH-corrected, corneal-compensated IOP (IOPcc) measures in the KC group. CCT, CH and CRF were significantly lower in the KC group (p<0.001). Apart from the DCT, all techniques tended to measure IOP higher in eyes with thicker corneas. Conclusion: The DCT and the ORA are currently the most appropriate tonometers to use in KC for the measurement of IOPcc. Corneal factors such as CH and CRF may be of more importance than CCT in causing inaccuracies in applanation tonometry techniques.
Abstract:
We develop a perturbation analysis that describes the effect of third-order dispersion on the similariton pulse solution of the nonlinear Schrödinger equation in a fibre gain medium. The theoretical model predicts with sufficient accuracy the induced pulse structural changes, which are observed through direct numerical simulations.
Abstract:
Recent developments in nonlinear optics reveal an interesting class of pulses with a parabolic intensity profile in the energy-containing core and a linear frequency chirp that can propagate in a fibre with normal group-velocity dispersion. Parabolic pulses propagate in a stable self-similar manner, holding certain relations (scaling) between pulse power, width, and chirp parameter. In the additional presence of linear amplification, they enjoy the remarkable property of representing a common asymptotic state (or attractor) for arbitrary initial conditions. Analytically, self-similar (SS) parabolic pulses can be found as asymptotic, approximate solutions of the nonlinear Schrödinger equation (NLSE) with gain in the semi-classical (large-amplitude/small-dispersion) limit. By analogy with the well-known stable dynamics of solitary waves (solitons), these SS parabolic pulses have come to be known as similaritons. In practical fibre systems, inherent third-order dispersion (TOD) in the fibre always introduces a certain degree of asymmetry in the structure of the propagating pulse, eventually leading to pulse break-up. To date, there is no analytic theory of parabolic pulses under the action of TOD. Here, we develop a WKB perturbation analysis that describes the effect of weak TOD on the parabolic pulse solution of the NLSE in a fibre gain medium. The induced perturbation in phase and amplitude can be found to any order. The theoretical model predicts with sufficient accuracy the pulse structural changes induced by TOD, which are observed through direct numerical NLSE simulations.
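For reference, the governing equation and the self-similar parabolic asymptotic state discussed above can be written schematically as follows; these are standard textbook forms with generic symbols (and a sign convention that may differ from the paper's), not equations transcribed from it.

```latex
% NLSE with constant gain g and third-order dispersion beta_3 for the envelope A(z,t):
i\,\frac{\partial A}{\partial z}
  \;=\; \frac{\beta_2}{2}\,\frac{\partial^2 A}{\partial t^2}
  \;+\; \frac{i\,\beta_3}{6}\,\frac{\partial^3 A}{\partial t^3}
  \;-\; \gamma\,|A|^2 A
  \;+\; \frac{i\,g}{2}\,A .
% For beta_3 = 0 and normal dispersion (beta_2 > 0), the asymptotic self-similar
% (similariton) solution has a parabolic intensity profile and a quadratic phase,
% i.e. a linear frequency chirp:
A(z,t) \;\simeq\; A_0(z)\,\sqrt{\,1-\bigl(t/T_0(z)\bigr)^{2}\,}\;
        \exp\!\bigl[\,i\varphi_0(z) + iC(z)\,t^{2}\,\bigr],
\qquad |t| \le T_0(z),
% with the amplitude A_0(z) and width T_0(z) growing exponentially with the gain.
```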