854 results for Round-off Errors


Relevance:

100.00%

Publisher:

Abstract:

We analyze the sequences of round-off errors of the orbits of a discretized planar rotation, from a probabilistic angle. It was shown [Bosio & Vivaldi, 2000] that for a dense set of parameters, the discretized map can be embedded into an expanding p-adic dynamical system, which serves as a source of deterministic randomness. For each parameter value, these systems can generate infinitely many distinct pseudo-random sequences over a finite alphabet, whose average period is conjectured to grow exponentially with the bit-length of the initial condition (the seed). We study some properties of these symbolic sequences, deriving a central limit theorem for the deviations between round-off and exact orbits, and obtain bounds concerning repetitions of words. We also explore some asymptotic problems computationally, verifying, among other things, that the occurrence of words of a given length is consistent with that of an abstract Bernoulli sequence.
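
As background, here is a minimal numerical sketch of the kind of round-off sequence in question, assuming the standard lattice form of the discretized rotation studied by Bosio & Vivaldi; the parameter value and seed below are illustrative choices, not those of the paper.

```python
import math

# Deviations between a round-off (lattice) orbit and the exact orbit of the
# underlying linear rotation. Assumed form: (x, y) -> (floor(lam*x) - y, x)
# with lam = 2*cos(2*pi*nu); nu and the seed are arbitrary illustrations.
nu = 1 / 7
lam = 2 * math.cos(2 * math.pi * nu)

def step_exact(p):
    x, y = p
    return (lam * x - y, x)              # exact map, conjugate to a rotation

def step_rounded(p):
    x, y = p
    return (math.floor(lam * x) - y, x)  # same map rounded to the lattice

exact, rounded = (100.0, 0.0), (100, 0)  # identical seeds
deviations = []
for _ in range(25):
    exact, rounded = step_exact(exact), step_rounded(rounded)
    deviations.append(rounded[0] - exact[0])

print(deviations)                        # the sequence studied probabilistically
```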

Relevance:

100.00%

Publisher:

Abstract:

"May 10, 1962."

Relevance:

100.00%

Publisher:

Abstract:

Ensemble-based data assimilation is rapidly proving itself as a computationally efficient and skilful assimilation method for numerical weather prediction, which can provide a viable alternative to more established variational assimilation techniques. However, a fundamental shortcoming of ensemble techniques is that the resulting analysis increments can only span a limited subspace of the state space, whose dimension is less than the ensemble size. This limits the amount of observational information that can effectively constrain the analysis. In this paper, a data selection strategy is presented that aims to assimilate only the observational components that matter most and that can be used with both stochastic and deterministic ensemble filters. This avoids unnecessary computations, reduces round-off errors and minimizes the risk of importing observation bias into the analysis. When an ensemble-based assimilation technique is used to assimilate high-density observations, the data-selection procedure allows the use of larger localization domains, which may lead to a more balanced analysis. Results from the use of this data selection technique with a two-dimensional linear and a nonlinear advection model, using both in situ and remote sounding observations, are discussed.
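
A sketch for orientation: the abstract does not spell out the selection criterion, so the code below only shows the generic stochastic (perturbed-observation) EnKF analysis step that such a data-selection strategy would plug into; all dimensions and values are illustrative.

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """Stochastic EnKF analysis. X: (n, m) state ensemble; y: (p,) observation;
    H: (p, n) observation operator; R: (p, p) observation-error covariance."""
    n, m = X.shape
    Xp = X - X.mean(axis=1, keepdims=True)       # ensemble perturbations
    S = H @ Xp                                   # perturbations in obs space
    C = S @ S.T / (m - 1) + R                    # innovation covariance
    K = (Xp @ S.T / (m - 1)) @ np.linalg.inv(C)  # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, m).T
    return X + K @ (Y - H @ X)                   # increments lie in span(Xp)

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 20))                    # 10-dim state, 20 members
H, R = np.eye(3, 10), 0.1 * np.eye(3)            # observe first 3 components
Xa = enkf_update(X, rng.normal(size=3), H, R, rng)
```

The update makes the subspace limitation explicit: every analysis increment is a linear combination of the m ensemble perturbations, which is why selecting the most informative observation components matters.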

Relevance:

100.00%

Publisher:

Abstract:

The flow around a smooth fixed circular cylinder over a large range of Reynolds numbers is considered in this paper. In order to investigate this canonical case, we perform CFD calculations and apply verification & validation (V&V) procedures to draw conclusions regarding the numerical error and, afterwards, assess the modeling errors and the capability of this (U)RANS method to solve the problem. Eight Reynolds numbers between Re = 10 and Re = 5 × 10^5 are presented with at least four geometrically similar grids and five time discretizations for each case (when unsteady), together with strict control of iterative and round-off errors, allowing a consistent verification analysis with uncertainty estimation. Two-dimensional RANS calculations, steady or unsteady, laminar or turbulent, are performed. The original 1994 k-ω SST turbulence model by Menter is used to model turbulence. The validation procedure is performed by comparing the numerical results with an extensive set of experimental results compiled from the literature. [DOI: 10.1115/1.4007571]
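
For readers unfamiliar with the verification half of V&V, the sketch below shows the textbook Richardson-extrapolation step for estimating the observed order of accuracy and the discretization error from three geometrically similar grids; the input values are illustrative, not results from the paper.

```python
import math

def observed_order(f1, f2, f3, r):
    """f1/f2/f3: solution values on fine/medium/coarse grids, refinement ratio r."""
    return math.log((f3 - f2) / (f2 - f1)) / math.log(r)

def extrapolate(f1, f2, p, r):
    return f1 + (f1 - f2) / (r**p - 1)    # estimate of the grid-independent value

f1, f2, f3, r = 1.180, 1.192, 1.220, 2.0  # e.g. a drag coefficient (illustrative)
p = observed_order(f1, f2, f3, r)
f_star = extrapolate(f1, f2, p, r)
print(p, f_star, abs(f1 - f_star))        # order, limit value, error estimate
```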

Relevance:

90.00%

Publisher:

Abstract:

Students in the first cycle of ESO (compulsory secondary education) face the difficulties of a new stage and the need to demonstrate their linguistic competence in drafting and correcting a variety of texts. The new information and communication technologies can help develop these skills (and should foster them), and should act as a motivating element so that students feel the need to create their own written work that is linguistically appropriate, coherent and well organized. It is also necessary that the assessment of these tasks helps them improve their expression in Catalan, both grammatically and orthographically, by helping them establish systems of self-correction and error rectification. The public exposure of their work (in a medium such as the Internet) should reinforce their motivation to write, bringing them new expectations of communication and encouraging them to take part in the virtual space offered by this window open to the world. My work is intended as an educational project to achieve an improvement in the written expression of first-year ESO students with the help of the tools of Information and Communication Technologies.

Relevance:

80.00%

Publisher:

Abstract:

The purpose of this study is to investigate the actual orthodontic and orthopaedic effects of Klammt's Elastic Open Activator (EOA) in 25 Class II Division 1 patients in the growth period. We wanted to determine statistically the cephalometric changes produced in the patients, comparing the lateral skull teleradiographs taken for the diagnosis with those taken at the end of treatment. At the end of this study we confirm that by using the EOA we obtained the desired effects, especially reducing the molar relation by 2.53 mm and the overjet by 2.56 mm. The EOA corrected the inclination and protrusion of the incisors, although the use of fixed appliances to round off treatment cannot be avoided. The reduction of 2.48 mm in facial convexity stands out as the most important skeletal effect; the facial depth angle increases by 0.8 degrees, and the maxillary depth decreases by 1.16 degrees. The length of the mandibular corpus also increases by 6.7 mm, although this change is mainly due to the growth of the patient. The changes in the aesthetic profile do not stand out.

Relevance:

80.00%

Publisher:

Abstract:

Recently, a claim has been made that executive decision makers, far from being fact collectors, are actually fact users and integrators. As such, the claim continues, executive decision makers need help in understanding how to interpret facts, as well as guidance in making decisions in the absence of clear facts. This report justifies this claim against the backdrop of the modern history of decision making, from 1956 to the present. The historical excursion serves to amplify and clarify the claim, as well as to develop a theoretical framework for making decisions in the absence of clear facts. Two essential issues are identified that impact upon decision making in the absence of clear facts, as well as criteria for decision-making effectiveness under such circumstances. Methodological requirements and practical objectives round off the theoretical framework. The historical analysis enables the identification of one particular established methodology that claims to meet the methodological requirements of the theoretical framework. Since the methodology has yet to be tried on information-poor situations, a further project is proposed that will test its validity.

Relevance:

80.00%

Publisher:

Abstract:

This work addresses the legal framework of the compensation due to holders of assets and rights other than land, and then reviews the case law that jurisprudence has developed on compensation for damages caused to the owners of economic activities when these must be relocated because they are incompatible with the execution of planning. Under the heading of «traslado de industria» (relocation of industry), the case law has recognized various grounds for compensation which, without constituting a closed catalogue, allow the identification of the elements common to most of the situations judged.

Relevance:

80.00%

Publisher:

Abstract:

Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. The problem of finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization-noise modelling and word-length optimization methodologies. In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them.

Techniques based on interval extensions have made it possible to obtain accurate models of signal and quantization-noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) methodology in order to model systems that contain control-flow structures. Our methodology generates the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the statistical moments of the system from these partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% from the simulation-based reference values.

A known drawback of techniques based on interval extensions is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise-injection technique that groups the signals of the system, introduces the noise sources for each group independently, and then combines the partial results. In this way, the number of noise sources in the system at any given time is kept under control and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible.

This thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We present two novel techniques that approach the reduction of execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization stage. Second, the incremental method revolves around the fact that, although we must strictly guarantee a given confidence interval for the final results of the search, we can use more relaxed confidence levels, which imply considerably fewer samples per simulation, in the initial stages of the search, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be accelerated by factors of up to ×240 for small/medium-sized problems.

Finally, this book introduces HOPLITE, an automated, flexible and modular quantization framework that includes the implementation of the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new methodologies for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions can be connected to the existing interfaces of the tool in order to expand and improve the capabilities of HOPLITE.
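
To make the word-length search concrete, here is a minimal sketch of a classical greedy optimization driven by a Monte-Carlo error estimate over a toy dataflow (y = a*b + c). Everything here — the error metric, the accuracy budget, the `simulate_error` hook — is an illustrative stand-in, not HOPLITE's API nor the interpolative/incremental methods of the thesis.

```python
import random

def quantize(x, frac_bits):
    s = 1 << frac_bits
    return round(x * s) / s                     # round-to-nearest fixed point

def simulate_error(wl, n=2000, seed=0):
    """Mean absolute error of y = a*b + c under word-lengths wl (toy model)."""
    rng, err = random.Random(seed), 0.0
    for _ in range(n):
        a, b, c = (rng.uniform(-1, 1) for _ in range(3))
        approx = quantize(quantize(a, wl[0]) * quantize(b, wl[1]), wl[2]) \
                 + quantize(c, wl[3])
        err += abs(approx - (a * b + c))
    return err / n

def greedy_search(n_sig=4, start=16, budget=1e-3):
    wl, improved = [start] * n_sig, True
    while improved:
        improved = False
        for i in range(n_sig):                  # try shaving one bit per signal
            if wl[i] == 1:
                continue
            wl[i] -= 1
            if simulate_error(wl) <= budget:
                improved = True                 # cheaper word-length accepted
            else:
                wl[i] += 1                      # revert: accuracy budget violated
    return wl

print(greedy_search())                          # fractional bits per signal
```

Each call to `simulate_error` is a full Monte-Carlo run, and the search makes many such calls; that per-call cost is precisely what the interpolative and incremental methods described above attack.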

Relevance:

80.00%

Publisher:

Abstract:

Airborne Light Detection and Ranging (LIDAR) technology has become the primary method to derive high-resolution Digital Terrain Models (DTMs), which are essential for studying Earth's surface processes, such as flooding and landslides. The critical step in generating a DTM is to separate ground and non-ground measurements in a voluminous LIDAR point dataset using a filter, because the DTM is created by interpolating ground points. As one of the most widely used filtering methods, the progressive morphological (PM) filter has the advantages of classifying the LIDAR data at the point level, a linear computational complexity, and preservation of the geometric shapes of terrain features. The filter works well in an urban setting with gentle slopes and a mixture of vegetation and buildings. However, because it uses a constant slope threshold, the PM filter often incorrectly removes ground measurements in topographically high areas together with large non-ground objects, resulting in "cut-off" errors. A novel cluster analysis method was developed in this study and incorporated into the PM filter to prevent the removal of ground measurements at topographic highs. Furthermore, to obtain optimal filtering results for an area with undulating terrain, a trend analysis method was developed to adaptively estimate the slope-related thresholds of the PM filter based on changes in topographic slope and the characteristics of non-terrain objects. The comparison of the PM and generalized adaptive PM (GAPM) filters for selected study areas indicates that the GAPM filter preserves most of the "cut-off" points removed incorrectly by the PM filter. The application of the GAPM filter to seven ISPRS benchmark datasets shows that the GAPM filter reduces the filtering error by 20% on average compared with the method used by the popular commercial software TerraScan. The combination of the cluster method, adaptive trend analysis, and the PM filter allows users without much experience in processing LIDAR data to effectively and efficiently identify ground measurements for complex terrain in a large LIDAR dataset. The GAPM filter is highly automatic and requires little human input; therefore, it can significantly reduce the effort of manually processing voluminous LIDAR measurements.
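
For orientation, here is a minimal sketch of the core loop of a progressive morphological filter on a rasterized elevation grid. The window sizes, slope threshold and cell size are illustrative, and the cluster-analysis and adaptive trend-analysis extensions developed in this study are not shown.

```python
import numpy as np
from scipy.ndimage import grey_opening

def pm_filter(z, cell=1.0, windows=(3, 5, 9, 17), slope=0.3, dh0=0.2, dh_max=2.5):
    """Boolean ground mask for an elevation raster z (illustrative parameters)."""
    ground = np.ones(z.shape, dtype=bool)
    surf, w_prev = z.copy(), 1
    for w in windows:
        opened = grey_opening(surf, size=(w, w))             # remove objects < w
        dh = min(slope * (w - w_prev) * cell + dh0, dh_max)  # elevation threshold
        ground &= (surf - opened) <= dh      # cells far above the opened surface
        surf, w_prev = opened, w             # are flagged as non-ground
    return ground

z = np.random.default_rng(1).uniform(0, 0.3, (64, 64))  # gently rough ground
z[20:30, 20:30] += 5.0                                   # a building-like block
mask = pm_filter(z)
print(mask[25, 25], mask[5, 5])              # False (building), True (ground)
```

The constant `slope` parameter in this sketch is exactly what produces "cut-off" errors on steep natural terrain, and it is the quantity that the GAPM filter's adaptive trend analysis replaces with locally estimated thresholds.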

Relevance:

80.00%

Publisher:

Abstract:

The focus of this work is to develop and employ numerical methods that provide characterization of granular microstructures, dynamic fragmentation of brittle materials, and dynamic fracture of three-dimensional bodies.

We first propose the fabric tensor formalism to describe the structure and evolution of lithium-ion electrode microstructure during the calendering process. Fabric tensors are directional measures of particulate assemblies based on inter-particle connectivity, relating to the structural and transport properties of the electrode. Applying this technique to X-ray computed tomography of cathode microstructure, we show that fabric tensors capture the evolution of the inter-particle contact distribution and are therefore good measures of the internal state of, and electronic transport within, the electrode.
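
As a concrete illustration of the formalism, the second-order fabric tensor is commonly defined as the average outer product of unit contact normals; the sketch below uses synthetic contact normals rather than tomography data.

```python
import numpy as np

def fabric_tensor(normals):
    """N = (1/Nc) * sum_i outer(n_i, n_i) over Nc inter-particle contacts."""
    n = np.asarray(normals, dtype=float)
    n /= np.linalg.norm(n, axis=1, keepdims=True)  # unit contact normals
    return np.einsum('ki,kj->ij', n, n) / len(n)

contacts = np.random.default_rng(0).normal(size=(500, 3))  # synthetic normals
F = fabric_tensor(contacts)
print(np.trace(F))               # = 1 by construction
anisotropy = F - np.eye(3) / 3   # deviatoric part: directional bias of contacts
```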

We then shift focus to the development and analysis of fracture models within finite element simulations. A difficult problem to characterize in the realm of fracture modeling is that of fragmentation, wherein brittle materials subjected to a uniform tensile loading break apart into a large number of smaller pieces. We explore the effect of numerical precision on the results of dynamic fragmentation simulations using the cohesive element approach on a one-dimensional domain. By introducing random and non-random field variations, we discern that round-off error plays a significant role in establishing a mesh-convergent solution for uniform fragmentation problems. Further, by using differing magnitudes of randomized material properties and mesh discretizations, we find that employing randomness can improve convergence behavior and provide computational savings.

The Thick Level-Set model is implemented to describe brittle media undergoing dynamic fragmentation as an alternative to the cohesive element approach. This non-local damage model features a level-set function that defines the extent and severity of degradation and uses a length scale to limit the damage gradient. In terms of energy dissipated by fracture and mean fragment size, we find that the proposed model reproduces the rate-dependent observations of analytical approaches, cohesive element simulations, and experimental studies.

Lastly, the Thick Level-Set model is implemented in three dimensions to describe the dynamic failure of brittle media, such as the active material particles in the battery cathode during manufacturing. The proposed model matches expected behavior from physical experiments, analytical approaches, and numerical models, and mesh convergence is established. We find that the use of an asymmetrical damage model to represent tensile damage is important to producing the expected results for brittle fracture problems.

The impact of this work is that designers of lithium-ion battery components can employ the numerical methods presented herein to analyze the evolving electrode microstructure during manufacturing, operational, and extraordinary loadings. This allows for enhanced designs and manufacturing methods that advance the state of battery technology. Further, these numerical tools have applicability in a broad range of fields, from geotechnical analysis to ice-sheet modeling to armor design to hydraulic fracturing.

Relevance:

80.00%

Publisher:

Abstract:

This work investigated the innovative main-girder solutions presented in a crane patent for large shipyard gantry cranes. The focus was in particular on the loads caused by wind pressure and on reducing them. In practice, the aim was to determine how the wind drag coefficient used in the structural calculations of the crane could be reduced and whether the structure presented in the patent is feasible. The work also compares the strengths and weaknesses of single and double main girders, because Konecranes, unlike many of its competitors, favours the single-main-girder solution in large shipyard gantry cranes. In the innovative shipyard gantry crane patent, the crane is conceived as a double-main-girder crane in which the lower trolley is placed between the main girders. Separate shelves are designed on the main girder for the rails of both the upper and the lower trolley, on which the trolleys travel. The patent proposed that roundings made to the main girder could reduce the wind drag coefficient significantly. Strength calculations were used to determine the size and estimated weight of the main girder if the solutions presented in the patent were used; the result was a clearly heavier main girder compared with the solution currently used by Konecranes. In addition, flow analyses were performed both for the profile presented in the patent and for the main-girder profile currently used by Konecranes. The results of the flow analysis could not clearly establish which of the girder profiles would be the better solution: the roundings did not appear to bring any clear benefit in reducing the drag coefficient. As further research, wind-tunnel tests should be performed on scale models of the profiles in order to obtain accurate and comparable shape coefficients for the girder profiles under study.

Relevance:

40.00%

Publisher:

Abstract:

Quantitative and qualitative analyses of planktonic foraminiferal assemblages from 134 core-top sediment samples collected along the western Iberian margin were used to assess the latitudinal and longitudinal changes in surface water conditions and to calibrate a Sea Surface Temperature (SST) transfer function for this seasonal coastal upwelling region. Q-mode factor analysis performed on the relative abundances yielded three factors that explain 96% of the total variance: factor 1 (50%) is exclusively defined by Globigerina bulloides, the most abundant and widespread species, and reflects the modern seasonal (May to September) coastal upwelling areas; factor 2 (32%) is dominated by Neogloboquadrina pachyderma (dextral) and Globorotalia inflata and seems to be associated with the Portugal Current, the descending branch of the North Atlantic Drift; factor 3 (14%) is defined by the tropical-subtropical species Globigerinoides ruber (white), Globigerinoides trilobus trilobus, and G. inflata and mirrors the influence of the winter-time eastern branch of the Azores Current. In conjunction with satellite-derived SSTs for the summer and winter seasons integrated over an 18-year period, the regional foraminiferal data set is used to calibrate SST transfer functions with the Imbrie & Kipp, MAT and SIMMAX(ndw) techniques. Similar prediction errors (RMSEP), correlation coefficients, and deviations of the residuals from the estimated SST were observed for all techniques in both seasons. All techniques appear to underestimate SST off the southern Iberian margin, an area mainly occupied by warm waters where upwelling occurs only occasionally, and to overestimate SST in the northern part of the west coast of the Iberian margin, where cold waters are present nearly all year round. The comparison of these regional calibrations with earlier Atlantic and North Atlantic calibrations for two cores, one of which is influenced by upwelling, reveals that the regional calibration yields more robust paleo-SSTs than the other approaches.
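
As a concrete illustration of one of the techniques named above, here is a minimal sketch of the Modern Analog Technique (MAT): the SST estimate for a sample is the mean SST of its k closest core-top analogs under the squared chord distance. The data below are synthetic, not the Iberian-margin calibration set.

```python
import numpy as np

def squared_chord(A, b):
    return np.sum((np.sqrt(A) - np.sqrt(b)) ** 2, axis=-1)

def mat_sst(sample, coretops, sst, k=5):
    """Average SST of the k core-top assemblages most similar to `sample`."""
    d = squared_chord(coretops, sample)   # dissimilarity to every core top
    return sst[np.argsort(d)[:k]].mean()  # mean over the k best analogs

rng = np.random.default_rng(2)
coretops = rng.dirichlet(np.ones(10), 134)  # relative abundances, rows sum to 1
sst = rng.uniform(12, 22, 134)              # paired (e.g. satellite-derived) SSTs
print(mat_sst(coretops[0], coretops, sst))
```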

Relevance:

30.00%

Publisher:

Abstract:

There is currently an increasing demand for robots able to acquire the sequential organization of tasks from social learning interactions with ordinary people. Interactive learning by demonstration and communication is a promising topic in current robotics research. However, the efficient acquisition of generalized task representations that allow the robot to adapt to different users and contexts is a major challenge. In this paper, we present a dynamic neural field (DNF) model that is inspired by the hypothesis that the nervous system uses the off-line re-activation of initial memory traces to incrementally incorporate new information into structured knowledge. To achieve this, the model combines fast activation-based learning, to robustly represent sequential information from single task demonstrations, with slower weight-based learning during internal simulations, to establish longer-term associations between neural populations representing individual subtasks. The efficiency of the learning process is tested in an assembly paradigm in which the humanoid robot ARoS learns to construct a toy vehicle from its parts. User demonstrations with different serial orders, together with the correction of initial prediction errors, allow the robot to acquire generalized task knowledge about possible serial orders and the longer-term dependencies between subgoals in very few social learning interactions. This success is shown in a joint-action scenario in which ARoS uses the newly acquired assembly plan to construct the toy together with a human partner.
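
For context, DNF models of this kind build on the classical Amari field equation; the standard form is shown below (the paper's specific two-timescale activation- and weight-based learning rules are not reproduced here).

```latex
\tau \,\frac{\partial u(x,t)}{\partial t}
  = -u(x,t) + h + S(x,t) + \int w(x - x')\, f\bigl(u(x',t)\bigr)\, dx'
```

Here u(x,t) is the activation of the field at site x, h < 0 is a resting level, S(x,t) is external input, w is a lateral-interaction kernel (local excitation, surround inhibition), and f is a sigmoidal firing-rate function; the self-sustained activation peaks supported by this dynamics are what allow such models to hold sequence information in working memory.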