980 results for "Missing values structures"


Relevance:

30.00%

Publisher:

Abstract:

Crowd-induced dynamic loading in large structures, such as gymnasiums or stadiums, is usually modelled as a series of harmonic loads defined in terms of their Fourier coefficients. Values of these Fourier coefficients obtained from full-scale measurements can be found in design codes. Recently, an alternative has been proposed based on the random generation of load time histories that take into account phase lags among the individuals in the crowd. Testing is generally performed on platforms or structures that can be considered rigid because their natural frequencies are higher than the excitation frequencies associated with crowd loading. In this paper we present tests on a structure designed as a gymnasium, whose natural frequencies lie within the excitation-frequency range of crowd loading. In these tests the gym slab was instrumented with acceleration sensors and different people jumped on a force plate installed on the floor. The test results have been compared with predictions based on the two above-mentioned load-modelling alternatives, and a new methodology for modelling jumping loads is proposed in order to reduce the difference between experimental and numerical results in the high-frequency range.
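The harmonic load model lends itself to a short illustration. The Python sketch below builds a jumping-load time history as a static weight plus a sum of harmonics scaled by Fourier coefficients, then superposes several individuals with random phase lags; the coefficient values, jumping frequency, and body weight are illustrative assumptions, not the values measured in the paper.

```python
import numpy as np

def jumping_load(t, weight, f_jump, fourier_coeffs, phase_lags):
    """One person's jumping load: static weight plus harmonics of the
    jumping frequency, each scaled by a Fourier coefficient alpha_n
    and shifted by a phase lag phi_n."""
    load = np.full_like(t, weight)
    for n, (alpha_n, phi_n) in enumerate(zip(fourier_coeffs, phase_lags), start=1):
        load += weight * alpha_n * np.sin(2 * np.pi * n * f_jump * t - phi_n)
    return load

# Crowd load: superpose individuals with random phase lags to model
# imperfect synchronisation among jumpers (illustrative values).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 2000)   # time axis, s
alphas = [1.7, 1.1, 0.5]           # illustrative Fourier coefficients
crowd_load = sum(
    jumping_load(t, weight=700.0, f_jump=2.0, fourier_coeffs=alphas,
                 phase_lags=rng.uniform(0.0, 0.4, size=len(alphas)))
    for _ in range(20)             # a crowd of 20 jumpers
)
```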

Relevance:

30.00%

Publisher:

Abstract:

This article presents a new automatic evaluation method for online graphics, its application, and the advantages achieved by applying it. The software application, developed by the Innovation in Education Group “E4” at the Technical University of Madrid, is aimed at the online self-assessment of the graphic drawings that students produce as continuous training. The adaptation to the European Higher Education Area is an important opportunity to research the possibilities of online assessment, and to this end a new software tool has been developed for continuous self-testing by undergraduates. The tool evaluates the students' graphical answers: the drawings made online are automatically corrected according to their geometry (straight lines, sloping lines or second-order curves) and their dimensions (the specific values that define the graphics).
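As a rough illustration of geometry-based correction, the sketch below classifies a drawn stroke as a straight line or a second-order curve by comparing least-squares fit residuals, and checks a dimension against a tolerance. The classification rule and tolerances are assumptions made for the example; the abstract does not detail the tool's actual algorithm.

```python
import numpy as np

def classify_stroke(points, tol=1e-2):
    """Label a stroke (an N x 2 array of sampled coordinates) as a
    straight line or a second-order curve by comparing the residuals
    of degree-1 and degree-2 least-squares fits (illustrative rule)."""
    x, y = points[:, 0], points[:, 1]
    res1 = np.polyfit(x, y, 1, full=True)[1]
    res2 = np.polyfit(x, y, 2, full=True)[1]
    res1 = float(res1[0]) if res1.size else 0.0
    res2 = float(res2[0]) if res2.size else 0.0
    return "line" if res1 - res2 < tol else "second-order curve"

def check_dimension(measured, expected, rel_tol=0.05):
    """Accept a drawn dimension when it is within a relative tolerance
    of the value that defines the exercise."""
    return abs(measured - expected) <= rel_tol * abs(expected)
```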

Relevance:

30.00%

Publisher:

Abstract:

This paper presents an analysis and discussion of the b- and ib-values calculated from the acoustic emission (AE) signals recorded during dynamic shake-table tests conducted on a reinforced concrete (RC) frame subjected to several uniaxial seismic simulations of increasing intensity until collapse. The intensity of shaking was controlled by the peak acceleration applied to the shake-table in each seismic simulation, which ranged from 0.08 to 0.47 times the acceleration of gravity. The numerous spurious signals unrelated to concrete damage that inevitably contaminate AE measurements in complex dynamic shake-table tests were filtered out with an RMS filter and the use of guard sensors. Comparing the b- and ib-values calculated throughout the tests with the actual level of macro-cracking and damage observed during testing, it is concluded that the limit value of 0.05 proposed in previous research to determine the onset of macro-cracks should be revised in the case of earthquake-type dynamic loading. Finally, the b- and ib-values were compared with the damage endured by the RC frame, evaluated both visually and quantitatively in terms of the inter-story drift index.
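For reference, the b-value of an AE amplitude distribution is commonly estimated with the Aki maximum-likelihood formula, treating amplitude(dB)/20 as an equivalent magnitude. The sketch below follows that common convention; it is not necessarily the exact procedure used in the paper, and the ib-value (an improved formulation computed over sliding windows) is omitted.

```python
import numpy as np

def ae_b_value(amplitudes_db, completeness_db):
    """Aki maximum-likelihood b-value for AE hits above a completeness
    threshold, with magnitude taken as amplitude(dB)/20 (a common AE
    convention; a sketch, not the paper's exact procedure)."""
    mags = np.asarray(amplitudes_db, dtype=float) / 20.0
    mc = completeness_db / 20.0
    mags = mags[mags >= mc]
    return np.log10(np.e) / (mags.mean() - mc)

# Tracking this estimator over successive windows of AE hits gives the
# evolution of b through each seismic simulation, to be compared with
# the observed macro-cracking and drift-based damage measures.
```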

Relevance:

30.00%

Publisher:

Abstract:

Carbonation of the concrete, or the ingress of chlorides in quantities sufficient to reach the level of the bars, triggers reinforcement corrosion. One of the most significant effects of reinforcing steel corrosion on reinforced concrete structures is the decline in the ductility-related properties of the steel. Reinforcement ductility has a decisive effect on the overall ductility of reinforced concrete structures. Design codes classify steel types according to their ductility, which is defined by minimum values of several parameters. Ductility indicators that combine different properties can often be advantageous, and it is considered necessary to define ductility by means of a single parameter that accounts for strength and deformation simultaneously. A number of criteria exist for defining steel ductility by a single parameter. The present experimental study addresses the variation in the ductility of concrete-embedded steel bars exposed to accelerated corrosion, and analyzes the suitability of a new ductility indicator for corroded bars.
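The paper's specific indicator is not reproduced here, but a single-parameter ductility measure that couples strength and deformation can be illustrated by the energy absorbed beyond yield, as in the sketch below. This is a generic example, not the indicator proposed in the paper.

```python
import numpy as np

def ductility_index(strain, stress, yield_stress):
    """Illustrative single-parameter ductility indicator: the area under
    the stress-strain curve beyond the yield point, which combines
    strength and deformation in one number."""
    strain = np.asarray(strain, dtype=float)
    stress = np.asarray(stress, dtype=float)
    mask = stress >= yield_stress          # post-yield branch
    s, f = strain[mask], stress[mask]
    # trapezoidal integration of stress over strain
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(s)))
```

Corrosion shrinks both the ultimate strain and the post-yield strength margin, so an energy-type index of this kind decreases as the degree of corrosion grows.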

Relevance:

30.00%

Publisher:

Abstract:

Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. To produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization-noise modelling and word-length optimization methodologies. In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them.

Techniques based on interval extensions have made it possible to obtain accurate models of signal and quantization-noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art methodology based on statistical Modified Affine Arithmetic (MAA) in order to model systems that contain control-flow structures. Our methodology generates the different execution paths automatically, determines the regions of the input domain that will exercise each of them, and extracts the statistical moments of the system from these partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with such control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by as little as 0.04% from the simulation-based reference values.

A known drawback of techniques based on interval extensions is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise-injection technique that groups the signals of the system, introduces the noise sources for each group independently, and finally combines the results. In this way the number of noise sources is controlled at all times and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results caused by the loss of correlation between noise terms, in order to keep the results as accurate as possible.

This thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We present two novel techniques that reduce execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization. Second, the incremental method builds on the fact that, although a given confidence interval must be guaranteed for the final results of the search, more relaxed confidence levels, and hence considerably fewer samples per simulation, can be used in the initial stages of the search, when we are still far from the optimized solutions. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be reduced by factors of up to ×240 for small/medium-sized problems.

Finally, this work introduces HOPLITE, an automated, flexible and modular quantization framework that implements the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new quantization methodologies. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions can be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
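As a concrete reference point for the kind of search the thesis accelerates, the sketch below implements a classical greedy word-length descent driven by a Monte-Carlo noise estimate on a toy datapath. It is a minimal baseline under assumed interfaces: the `system(x, wordlengths)` callable, the uniform input distribution, and the toy coefficients are inventions for the example, not HOPLITE's actual API.

```python
import numpy as np

def quantize(x, frac_bits):
    """Round to a fixed-point grid with the given fractional bits."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

def mc_noise_power(system, wordlengths, n_samples=1000):
    """Monte-Carlo estimate of output quantization-noise power: compare
    the quantized system against a double-precision reference."""
    x = np.random.default_rng(0).uniform(-1.0, 1.0, size=n_samples)
    return float(np.mean((system(x, wordlengths) - system(x, None)) ** 2))

def greedy_wordlength_search(system, n_signals, noise_budget,
                             start_bits=16, min_bits=2):
    """Classical greedy descent: keep trimming one bit from any signal
    whose reduced configuration still meets the noise budget."""
    wl = [start_bits] * n_signals
    improved = True
    while improved:
        improved = False
        for i in range(n_signals):
            if wl[i] <= min_bits:
                continue
            trial = wl.copy()
            trial[i] -= 1
            if mc_noise_power(system, trial) <= noise_budget:
                wl, improved = trial, True
    return wl

def toy_system(x, wl):
    """Toy datapath y = q(0.75*x) + q(-0.33*x); wl gives the fractional
    bits of each product (None means full precision)."""
    p1, p2 = 0.75 * x, -0.33 * x
    if wl is not None:
        p1, p2 = quantize(p1, wl[0]), quantize(p2, wl[1])
    return p1 + p2

print(greedy_wordlength_search(toy_system, n_signals=2, noise_budget=1e-6))
```

The interpolative and incremental methods described above attack exactly the `mc_noise_power` calls that dominate this loop, by estimating per-signal sensitivities and by relaxing the confidence level in the early stages of the search.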

Relevance:

30.00%

Publisher:

Abstract:

A hierarchy of residue density assessments and packing properties in protein structures is contrasted, including a regular density, a variety of charge densities, a hydrophobic density, a polar density, and an aromatic density. These densities are investigated under alternative distance measures and also at the interfaces of multiunit structures. Amino acids are divided into nine structural categories according to three secondary-structure states and three solvent-accessibility levels. To account for differences in amino acid abundance across protein structures, we normalize the observed density by the expected density, defining a density index. Solvent-accessibility levels exert the predominant influence on the regular residue density. Explicitly, the regular density values vary approximately linearly with solvent-accessibility level, with the linearity parameters depending on the amino acid. The charge index reveals pronounced inequalities between lysine and arginine in their interactions with acidic residues. The aromatic density calculations in all structural categories parallel the regular density calculations, indicating that the aromatic residues are distributed as a random sample of all residues. Moreover, aromatic residues are found to be over-represented in the neighborhood of all amino acids, which might be attributed to nucleation sites and protein stability being substantially associated with aromatic residues.
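The observed/expected normalization described above reduces to a simple ratio of frequencies. A minimal sketch, assuming neighbor counts have already been tallied per amino acid:

```python
def density_index(neighbor_counts, background_counts):
    """Density index: observed neighbor frequency of each amino acid
    divided by the frequency expected from its overall abundance."""
    total_obs = sum(neighbor_counts.values())
    total_bg = sum(background_counts.values())
    return {aa: (obs / total_obs) / (background_counts[aa] / total_bg)
            for aa, obs in neighbor_counts.items()
            if background_counts.get(aa)}

# An index above 1 marks over-representation, as found here for
# aromatic residues in the neighborhood of all amino acids.
```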

Relevance:

30.00%

Publisher:

Abstract:

We describe an approach to the high-resolution three-dimensional structure determination of macromolecules that uses ultrashort, intense x-ray pulses to record diffraction data, in combination with direct phase retrieval by the oversampling technique. It is shown that a simulated molecular diffraction pattern at 2.5-Å resolution, accumulated from multiple copies of single rubisco biomolecules, each generated by a femtosecond x-ray free-electron laser pulse, can be successfully phased and transformed into an accurate electron density map comparable to that obtained by more conventional methods. The phase problem is solved by an iterative algorithm with a random phase set as the initial input. The convergence of the algorithm is reasonably fast, typically within a few hundred iterations. This approach and phasing method do not require any ab initio information about the molecule or an extended ordered lattice array, and they tolerate high noise and some missing intensity data at the center of the diffraction pattern. With the prospect of x-ray free-electron lasers, this approach could provide a major new opportunity for the high-resolution three-dimensional structure determination of single biomolecules.
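The iterative algorithm with a random initial phase set can be illustrated with the classic error-reduction loop, alternating between the measured Fourier magnitudes and real-space constraints. This is a minimal sketch of the general oversampling approach, not the authors' exact algorithm (published variants add hybrid input-output feedback among other refinements):

```python
import numpy as np

def error_reduction_phasing(measured_magnitude, support, n_iter=500, seed=0):
    """Iterative phasing: start from random phases, impose the measured
    Fourier magnitudes, then impose the real-space support and
    positivity. A few hundred iterations are typical for convergence."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, size=measured_magnitude.shape)
    g = np.fft.ifftn(measured_magnitude * np.exp(1j * phase)).real
    for _ in range(n_iter):
        G = np.fft.fftn(g)
        G = measured_magnitude * np.exp(1j * np.angle(G))  # keep phases only
        g = np.fft.ifftn(G).real
        g[~support] = 0.0        # support constraint (oversampled region)
        g[g < 0.0] = 0.0         # electron density is non-negative
    return g
```

Oversampling is what makes the support constraint effective: because the pattern is sampled finer than the Nyquist rate of the molecule, a known region of zero density surrounds the object and pins down the phases.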

Relevance:

30.00%

Publisher:

Abstract:

The question of whether proteins originate from random sequences of amino acids is addressed. A statistical analysis is performed in terms of blocked and random-walk values formed by binary hydrophobic assignments of the amino acids along the protein chains. Theoretical expectations for these variables under random distributions of hydrophobicities are compared with those obtained from functional proteins. The results, based on proteins in the SWISS-PROT database, convincingly show that the amino acid sequences in proteins differ from random sequences in a statistically significant way. Fourier transforms of the random walks provide additional evidence for the nonrandomness of the distributions. We have also analyzed results from a synthetic model containing only two amino acid types, hydrophobic and hydrophilic. Using reasonable criteria for good folding properties in terms of thermodynamic and kinetic behavior, sequences that fold well are isolated. The same statistical analysis applied to these well-folding sequences indicates deviations from randomness similar to those of the functional proteins. The deviations can be interpreted as originating from anticorrelations in terms of an Ising spin model for the hydrophobicities. Our results, which differ from some previous investigations using other methods, might have an impact on how permissive the protein folding process is with respect to sequence specificity: only sequences with nonrandom hydrophobicity distributions fold well. Other distributions give rise to energy landscapes with poor folding properties and hence did not survive evolution.
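The blocked random-walk statistic is easy to reproduce in outline. The sketch below uses one common binary hydrophobic partition of the amino acids; the paper's exact assignment and block sizes may differ, so treat this as an illustration of the method rather than its implementation.

```python
import numpy as np

HYDROPHOBIC = set("AVLIMFWCY")  # one common binary partition; the
                                # paper's exact assignment may differ

def walk_steps(sequence):
    """Binary steps along the chain: +1 for a hydrophobic residue,
    -1 otherwise; cumulative sums give the random-walk values."""
    return np.array([1 if aa in HYDROPHOBIC else -1 for aa in sequence])

def blocked_statistic(sequence, block=16):
    """Mean-square block sum of the hydrophobicity walk. For an
    uncorrelated +/-1 sequence the expectation equals the block length;
    anticorrelated hydrophobicities push the statistic below it."""
    steps = walk_steps(sequence)
    n = len(steps) // block
    blocks = steps[:n * block].reshape(n, block).sum(axis=1)
    return float(np.mean(blocks ** 2))

# blocked_statistic(seq) / block < 1 over many proteins is the kind of
# signature of anticorrelation the analysis looks for.
```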

Relevance:

30.00%

Publisher:

Abstract:

We have studied the radial structure of the stellar mass surface density (μ∗) and stellar population age as a function of total stellar mass and morphology for a sample of 107 galaxies from the CALIFA survey. We applied the fossil record method based on spectral synthesis techniques to recover the star formation history (SFH), resolved in space and time, in spheroid- and disk-dominated galaxies with masses from 10^9 to 10^12 M_⊙. We derived the half-mass radius, and we found that galaxies are on average 15% more compact in mass than in light. The ratio of half-mass radius to half-light radius (HLR) shows a dual dependence on galaxy stellar mass: it decreases with increasing mass for disk galaxies, but is almost constant in spheroidal galaxies. In terms of integrated versus spatially resolved properties, we find that the galaxy-averaged stellar population age, stellar extinction, and μ∗ are well represented by their values at 1 HLR. Negative radial gradients of the stellar population ages are present in most of the galaxies, supporting an inside-out formation. The largest inner (≤1 HLR) age gradients occur in the most massive (10^11 M_⊙) disk galaxies that have the most prominent bulges; shallower age gradients are obtained in spheroids of similar mass. Disk and spheroidal galaxies show negative μ∗ gradients that steepen with stellar mass. In spheroidal galaxies, μ∗ saturates at a critical value (~7 × 10^2 M_⊙/pc^2 at 1 HLR) that is independent of the galaxy mass; thus, all massive spheroidal galaxies have similar local μ∗ at the same distance (in HLR units) from the nucleus. The SFHs of the regions beyond 1 HLR are well correlated with their local μ∗ and follow the same relation as the galaxy-averaged age and μ∗, suggesting that the local stellar mass surface density preserves the SFH of disks. The SFHs of bulges are, however, more fundamentally related to the total stellar mass, since the radial structure of the stellar age changes with galaxy mass even though all spheroid-dominated galaxies have a similar radial structure in μ∗. Thus, galaxy mass is the more fundamental property in spheroidal systems, while the local stellar mass surface density is more important in disks.
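The half-mass radius used above is simply the radius enclosing half of the total stellar mass. A minimal sketch, assuming a discrete radial profile of mass per shell:

```python
import numpy as np

def half_mass_radius(radii, mass_in_shell):
    """Radius enclosing 50% of the total stellar mass, interpolated on
    the cumulative profile (radii in any consistent unit, e.g. kpc)."""
    r = np.asarray(radii, dtype=float)
    cum = np.cumsum(np.asarray(mass_in_shell, dtype=float))
    return float(np.interp(0.5 * cum[-1], cum, r))

# The same routine applied to a luminosity profile gives the half-light
# radius (HLR); the ~15% compactness result compares the two.
```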

Relevance:

30.00%

Publisher:

Abstract:

Mythical and religious belief systems in a social context can be regarded as a conglomeration of sacrosanct rites, which revolve around substantive values that involve an element of faith. Moreover, ideologies, myths and beliefs can all be analyzed as systems within a cultural context. The significance of being able to define ideologies, myths and beliefs as systems is that they can figure in cultural explanations, which in turn means that such systems can figure in logico-mathematical analyses.

Relevance:

30.00%

Publisher:

Abstract:

This Factor Markets Working Paper describes and highlights the key issues of farm capital structure, the dynamics of investment and accumulation of farm capital, and the financial leverage and borrowing rates of farms in selected European countries. Data collected from the Farm Accountancy Data Network (FADN) suggest that the European farming sector exhibits quite different farm business strategies, capabilities to generate capital revenues, and segmented agricultural loan market regimes. Such diverse business strategies have substantial, and perhaps greater than expected, implications for the financial leverage and performance of farms. Different countries adopt different approaches to valuing agricultural assets, or the agricultural asset markets simply differ substantially from country to country; this affects most of the financial indicators. In countries where rapidly increasing asset prices at the margin were carried over in the accounting systems to the whole stock of assets, firm values increased significantly even though the firms had been disinvesting. If there is an asset price bubble and it bursts, there may be serious knock-on effects for some countries.
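The financial indicators discussed rest on simple balance-sheet ratios. A minimal sketch with illustrative field names (actual FADN variable codes differ by dataset):

```python
def financial_leverage(total_liabilities, total_assets):
    """Leverage: the share of farm assets financed by debt."""
    return total_liabilities / total_assets

def average_borrowing_rate(interest_paid, total_liabilities):
    """Implied average interest rate on outstanding farm debt."""
    return interest_paid / total_liabilities

# Rising asset valuations inflate total_assets and thus depress measured
# leverage even for disinvesting farms: the valuation effect noted above.
```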

Relevance:

30.00%

Publisher:

Abstract:

In the nodule field of the Peru Basin, situated south of the zone of high bioproductivity, a relatively high flux of biogenic matter explains a distinct redox boundary at about 10 cm depth separating very soft oxic surface sediments from stiffer suboxic sediments. The maximum abundance of diagenetic nodules (50 kg/m²) is found near the calcite compensation depth (CCD), currently at 4250 m. There, the accretion rate of nodules is much higher (100 mm/Ma) than on ridges (5 mm/Ma). The highest accretion rates are found at the bottom of large nodules that repeatedly sink to a level immediately above the redox boundary, where distinct diagenetic growth conditions prevail and layers of dense laminated Mn oxide of very pure todorokite are formed. The layering of nodules is mainly the result of organisms moving nodules within the oxic surface sediment between diagenetic and hydrogenetic environments; the frequency of such movements is much higher than that of climatic changes. Two types of nodule burial occur in the Peru Basin. Large nodules are less easily moved by organisms and become buried; consequently, buried nodules are generally larger than surface nodules. This type of burial predominates in basins. On ridges, where smaller nodules prevail, burial is mainly controlled by statistical selection, whereby some nodules happen not to be moved up by organisms.

Relevance:

30.00%

Publisher:

Abstract:

Example problems and methods of data analysis, together with general observations, are given. Smooth-slope runup results for both breaking and nonbreaking waves are presented in a set of curves similar to, but revised from, those in the Shore Protection Manual (SPM) (U.S. Army, Corps of Engineers, Coastal Engineering Research Center, 1977). The curves are for structure slopes fronted by horizontal and 1-on-10 bottom slopes. The range of values of d_s/H′_o was extended to d_s/H′_o = 8; relative depth (d_s/H′_o) is important even for d_s/H′_o > 3 for waves which do not break on the structure slope. Rough-slope results are presented in similar curves where sufficient data were available; otherwise, results are given as values of r, the ratio of rough-slope runup to smooth-slope runup. Scale effects in runup are discussed.
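The rough-slope correction is a direct application of the ratio r. A minimal sketch, with the smooth-slope design curve represented by hypothetical digitized points:

```python
import numpy as np

def smooth_runup(ds_over_Ho, curve_ds, curve_R):
    """Relative runup R/H'_o read off a digitized smooth-slope design
    curve at the given relative depth d_s/H'_o (hypothetical data)."""
    return float(np.interp(ds_over_Ho, curve_ds, curve_R))

def rough_runup(smooth_value, r):
    """Rough-slope runup from the smooth-slope value via the reported
    ratio r = rough-slope runup / smooth-slope runup."""
    return r * smooth_value
```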

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Master's)--University of Washington, 2016-06

Relevance:

30.00%

Publisher:

Abstract:

Ultem irradiated up to 10.0 MGy has been analysed using 13C and 1H NMR together with 2D proton-carbon and proton-proton correlation NMR spectroscopy to shed light on the formation of new structures. Chemical shifts and correlation data were used to determine the structures, or partial structures, of several new components. The spectra indicated the presence of new groups and structures involving the isopropylidene group, the imide ring, and hydrogen-abstraction reactions. Possible pathways for the formation of the new structures are proposed, and the G-values for their formation have been estimated.