951 results for Univalent polynomial
Abstract:
A new method for the objective evaluation of color in table olives was applied, based on the analysis of the reflection intensity of each of the primary colors that make up white light (red, green and blue), according to the wavelengths of the RGB system. Computer programs were used to analyze 24-bit color digital images in BMP format. This work provides further information on the browning of natural olives in brine, which would be very useful for increasing the effectiveness of the process. The proposed method is fast and non-destructive, and promises to be very practical since it allows the same sample to be evaluated repeatedly over time. Color changes were investigated in naturally processed olives of different degrees of ripeness (turning-color, red and black) and at different pH values (3.6, 4.0 and 4.5), exposed to air for increasing periods of time. The degree of darkening was quantified through Reflection Intensity Indices. The evolution of the reflection index as a function of time produced a 4th-degree polynomial curve that revealed the sigmoidal behavior of the enzymatic browning phenomenon, with the maximum correlation at 8 hours of aeration. This function would make it possible to predict browning in black olives and represents an objective measurement of the relative degree of browning. The evolution of the red color (λ = 700.0 nm) exhibited the highest correlation with the browning process. Natural red olives at pH 4.5 showed optimum browning. The reflection spectrum for the blue color (λ = 435.8 nm) is suggested as a measure of the activity of the enzyme PPO (polyphenol oxidase).
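As an illustration of the method described in this abstract, the following minimal Python sketch computes mean RGB reflection intensities from a 24-bit BMP image and fits a 4th-degree polynomial to a red-channel index over aeration time. It assumes the Pillow and NumPy libraries; file names, sample values and function names are illustrative, not taken from the paper.

import numpy as np
from PIL import Image

def reflection_indices(bmp_path):
    # Mean reflection intensity (0-255) of the R, G and B channels of a 24-bit BMP
    rgb = np.asarray(Image.open(bmp_path).convert("RGB"), dtype=float)
    return rgb[..., 0].mean(), rgb[..., 1].mean(), rgb[..., 2].mean()

# Hypothetical red-channel indices of one sample at increasing aeration times (hours)
t = np.array([0, 1, 2, 4, 6, 8, 10, 12], dtype=float)
red_index = np.array([182, 176, 165, 140, 118, 104, 99, 97], dtype=float)

# 4th-degree polynomial, as in the abstract, to capture the sigmoidal browning trend
coeffs = np.polyfit(t, red_index, deg=4)
fit = np.polyval(coeffs, t)
r2 = 1 - np.sum((red_index - fit) ** 2) / np.sum((red_index - red_index.mean()) ** 2)
print(f"fit coefficients: {coeffs}, R^2 = {r2:.4f}")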
Abstract:
The Palestine Exploration Fund (PEF) Survey of Western Palestine (1871-1877) is highly praised for its accuracy and completeness; the first systematic analysis of its planimetric accuracy was published by Levin (2006). To study the potential of these 1:63,360 maps for a quantitative analysis of land cover changes over time, Levin compared them to 20th century topographic maps. The map registration error of the PEF maps was 74.4 m, using 123 control points at trigonometrical stations and a 1st order polynomial. The median RMSE of all control and test points (n = 1104) was 153.6 m. Following the georeferencing of each of the 26 sheets of the PEF maps of the Survey of Western Palestine, a mosaicked file was created. Care should be taken when analysing historical maps, as it cannot be assumed that their accuracy is consistent across different parts of a map or for different features depicted on it.
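The registration described in this abstract amounts to a least-squares fit of a 1st-order (affine) polynomial transform, followed by an RMSE computation over control points. A minimal Python sketch with invented coordinates, not the PEF control points:

import numpy as np

# Map (pixel) coordinates of control points and their ground coordinates (invented)
src = np.array([[120.0, 430.0], [980.0, 410.0], [530.0, 1210.0], [60.0, 990.0]])
dst = np.array([[35.10, 32.70], [35.55, 32.71], [35.32, 32.28], [35.07, 32.40]])

# 1st order polynomial: x' = a0 + a1*x + a2*y and y' = b0 + b1*x + b2*y
A = np.column_stack([np.ones(len(src)), src])   # design matrix [1, x, y]
coef, *_ = np.linalg.lstsq(A, dst, rcond=None)  # fit both target axes at once

pred = A @ coef
residuals = np.linalg.norm(pred - dst, axis=1)  # per-point registration error
rmse = np.sqrt(np.mean(residuals ** 2))
print(f"RMSE over control points: {rmse:.6f} (in ground units)")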
Abstract:
Detailed information about sediment properties and microstructure can be provided through the analysis of digital ultrasonic P wave seismograms recorded automatically during full waveform core logging. The physical parameter which predominantly affects elastic wave propagation in water-saturated sediments is the P wave attenuation coefficient. The related sedimentological parameter is the grain size distribution. A set of high-resolution ultrasonic transmission seismograms (ca. 50-500 kHz), which indicate downcore variations in the grain size by their signal shape and frequency content, is presented. Layers of coarse-grained foraminiferal ooze can be identified by highly attenuated P waves, whereas almost unattenuated waves are recorded in fine-grained areas of nannofossil ooze. Color-encoded pixel graphics of the seismograms and instantaneous frequencies present full waveform images of the lithology and attenuation. A modified spectral difference method is introduced to determine the attenuation coefficient and its power law a = k·f^n. Applied to synthetic seismograms derived using a "constant Q" model, even low attenuation coefficients can be quantified. A downcore analysis gives an attenuation log which ranges from ca. 700 dB/m at 400 kHz and a power of n = 1-2 in coarse-grained sands to a few decibels per meter and n ≤ 0.5 in fine-grained clays. A least squares fit of a second degree polynomial describes the mutual relationship between the mean grain size and the attenuation coefficient. When it is used to predict the mean grain size, an almost perfect coincidence with the values derived from sedimentological measurements is achieved.
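Both fits named in this abstract are ordinary least-squares problems. A minimal Python/NumPy sketch with synthetic values standing in for the core-logging measurements: the power law a = k·f^n becomes a straight line in log-log space, and a second-degree polynomial maps attenuation to mean grain size.

import numpy as np

f_khz = np.array([50., 100., 200., 300., 400., 500.])    # frequencies (kHz)
a_db_m = 0.004 * f_khz ** 1.8                            # synthetic attenuation (dB/m)
a_db_m *= np.exp(np.random.default_rng(0).normal(0, 0.05, f_khz.size))  # noise

# log(a) = log(k) + n*log(f): the slope gives n, the intercept gives k
n, log_k = np.polyfit(np.log(f_khz), np.log(a_db_m), deg=1)
print(f"power law: a = {np.exp(log_k):.4g} * f^{n:.2f}")

# Second-degree polynomial between attenuation and mean grain size (invented pairs)
atten = np.array([650., 420., 210., 90., 30., 8.])       # dB/m at 400 kHz
grain_phi = np.array([2.0, 3.5, 5.0, 6.5, 8.0, 9.5])     # mean grain size (phi units)
p2 = np.polyfit(atten, grain_phi, deg=2)
print("predicted grain sizes:", np.polyval(p2, atten))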
Abstract:
A unique macroseismic data set for the strongest earthquakes to have occurred in the Vrancea region since 1940 is constructed by a thorough review of all available sources. Inconsistencies and errors in the reported data and in their use are analyzed as well. The final data set, free from inconsistencies, including those at the political borders, contains 9822 observations for the strong intermediate-depth earthquakes: 1940, Mw=7.7; 1977, Mw=7.4; 1986, Mw=7.1; 1990, May 30, Mw=6.9 and 1990, May 31, Mw=6.4; 2004, Mw=6.0. This data set is available electronically as supplementary data for the present paper. From the discrete macroseismic data the continuous macroseismic field is generated using the methodology developed by Molchan et al. (2002) that, along with the unconventional smoothing method Modified Polynomial Filtering (MPF), uses the Diffused Boundary (DB) method, which visualizes the uncertainty in the isoseismals' boundaries. The comparison of DBs with previously published isoseismal maps provides a good criterion for evaluating the reliability of those earlier maps. The produced isoseismals can be used not only for the formal comparison between observed and theoretical isoseismals, but also for the retrieval of source properties and the assessment of local responses (Molchan et al., 2011).
Abstract:
Total sediment oxygen consumption rates (TSOC or Jtot), measured during sediment-water incubations, and sediment oxygen microdistributions were studied at 16 stations in the Arctic Ocean (Svalbard area). The oxygen consumption rates ranged between 1.85 and 11.2 mmol m⁻² d⁻¹, and oxygen penetrated from 5.0 to >59 mm into the investigated sediments. Measured TSOC exceeded the calculated diffusive oxygen fluxes (Jdiff) by a factor of 1.1-4.8. Diffusive fluxes across the sediment-water interface were calculated using the whole measured microprofiles, rather than the linear oxygen gradient in the top sediment layer. The lack of a significant correlation between the observed abundances of bioirrigating meiofauna and high Jtot/Jdiff ratios, as well as minor discrepancies in measured TSOC between replicate sediment cores, suggests that molecular diffusion, not bioirrigation, is the most important transport mechanism for oxygen across the sediment-water interface and within these sediments. The high Jtot/Jdiff ratios obtained for some stations were therefore attributed to topographic factors, i.e. underestimation of the actual sediment surface area when one-dimensional diffusive fluxes were calculated, or to sampling artifacts during core recovery from great water depths. Measured TSOC correlated with water depth raised to a power between -0.4 and -0.5 (TSOC ∝ depth^(-0.4) to depth^(-0.5)) for all investigated stations, but the stations could be divided into two groups representing different geographical areas with different sediment oxygen consumption characteristics. The differences in TSOC between the two areas were suggested to reflect hydrographic factors (such as ice coverage and import/production of reactive particulate organic material) related to the dominating water mass (Atlantic or polar) in each of the two areas. The good correlation between TSOC and depth^(-0.4 to -0.5) rules out that any of the stations investigated are topographic depressions with markedly enhanced sediment oxygen consumption.
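The reported depth dependence is a power law, so the exponent can be recovered by linear regression in log-log space. A short Python sketch with invented station values, not the Svalbard data:

import numpy as np

depth_m = np.array([200., 500., 1000., 2000., 3000.])
tsoc = np.array([10.5, 7.1, 5.3, 3.9, 3.1])   # TSOC in mmol m^-2 d^-1 (invented)

# log(TSOC) = log(c) + b*log(depth): b should fall near -0.4 to -0.5
b, log_c = np.polyfit(np.log(depth_m), np.log(tsoc), deg=1)
print(f"TSOC ≈ {np.exp(log_c):.2f} * depth^{b:.2f}")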
Abstract:
We provide explicit families of tame automorphisms of the complex affine three-space which degenerate to wild automorphisms. This shows that the tame subgroup of the group of polynomial automorphisms of C^3 is not closed when the latter is seen as an infinite-dimensional algebraic group.
Abstract:
This Doctoral Thesis deals with the application of meshless methods to eigenvalue problems, particularly free vibrations and buckling. The analysis is focused on aspects such as the numerical solving of the problem, computational cost and the feasibility of using non-consistent mass or geometric stiffness matrices. Furthermore, the analysis of the error is also considered, with the aim of identifying its main sources and obtaining the key factors that enable a faster convergence of a given problem. Although currently a wide variety of apparently independent meshless methods can be found in the literature, the relationships among them have been analyzed. The outcome of this assessment is that all those methods can be grouped into only a limited number of categories, and that the Element-Free Galerkin Method (EFGM) is representative of the most important one. Therefore, the EFGM has been selected as a reference for the numerical analyses. Many of the error sources of a meshless method are contributed by its interpolation/approximation algorithm. In the EFGM, this algorithm is known as Moving Least Squares (MLS), a particular case of the Generalized Moving Least Squares (GMLS). The accuracy of the MLS is based on the following factors: order of the polynomial basis p(x), features of the weight function w(x), and shape and size of the support domain of this weight function. The individual contribution of each of these factors, along with the interactions among them, has been studied in both regular and irregular arrangements of nodes, by reducing each contribution to a single quantifiable parameter. This assessment is applied to a range of one- and two-dimensional benchmark cases, and considers the error not only in terms of eigenvalues (natural frequencies or buckling loads), but also in terms of eigenvectors.
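The MLS approximation at the heart of the EFGM can be sketched compactly. The following one-dimensional Python example uses a linear polynomial basis, a cubic-spline-like weight function and invented nodal data; it illustrates the three accuracy factors named above (basis order, weight function, support size) and is not the code developed in the thesis.

import numpy as np

nodes = np.linspace(0.0, 1.0, 11)
u_nodes = np.sin(np.pi * nodes)     # nodal values of the field to approximate
support = 0.35                      # size of the support of the weight function

def weight(r):
    # Cubic weight, decreasing from 1 to 0, zero outside the support
    s = np.abs(r) / support
    return np.where(s <= 1.0, 1 - 3 * s**2 + 2 * s**3, 0.0)

def mls_eval(x):
    # MLS approximation at x with linear basis p(x) = [1, x]
    W = weight(x - nodes)
    P = np.column_stack([np.ones_like(nodes), nodes])
    A = P.T @ (W[:, None] * P)                          # moment matrix A(x)
    B = P.T * W                                         # weighted basis B(x)
    shape = np.array([1.0, x]) @ np.linalg.solve(A, B)  # MLS shape functions at x
    return shape @ u_nodes

print(mls_eval(0.47), np.sin(np.pi * 0.47))  # approximation vs exact value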
Abstract:
The aim of this work is to solve a question raised for average sampling in shift-invariant spaces by using the well-known matrix pencil theory. In many common situations in sampling theory, the available data are samples of some convolution operator acting on the function itself: this leads to the problem of average sampling, also known as generalized sampling. In this paper we deal with the existence of a sampling formula involving these samples and having reconstruction functions with compact support. Thus, low computational complexity is involved and truncation errors are avoided. In practice, it is accomplished by means of a FIR filter bank. An answer is given in the light of the generalized sampling theory by using the oversampling technique: more samples than strictly necessary are used. The original problem reduces to finding a polynomial left inverse of a polynomial matrix intimately related to the sampling problem which, for a suitable choice of the sampling period, becomes a matrix pencil. This matrix pencil approach allows us to obtain a practical method for computing the compactly supported reconstruction functions for the important case where the oversampling rate is minimum. Moreover, the optimality of the obtained solution is established.
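The coefficient-matching behind the matrix pencil approach can be illustrated numerically. The toy Python sketch below (sizes and entries invented) computes a degree-1 polynomial left inverse L(z) = L0 + z·L1 of a tall pencil M(z) = M0 + z·M1 by matching coefficients of L(z)M(z) = I in a single linear system; it mirrors the structure of the problem, not the paper's actual construction.

import numpy as np

rng = np.random.default_rng(1)
m, n, d = 3, 2, 1                               # M(z) is m x n, L(z) has degree d
M0, M1 = rng.normal(size=(m, n)), rng.normal(size=(m, n))

# Matching powers z^0..z^(d+1) of L(z)M(z) = I gives, for X = [L0 L1]:
# L0 M0 = I,  L0 M1 + L1 M0 = 0,  L1 M1 = 0,  i.e.  X @ T = [I 0 0]
T = np.zeros(((d + 1) * m, (d + 2) * n))
for i in range(d + 1):
    T[i*m:(i+1)*m, i*n:(i+1)*n] = M0
    T[i*m:(i+1)*m, (i+1)*n:(i+2)*n] = M1

E = np.zeros((n, (d + 2) * n))
E[:, :n] = np.eye(n)
X = np.linalg.lstsq(T.T, E.T, rcond=None)[0].T  # least-squares solution of X T = E
L0, L1 = X[:, :m], X[:, m:]

print("residual:", np.linalg.norm(X @ T - E))   # ~0 iff a degree-1 left inverse exists
z = 0.7
print(np.round((L0 + z * L1) @ (M0 + z * M1), 8))  # identity matrix at any z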
Abstract:
Membrane systems are computationally equivalent to Turing machines. However, their distributed and massively parallel nature yields polynomial-time solutions to problems whose traditional solutions are non-polynomial. It is therefore important to develop dedicated hardware and software implementations exploiting these two features of membrane systems. In distributed implementations of P systems, a communication bottleneck problem arises: as the number of membranes grows, the network becomes congested. The purpose of distributed architectures is to reach a compromise between the massively parallel character of the system and the time needed for an evolution step, i.e. to transit from one configuration of the system to the next one, thereby solving the communication bottleneck problem. The goal of this paper is twofold. Firstly, to survey in a systematic and uniform way the main results regarding how membranes can be placed on processors in order to obtain a software/hardware simulation of P systems in a distributed environment. Secondly, we improve some results about the membrane dissolution problem, prove that it is connected, and discuss the possibility of simulating this property in the distributed model. All this yields an improvement in the implementation of system parallelism, since it increases the parallelism of the external communication among processors. The proposed ideas improve on previous architectures for tackling the communication bottleneck problem, through a reduction of the total time of an evolution step, an increase in the number of membranes that can run on a processor, and a reduction in the number of processors.
Linear global instability of non-orthogonal incompressible swept attachment-line boundary layer flow
Abstract:
Instability of the orthogonal swept attachment-line boundary layer has received attention by local [1, 2] and global [3-5] analysis methods over several decades, owing to the significance of this model to transition to turbulence on the surface of swept wings. However, substantially less attention has been paid to the problem of laminar flow instability in the non-orthogonal swept attachment-line boundary layer; only a local analysis framework has been employed to date [6]. The present contribution addresses this issue from a linear global (BiGlobal) instability analysis point of view in the incompressible regime. Direct numerical simulations have also been performed in order to verify the analysis results and unravel the limits of validity of the Dorrepaal basic flow model [7] analyzed. Cross-validated results document the effect of the angle of attack on the critical conditions identified by Hall et al. [1] and show linear destabilization of the flow with decreasing AoA, up to a limit at which the assumptions of the Dorrepaal model become questionable. Finally, a simple extension of the extended Görtler-Hämmerlin ODE-based polynomial model proposed by Theofilis et al. [4] is presented for the non-orthogonal flow. In this model, the symmetries of the three-dimensional disturbances are broken by the non-orthogonal flow conditions. Temporal and spatial one-dimensional linear eigenvalue codes were developed, obtaining results consistent with BiGlobal stability analysis and DNS. Beyond its computational advantages, the ODE-based model allows us to understand the functional dependence of the three-dimensional disturbances in the non-orthogonal case, as well as their connections with the disturbances of the orthogonal stability problem.
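ODE-based models of this kind are usually discretized by spectral collocation and reduced to a small eigenvalue problem. The Python sketch below applies a standard Chebyshev collocation matrix (Trefethen's construction) to a simple advection-diffusion model operator with homogeneous Dirichlet conditions; it illustrates the machinery of such one-dimensional eigenvalue codes only and is not the Görtler-Hämmerlin system of the paper.

import numpy as np

def cheb(N):
    # Chebyshev differentiation matrix and Gauss-Lobatto points (Trefethen)
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

N, Re = 64, 200.0                     # resolution and a model Reynolds number
D, x = cheb(N)
U = 1.0 - x**2                        # model base-flow profile (not the Dorrepaal flow)
L = -np.diag(U) @ D + (D @ D) / Re    # dq/dt = L q

Li = L[1:-1, 1:-1]                    # enforce q = 0 at both boundaries (Dirichlet)
omega = np.linalg.eigvals(Li)
print("least-stable eigenvalue:", omega[np.argmax(omega.real)])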
Abstract:
Research has been carried out on two-lane highways in the Madrid Region to propose an alternative model for the speed-flow relationship using regular loop data. The model is different in shape and, in some cases, in slope from the contents of the Highway Capacity Manual (HCM). A model is proposed for a road in a mountainous area, a case for which the HCM does not explicitly provide a solution. The problem of a mountain road carrying high flows to a popular recreational area is discussed, and some solutions are proposed. Seven one-way sections of two-lane highways were selected, aiming to cover a significant number of different characteristics, in order to verify the proposed method across the different classes of highways into which the Manual classifies them. A large amount of data was used to formulate the model and to verify the basic variables of these types of roads. The counts were collected in the same way that the Madrid Region Highway Agency performs its counts: a total of 1,471 hours, in 5-minute periods. The models have been verified by means of specific statistical tests (R², Student's t, Durbin-Watson, ANOVA, etc.) and with diagnostics of the underlying regression assumptions (normality, linearity, homoscedasticity and independence). The model proposed for this type of highway under base conditions can explain the different behaviors as traffic volumes increase, and follows an S-shaped polynomial multiple regression model of order 3. As secondary results of this research, the levels of service and the capacities of this road have been measured with the 2000 HCM methodology, and the results are discussed.
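The order-3 polynomial fit and two of the diagnostics named in this abstract can be reproduced in a few lines of Python. The 5-minute aggregates below are invented, not the Madrid loop data:

import numpy as np

flow = np.array([100, 300, 500, 700, 900, 1100, 1300, 1500], dtype=float)  # veh/h
speed = np.array([92, 90, 88, 84, 77, 70, 66, 64], dtype=float)            # km/h

coeffs = np.polyfit(flow, speed, deg=3)  # S-shaped cubic speed-flow model
resid = speed - np.polyval(coeffs, flow)

r2 = 1 - np.sum(resid**2) / np.sum((speed - speed.mean())**2)
dw = np.sum(np.diff(resid)**2) / np.sum(resid**2)  # ~2 indicates no autocorrelation
print(f"R^2 = {r2:.3f}, Durbin-Watson = {dw:.2f}")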