952 results for Chebyshev polynomial
Abstract:
A three-level satellite-to-ground monitoring scheme for conservation easement monitoring has been implemented in which high-resolution imagery serves as an intermediate step for inspecting high-priority sites. A digital vertical aerial camera system was developed to fulfill the need for an economical source of imagery for this intermediate step. A method for attaching the camera system to small aircraft was designed, and the camera system was calibrated and tested. To ensure that the images obtained were of suitable quality for use in Level 2 inspections, rectified imagery was required to provide positional accuracy of 5 meters or less, comparable to current commercially available high-resolution satellite imagery. Focal length calibration was performed to determine the infinity focal length at two lens settings (24mm and 35mm) with a precision of 0.1mm. A known focal length is required for the creation of navigation points representing locations to be photographed (waypoints). Photographing an object of known size at known distances on a test range yielded focal length estimates of 25.1mm and 35.4mm for the 24mm and 35mm lens settings, respectively. Constants required for distortion removal procedures were obtained using analytical plumb-line calibration procedures for both lens settings, with mild distortion found at the 24mm setting and virtually no distortion at the 35mm setting. The system was designed to operate in a series of stages: mission planning, mission execution, and post-mission processing. During mission planning, waypoints are created using custom tools in geographic information system (GIS) software. During mission execution, the camera is connected to a laptop computer with a global positioning system (GPS) receiver attached. Customized mobile GIS software accepts position information from the GPS receiver, provides information for navigation, and automatically triggers the camera upon reaching the desired location.
Post-mission processing (rectification) removed lens distortion effects, corrected the imagery for horizontal displacement due to terrain variations (relief displacement), and related the images to ground coordinates, using no more than a second-order polynomial warping function. Accuracy testing was performed to verify the positional accuracy capabilities of the system in an ideal-case scenario as well as a real-world case. Using many well-distributed and highly accurate control points on flat terrain, the rectified images yielded a median positional accuracy of 0.3 meters. Imagery captured over commercial forestland with varying terrain in eastern Maine, rectified to digital orthophoto quadrangles, yielded a median positional accuracy of 2.3 meters, with accuracies of 3.1 meters or better in 75 percent of measurements made. These accuracies were well within performance requirements. The images from the digital camera system are of high quality, displaying significant detail at common flying heights. At common flying heights the ground resolution of the camera system ranges between 0.07 meters and 0.67 meters per pixel, satisfying the requirement that imagery be of comparable resolution to current high-resolution satellite imagery. Given the high resolution of the imagery, the positional accuracy attainable, and the convenience with which it is operated, the digital aerial camera system developed is a potentially cost-effective solution for use in the intermediate step of a satellite-to-ground conservation easement monitoring scheme.
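The quoted ground resolutions follow from the standard ground sample distance (GSD) relation, GSD = pixel pitch × flying height / focal length. A minimal sketch, in which the pixel pitch and flying height are illustrative assumptions rather than values reported for this camera system:

```python
# Hypothetical sketch of the ground sample distance (GSD) relation behind the
# per-pixel ground resolution figures: GSD = pixel_pitch * height / focal_length.
# The 9-micron pixel pitch and 500 m flying height are invented for illustration.

def ground_resolution(pixel_pitch_m: float, flying_height_m: float,
                      focal_length_m: float) -> float:
    """Ground distance covered by one pixel, in meters."""
    return pixel_pitch_m * flying_height_m / focal_length_m

# Example with the 25.1 mm calibrated focal length from the abstract.
gsd = ground_resolution(9e-6, 500.0, 25.1e-3)
print(round(gsd, 3))  # ground resolution in meters per pixel
```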
Abstract:
An introduction to Legendre polynomials as a precursor to studying angular momentum in quantum chemistry.
Abstract:
The Frobenius solution of the differential equation associated with the quantum-mechanical harmonic oscillator is carried out in detail.
Abstract:
A single-issue spatial election is a voter preference profile derived from an arrangement of candidates and voters on a line, with each voter preferring the nearer of each pair of candidates. We provide a polynomial-time algorithm that determines whether a given preference profile is a single-issue spatial election and, if so, constructs such an election. This result also has preference representation and mechanism design applications.
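The definition above can be made concrete: given positions on a line, each voter's ranking is obtained by sorting candidates by distance. A minimal sketch of the forward direction (constructing the profile from positions, not the paper's recognition algorithm, which solves the harder inverse problem):

```python
# Illustrative sketch: derive a single-issue spatial preference profile from
# candidate and voter positions on a line. Each voter ranks candidates by
# increasing distance; ties are broken by candidate position for determinism.
# This is NOT the paper's polynomial-time recognition algorithm.

def spatial_profile(voters, candidates):
    """Return, for each voter position, candidate indices sorted by distance."""
    return [sorted(range(len(candidates)),
                   key=lambda c: (abs(candidates[c] - v), candidates[c]))
            for v in voters]

profile = spatial_profile(voters=[0.1, 0.5, 0.9], candidates=[0.0, 0.4, 1.0])
print(profile)  # the middle voter ranks the candidate at 0.4 first
```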
Abstract:
The radial part of the Schrödinger equation for the hydrogen atom's electron involves Laguerre polynomials; hence this introduction.
Abstract:
Standard aspects of Legendre polynomials are treated here, including the dipole moment expansion, generating functions, etc.
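The generating function mentioned above yields Bonnet's recurrence, (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x), which gives a simple way to evaluate P_n numerically. A minimal sketch:

```python
def legendre(n: int, x: float) -> float:
    """Evaluate the Legendre polynomial P_n(x) via Bonnet's recurrence:
    (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x)."""
    p_prev, p = 1.0, x          # P_0 and P_1
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

print(legendre(2, 0.5))  # P_2(0.5) = (3*0.25 - 1)/2 = -0.125
```

A quick sanity check is the identity P_n(1) = 1 for all n.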
Abstract:
A characterization of a property of binary relations is of finite type if it is stated in terms of ordered T-tuples of alternatives for some positive integer T. A characterization of finite type can be used to determine in polynomial time whether a binary relation over a finite set has the property characterized. Unfortunately, Pareto representability in R^2 has no characterization of finite type (Knoblauch, 2002). This result is generalized below to R^l for l larger than 2. The method of proof is applied to other properties of binary relations.
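As a toy instance of the finite-type idea: transitivity is characterized by ordered 3-tuples, so it can be checked by scanning all |X|^3 triples. A minimal sketch (my own illustration, not from the paper):

```python
from itertools import product

# A finite-type characterization can be verified by scanning all ordered
# T-tuples. For transitivity (T = 3) over a relation R on a finite set X,
# this is an O(|X|^3) test -- a toy case of the polynomial-time determination
# mentioned in the abstract.

def is_transitive(X, R):
    """R is a set of ordered pairs over X; check (a,b),(b,c) in R => (a,c) in R."""
    return all((a, c) in R
               for a, b, c in product(X, repeat=3)
               if (a, b) in R and (b, c) in R)

print(is_transitive({1, 2, 3}, {(1, 2), (2, 3), (1, 3)}))  # True
print(is_transitive({1, 2, 3}, {(1, 2), (2, 3)}))          # False
```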
Abstract:
The joint modeling of longitudinal and survival data is a new approach with many applications, such as HIV studies, cancer vaccine trials and quality-of-life studies. There have been recent developments in the methodologies for each component of the joint model, as well as in the statistical processes that link them together. Among these, second-order polynomial random effect models and linear mixed effects models are the most commonly used for the longitudinal trajectory function. In this study, we first relax the parametric constraints of polynomial random effect models by using Dirichlet process priors; then three longitudinal markers, rather than only one, are considered in one joint model. Second, we use a linear mixed effect model for the longitudinal process in a joint model analyzing the three markers. These methods were applied to the Primary Biliary Cirrhosis sequential data, collected from a clinical trial of primary biliary cirrhosis (PBC) of the liver conducted between 1974 and 1984 at the Mayo Clinic. The effects on patients' survival of three longitudinal markers, (1) Total Serum Bilirubin, (2) Serum Albumin and (3) Serum Glutamic-Oxaloacetic Transaminase (SGOT), were investigated. The proportion of treatment effect was also studied using the proposed joint modeling approaches. Based on the results, we conclude that the proposed modeling approaches yield a better fit to the data and give less biased parameter estimates for these trajectory functions than previous methods. Model fit is also improved by considering three longitudinal markers instead of only one. The analysis of the proportion of treatment effect from these joint models supports the same conclusion as the final model of Fleming and Harrington (1991): Bilirubin and Albumin together have a stronger impact in predicting patients' survival and can serve as surrogate endpoints for treatment.
Abstract:
Objectives. This paper seeks to assess the effect on statistical power of regression model misspecification in a variety of situations. Methods and results. The effect of misspecification in regression can be approximated by evaluating the correlation between the correct specification and the misspecification of the outcome variable (Harris 2010). In this paper, three misspecified models (linear, categorical and fractional polynomial) were considered. In the first section, the mathematical method of calculating the correlation between correct and misspecified models with simple mathematical forms was derived and demonstrated. In the second section, data from the National Health and Nutrition Examination Survey (NHANES 2007-2008) were used to examine such correlations. Our study shows that, compared with the linear or categorical models, the fractional polynomial models, with their higher correlations, provided a better approximation of the true relationship, as illustrated by LOESS regression. In the third section, we present the results of simulation studies demonstrating that overall misspecification in regression can produce marked decreases in power with small sample sizes. However, the categorical model had the greatest power, ranging from 0.877 to 0.936 depending on sample size and outcome variable used. The power of the fractional polynomial model was close to that of the linear model, which ranged from 0.69 to 0.83, and appeared to be affected by the increased degrees of freedom of this model. Conclusion. Correlations between alternative model specifications can be used to provide a good approximation of the effect of misspecification on statistical power when the sample size is large. When model specifications have known simple mathematical forms, such correlations can be calculated mathematically. Actual public health data from NHANES 2007-2008 were used as examples to demonstrate situations with an unknown or complex correct model specification.
Simulation of power for misspecified models confirmed the results based on correlation methods and also illustrated the effect of model degrees of freedom on power.
Abstract:
The hierarchical linear growth model (HLGM), as a flexible and powerful analytic method, has played an increasingly important role in psychology, public health and the medical sciences in recent decades. Mostly, researchers who conduct HLGM are interested in the treatment effect on individual trajectories, which can be indicated by the cross-level interaction effects. However, the statistical hypothesis test for the effect of cross-level interaction in HLGM shows only whether there is a significant group difference in the average rate of change, rate of acceleration or higher polynomial effect; it fails to convey information about the magnitude of the difference between the group trajectories at a specific time point. Thus, reporting and interpreting effect sizes have received increased emphasis in HLGM in recent years, due to the limitations of, and increased criticism of, statistical hypothesis testing. However, most researchers fail to report these model-implied effect sizes for group trajectory comparisons and their corresponding confidence intervals in HLGM analysis, owing to the lack of appropriate standard functions to estimate the effect sizes associated with the model-implied difference between group trajectories in HLGM, and the lack of computing packages in popular statistical software to calculate them automatically. The present project is the first to establish appropriate computing functions to assess the standardized difference between group trajectories in HLGM. We propose two functions to estimate effect sizes for the model-based difference between group trajectories at a specific time, and we also suggest robust effect sizes to reduce the bias of the estimated effect sizes.
We then applied the proposed functions to estimate the population effect sizes (d) and robust effect sizes (du) for the cross-level interaction in HLGM using three simulated datasets; we also compared three methods of constructing confidence intervals around d and du and recommended the best one for application. Finally, we constructed 95% confidence intervals with the suitable method for the effect sizes obtained from the three simulated datasets. The effect sizes between group trajectories for the three simulated longitudinal datasets indicated that even when the statistical hypothesis test shows no significant difference between group trajectories, effect sizes between these trajectories can still be large at some time points. Therefore, effect sizes between group trajectories in HLGM analysis provide additional and meaningful information for assessing the group effect on individual trajectories. In addition, we compared three methods of constructing 95% confidence intervals around the corresponding effect sizes, which address the uncertainty of the effect sizes relative to the population parameter. We suggest the noncentral t-distribution based method when the assumptions hold, and the bootstrap bias-corrected and accelerated method when they are not met.
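The simplest member of the family of quantities discussed above is a standardized mean difference between two groups' outcomes at a single time point. A minimal sketch with invented data (not the project's proposed HLGM functions, which work on model-implied trajectories):

```python
import statistics

# Hedged sketch: a plain Cohen's-d-style standardized mean difference between
# two groups at one time point, the kind of quantity the model-implied effect
# sizes above generalize. The data values are invented for illustration.

def cohens_d(group1, group2):
    """Standardized mean difference using a pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = statistics.variance(group1), statistics.variance(group2)
    pooled_sd = (((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(group1) - statistics.mean(group2)) / pooled_sd

print(round(cohens_d([5.1, 5.9, 6.2, 5.6], [4.2, 4.8, 4.5, 5.0]), 2))
```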
Abstract:
A new method for the objective evaluation of color in table olives was applied, based on analyzing the reflection intensity of each of the primary colors composing white light (red, green and blue), according to the wavelengths of the RGB system. Computer programs were used to analyze 24-bit BMP digital color images. This work provides further information on the browning of natural olives in brine, which would be very useful for increasing the effectiveness of the process. The proposed method is fast and non-destructive, and promises to be very practical since it allows the same sample to be evaluated over time. Color changes were investigated in naturally processed olives of different degrees of ripeness (turning-color, red and black) and at different pH values (3.6, 4.0, 4.5), exposed to air for increasing periods of time. The degree of darkening was quantified through reflection intensity indices. The evolution of the reflection index as a function of time produced a 4th-degree polynomial curve that revealed the sigmoidal behavior of the enzymatic browning phenomenon, with maximum correlation at 8 hours of aeration. This function would allow the browning phenomenon in black olives to be predicted, and it represents an objective measurement of the relative degree of browning. The evolution of the red color (λ = 700.0 nm) exhibited the highest correlation with the browning process. Natural red olives at pH 4.5 showed optimal browning. The reflection spectrum for blue (λ = 435.8 nm) is suggested as a measure of the activity of the enzyme PPO (polyphenol oxidase).
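The curve-fitting step above amounts to a least-squares fit of a 4th-degree polynomial to (time, reflection-index) pairs. A minimal self-contained sketch via the normal equations; the data points are invented stand-ins for the sigmoidal browning curve, not the study's measurements:

```python
# Sketch of fitting a 4th-degree polynomial by least squares, as used for the
# reflection-index-vs-time browning curve. Data values are invented.

def polyfit(xs, ys, degree):
    """Least-squares polynomial coefficients (constant term first), via the
    normal equations solved with Gaussian elimination and partial pivoting."""
    m = degree + 1
    # Normal equations A @ coeffs = b, with A[i][j] = sum of x^(i+j).
    A = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    for col in range(m):                      # forward elimination
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * c for a, c in zip(A[r], A[col])]
            b[r] -= f * b[col]
    coeffs = [0.0] * m
    for r in reversed(range(m)):              # back substitution
        coeffs[r] = (b[r] - sum(A[r][c] * coeffs[c]
                                for c in range(r + 1, m))) / A[r][r]
    return coeffs

hours = [0, 2, 4, 6, 8, 10, 12]
index = [1.0, 1.1, 1.6, 2.8, 3.9, 4.4, 4.5]   # sigmoid-shaped browning index
print([round(c, 4) for c in polyfit(hours, index, 4)])
```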
Abstract:
The Palestine Exploration Fund (PEF) Survey of Western Palestine (1871-1877) is highly praised for its accuracy and completeness; the first systematic analysis of its planimetric accuracy was published by Levin (2006). To study the potential of these 1:63,360 maps for a quantitative analysis of land cover changes over time, Levin compared them with 20th-century topographic maps. The map registration error of the PEF maps was 74.4 m, using 123 control points at trigonometrical stations and a 1st-order polynomial. The median RMSE of all control and test points (n = 1104) was 153.6 m. Following the georeferencing of each of the 26 sheets of the PEF maps of the Survey of Western Palestine, a mosaicked file was created. Care should be taken when analysing historical maps, as it cannot be assumed that their accuracy is consistent across different parts or for different features depicted on them.
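The error figures above come from comparing positions predicted by the georeferencing transform with surveyed control positions. A minimal sketch of that bookkeeping (the coordinates are invented; the 1st-order polynomial transform itself is not shown):

```python
import statistics

# Sketch of georeferencing accuracy assessment: Euclidean errors of predicted
# map positions against surveyed control positions, summarized as RMSE and
# median error. All coordinate values here are invented for illustration.

def position_errors(predicted, surveyed):
    """Euclidean error of each predicted (x, y) against its true position."""
    return [((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
            for (px, py), (ax, ay) in zip(predicted, surveyed)]

pred = [(100.0, 200.0), (310.0, 405.0), (505.0, 601.0)]
ctrl = [(103.0, 204.0), (306.0, 402.0), (505.0, 613.0)]
errs = position_errors(pred, ctrl)
rmse = (sum(e * e for e in errs) / len(errs)) ** 0.5
print(round(rmse, 1), round(statistics.median(errs), 1))
```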
Abstract:
Detailed information about the sediment properties and microstructure can be provided through the analysis of digital ultrasonic P wave seismograms recorded automatically during full waveform core logging. The physical parameter which predominantly affects elastic wave propagation in water-saturated sediments is the P wave attenuation coefficient. The related sedimentological parameter is the grain size distribution. A set of high-resolution ultrasonic transmission seismograms (ca. 50-500 kHz), which indicate downcore variations in grain size by their signal shape and frequency content, is presented. Layers of coarse-grained foraminiferal ooze can be identified by highly attenuated P waves, whereas almost unattenuated waves are recorded in fine-grained areas of nannofossil ooze. Color-encoded pixel graphics of the seismograms and instantaneous frequencies present full waveform images of the lithology and attenuation. A modified spectral difference method is introduced to determine the attenuation coefficient and its power law a = k·f^n. Applied to synthetic seismograms derived using a "constant Q" model, even low attenuation coefficients can be quantified. A downcore analysis gives an attenuation log which ranges from ca. 700 dB/m at 400 kHz and a power of n = 1-2 in coarse-grained sands to a few decibels per meter and n ≈ 0.5 in fine-grained clays. A least squares fit of a second-degree polynomial describes the mutual relationship between the mean grain size and the attenuation coefficient. When it is used to predict the mean grain size, an almost perfect coincidence with the values derived from sedimentological measurements is achieved.
Abstract:
A unique macroseismic data set for the strongest earthquakes that have occurred since 1940 in the Vrancea region is constructed through a thorough review of all available sources. Inconsistencies and errors in the reported data and in their use are analyzed as well. The final data set, free from inconsistencies, including those at the political borders, contains 9822 observations for the strong intermediate-depth earthquakes of 1940 (Mw=7.7), 1977 (Mw=7.4), 1986 (Mw=7.1), May 30, 1990 (Mw=6.9), May 31, 1990 (Mw=6.4), and 2004 (Mw=6.0). This data set is available electronically as supplementary data for the present paper. From the discrete macroseismic data the continuous macroseismic field is generated using the methodology developed by Molchan et al. (2002), which, along with the unconventional smoothing method Modified Polynomial Filtering (MPF), uses the Diffused Boundary (DB) method, which visualizes the uncertainty in the isoseismal boundaries. Comparing DBs with previously published isoseismal maps provides a good criterion for evaluating the reliability of those earlier maps. The produced isoseismals can be used not only for the formal comparison between observed and theoretical isoseismals, but also for the retrieval of source properties and the assessment of local responses (Molchan et al., 2011).
Abstract:
Total sediment oxygen consumption rates (TSOC or Jtot), measured during sediment-water incubations, and sediment oxygen microdistributions were studied at 16 stations in the Arctic Ocean (Svalbard area). The oxygen consumption rates ranged between 1.85 and 11.2 mmol m^-2 d^-1, and oxygen penetrated from 5.0 to >59 mm into the investigated sediments. Measured TSOC exceeded the calculated diffusive oxygen fluxes (Jdiff) by 1.1-4.8 times. Diffusive fluxes across the sediment-water interface were calculated using the whole measured microprofiles, rather than the linear oxygen gradient in the top sediment layer. The lack of a significant correlation between the observed abundances of bioirrigating meiofauna and high Jtot/Jdiff ratios, as well as minor discrepancies in measured TSOC between replicate sediment cores, suggests that molecular diffusion, not bioirrigation, is the most important transport mechanism for oxygen across the sediment-water interface and within these sediments. The high Jtot/Jdiff ratios obtained for some stations were therefore attributed to topographic factors, i.e. underestimation of the actual sediment surface area when one-dimensional diffusive fluxes were calculated, or to sampling artifacts during core recovery from great water depths. Measured TSOC correlated with water depth raised to the -0.4 to -0.5 power (TSOC ∝ water depth^(-0.4 to -0.5)) for all investigated stations, but the stations could be divided into two groups representing different geographical areas with different sediment oxygen consumption characteristics. The differences in TSOC between the two areas were suggested to reflect hydrographic factors (such as ice coverage and import/production of reactive particulate organic material) related to the dominating water mass (Atlantic or polar) in each of the two areas. The good correlation between TSOC and water depth^(-0.4 to -0.5) rules out any of the stations investigated being topographic depressions with pronounced enhanced sediment oxygen consumption.