959 results for generalized canonical correlation analysis


Relevance: 40.00%

Abstract:

Generalized linear mixed models (GLMMs) provide an elegant framework for the analysis of correlated data. Due to the non-closed form of the likelihood, GLMMs are often fit by computational procedures like penalized quasi-likelihood (PQL). Special cases of these models are generalized linear models (GLMs), which are often fit using algorithms like iterative weighted least squares (IWLS). High computational costs and memory constraints often make it difficult to apply these iterative procedures to data sets with a very large number of cases. This paper proposes a computationally efficient strategy based on the Gauss-Seidel algorithm that iteratively fits sub-models of the GLMM to subsetted versions of the data. Additional gains in efficiency are achieved for Poisson models, commonly used in disease mapping problems, because their special collapsibility property allows data reduction through summaries. Convergence of the proposed iterative procedure is guaranteed for canonical link functions. The strategy is applied to investigate the relationship between ischemic heart disease, socioeconomic status and age/gender category in New South Wales, Australia, based on outcome data consisting of approximately 33 million records. A simulation study demonstrates the algorithm's reliability in analyzing a data set with 12 million records for a (non-collapsible) logistic regression model.
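Since the abstract includes no code, here is a minimal numpy sketch of the IWLS iteration it refers to, for a Poisson GLM with the canonical log link (the canonical-link case for which the paper guarantees convergence); the function name and toy data are illustrative, not the paper's:

```python
import numpy as np

def iwls_poisson(X, y, n_iter=25, tol=1e-8):
    """Iteratively weighted least squares for a Poisson GLM with the
    canonical log link: E[y] = exp(X @ beta)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta                    # linear predictor
        mu = np.exp(eta)                  # inverse link
        w = mu                            # working weights (Var[y] = mu)
        z = eta + (y - mu) / mu           # working response
        beta_new = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * z))
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(1000), rng.normal(size=1000)])
y = rng.poisson(np.exp(X @ np.array([0.5, 0.3])))
print(iwls_poisson(X, y))  # recovers coefficients close to [0.5, 0.3]
```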

Relevance: 40.00%

Abstract:

BACKGROUND: Intravascular ultrasound of drug-eluting stent (DES) thrombosis (ST) reveals a high incidence of incomplete stent apposition (ISA) and vessel remodeling. Autopsy specimens of DES ST show delayed healing and hypersensitivity reactions. The present study sought to correlate histopathology of thrombus aspirates with intravascular ultrasound findings in patients with very late DES ST. METHODS AND RESULTS: The study population consisted of 54 patients (28 patients with very late DES ST and 26 controls). Of 28 patients with very late DES ST, 10 patients (1020±283 days after implantation) with 11 ST segments (5 sirolimus-eluting stents, 5 paclitaxel-eluting stents, 1 zotarolimus-eluting stent) underwent both thrombus aspiration and intravascular ultrasound investigation. ISA was present in 73% of cases with an ISA cross-sectional area of 6.2±2.4 mm² and evidence of vessel remodeling (index, 1.6±0.3). Histopathological analysis showed pieces of fresh thrombus with inflammatory cell infiltrates (DES, 263±149 white blood cells per high-power field) and eosinophils (DES, 20±24 eosinophils per high-power field; sirolimus-eluting stents, 34±28; paclitaxel-eluting stents, 6±6; P for sirolimus-eluting stents versus paclitaxel-eluting stents=0.09). The mean number of eosinophils per high-power field was higher in specimens from very late DES ST (20±24) than in those from spontaneous acute myocardial infarction (7±10), early bare-metal stent ST (1±1), early DES ST (1±2), and late bare-metal stent ST (2±3; P from ANOVA=0.038). Eosinophil count correlated with ISA cross-sectional area, with an average increase of 5.4 eosinophils per high-power field per 1-mm² increase in ISA cross-sectional area. CONCLUSIONS: Very late DES thrombosis is associated with histopathological signs of inflammation and intravascular ultrasound evidence of vessel remodeling. Compared with other causes of myocardial infarction, eosinophilic infiltrates are more common in thrombi harvested from very late DES thrombosis, particularly in sirolimus-eluting stents, and correlate with the extent of stent malapposition.

Relevance: 40.00%

Abstract:

The combination of scaled analogue experiments, material mechanics, X-ray computed tomography (XRCT) and digital volume correlation (DVC) techniques is a powerful new tool not only for examining the three-dimensional structure and kinematic evolution of complex deformation structures in scaled analogue experiments, but also for fully quantifying their spatial strain distribution and complete strain history. Digital image correlation (DIC) is an important advance in quantitative physical modelling and helps to understand non-linear deformation processes. These optical, non-intrusive techniques enable the quantification of localised and distributed deformation in analogue experiments, based either on images taken through transparent sidewalls (2D DIC) or on surface views (3D DIC). XRCT analysis permits the non-destructive visualisation of the internal structure and kinematic evolution of scaled analogue experiments simulating the tectonic evolution of complex geological structures. Combining XRCT sectional image data of analogue experiments with 2D DIC only allows quantification of 2D displacement and strain components in the section direction, which leaves the potential of CT experiments for full 3D strain analysis of complex, non-cylindrical deformation structures untapped. In this study, we apply DVC techniques to XRCT scan data of “solid” analogue experiments to fully quantify the internal displacement and strain in three dimensions over time. Our first results indicate that the application of DVC techniques to XRCT volume data can successfully quantify the 3D spatial and temporal strain patterns inside analogue experiments. We demonstrate the potential of combining DVC techniques and XRCT volume imaging for 3D strain analysis of a contractional experiment simulating the development of a non-cylindrical pop-up structure. Furthermore, we discuss various options for optimising granular materials, pattern generation, and data acquisition for increased resolution and accuracy of the strain results. Three-dimensional strain analysis of analogue models is of particular interest for geological and seismic interpretations of complex, non-cylindrical geological structures. The volume strain data enable the analysis of the large-scale and small-scale strain history of geological structures.
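A toy sketch of the core DVC operation (numpy and scipy assumed; names and data are illustrative, not from the study): the displacement of a deformed subvolume relative to a reference is found at the peak of their 3D cross-correlation:

```python
import numpy as np
from scipy.ndimage import shift

def dvc_displacement(ref, deformed):
    """Estimate the integer-voxel displacement between two 3D volumes
    from the peak of their FFT-based circular cross-correlation."""
    corr = np.fft.ifftn(np.fft.fftn(ref).conj() * np.fft.fftn(deformed)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # indices past the half-size wrap around to negative displacements
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(1)
ref = rng.normal(size=(32, 32, 32))          # speckle-like reference volume
deformed = shift(ref, (3, -2, 1), order=1)   # impose a known displacement
print(dvc_displacement(ref, deformed))       # -> (3, -2, 1)
```

A full DVC analysis repeats this per subvolume and refines to sub-voxel precision, but the correlation peak search above is the heart of the method.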

Relevance: 40.00%

Abstract:

Despite long-standing calls for patient-focused research on individuals with generalized anxiety disorder (GAD), there is little systematized knowledge about the in-session behaviors of these patients. The primary objective of this study was to describe the in-session trajectories of the patients' level of explication (as an indicator of an elaborated exposure of negative emotionality) and of the patients' focus on their own resources, and to examine how these trajectories are associated with post-treatment outcome. With respect to GAD patients, a high level of explication might be seen as an indicator of successful exposure of avoided negative emotionality during therapy sessions. Observers made minute-by-minute ratings of 1100 minutes of video from 20 patient-therapist dyads. The results indicated that a higher level of explication observed at a later stage of the therapy sessions, together with the patients' focus on their own competencies at an early stage, was highly associated with positive therapy outcome at post-treatment assessment, independent of pretreatment distress, rapid response of well-being and symptom reduction, and the therapists' professional experience and therapy length. These results are discussed from the perspective of patients' emotion regulation and therapists' counterregulation. It is assumed that GAD patients are especially skilled in masking difficult emotions. Explication level and emotion regulation are important variables for this patient group, but their relation to outcome differs.

Relevance: 40.00%

Abstract:

Many studies in biostatistics deal with binary data. Some of these studies involve correlated observations, which can complicate the analysis of the resulting data. Studies of this kind typically arise when a high degree of commonality exists between test subjects. If there is a natural hierarchy in the data, multilevel analysis is an appropriate tool. Two examples are measurements on identical twins and the study of symmetrical organs or appendages, as in ophthalmic studies. Although this type of matching appears ideal for the purposes of comparison, analysis of the resulting data while ignoring the effect of intra-cluster correlation has been shown to produce biased results. This paper explores the use of multilevel modeling of simulated binary data with predetermined levels of correlation. Data are generated using the Beta-Binomial method with varying degrees of correlation between the lower-level observations, and analyzed using the multilevel software package MLwiN (Woodhouse et al., 1995). Comparisons between the specified intra-cluster correlation of these data and the correlations estimated by multilevel analysis are used to examine the accuracy of this technique in analyzing this type of data.
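A minimal numpy sketch of the Beta-Binomial generation scheme the abstract describes (function and variable names are illustrative): each cluster draws its own success probability from a Beta distribution parameterized so that the mean is p and the intra-cluster correlation is rho, then draws correlated Bernoulli outcomes from it:

```python
import numpy as np

def beta_binomial_clusters(n_clusters, cluster_size, p, rho, rng):
    """Simulate clustered binary data with mean p and intra-cluster
    correlation rho: for Beta(a, b), ICC = 1 / (a + b + 1)."""
    a = p * (1 - rho) / rho
    b = (1 - p) * (1 - rho) / rho
    cluster_p = rng.beta(a, b, size=n_clusters)           # one p per cluster
    return rng.binomial(1, np.repeat(cluster_p, cluster_size))

rng = np.random.default_rng(42)
y = beta_binomial_clusters(n_clusters=500, cluster_size=2, p=0.3, rho=0.2, rng=rng)
print(y.mean())  # close to the specified p = 0.3
```

With a = p(1-rho)/rho and b = (1-p)(1-rho)/rho we get a + b + 1 = 1/rho, so the specified intra-cluster correlation is recovered exactly.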

Relevance: 40.00%

Abstract:

Purpose: To compare assessment capabilities of a motion analysis tool against a validated checklist during laparoscopic training.

Relevance: 40.00%

Abstract:

Using fixed-point arithmetic is one of the most common design choices for systems where area, power, or throughput are heavily constrained. In order to produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. The problem of finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization-noise modelling and word-length optimization methodologies. In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on extensions of intervals have made it possible to obtain accurate models of signal and quantization-noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art methodology based on statistical Modified Affine Arithmetic (MAA) in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the system's statistical moments from the partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by as little as 0.04% from the simulation-based reference values. A known drawback of techniques based on extensions of intervals is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise-injection technique that groups the signals of the system, introduces the noise sources for each group independently, and finally combines the partial results. In this way the number of active noise sources is kept under control at all times and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results caused by the loss of correlation between noise terms, in order to keep the results as accurate as possible.

This thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable time. We present two novel techniques that reduce execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method builds on the fact that, although a given confidence level must be guaranteed for the final results of the search, more relaxed levels, and therefore considerably fewer simulation samples, suffice in the initial stages of the search, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be reduced by factors of up to ×240 for small/medium-sized problems. Finally, this work introduces HOPLITE, an automated, flexible, and modular quantization framework that implements the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new methodologies for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its execution. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
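The word-length optimization loop the thesis accelerates rests on evaluating quantization noise for candidate word-lengths. Below is a minimal Monte-Carlo sketch of that inner evaluation (function names and the toy datapath are illustrative, not HOPLITE's API):

```python
import numpy as np

def quantize(x, frac_bits):
    """Round to a fixed-point grid with `frac_bits` fractional bits
    (overflow handling of the integer field omitted for brevity)."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

def mc_roundoff_power(func, frac_bits, n_samples=100_000, seed=0):
    """Monte-Carlo estimate of round-off noise power: quantized
    evaluation versus the double-precision reference."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, size=n_samples)
    err = quantize(func(quantize(x, frac_bits)), frac_bits) - func(x)
    return np.mean(err ** 2)

f = lambda x: 0.5 * x * x + 0.25 * x          # toy non-linear datapath
for wl in (4, 8, 12):
    print(wl, mc_roundoff_power(f, wl))       # power drops ~4x per extra bit
```

The incremental method described above would run this estimator with few samples early in the greedy search and tighten the confidence level only near the optimized solution.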

Relevance: 40.00%

Abstract:

Understanding spatial distributions and how environmental conditions influence catch-per-unit-effort (CPUE) is important for increased fishing efficiency and sustainable fisheries management. This study investigated the relationship between CPUE, spatial factors, temperature, and depth using generalized additive models. Combinations of factors, and not one single factor, were frequently included in the best model. Parameters which best described CPUE varied by geographic region. The amount of variance, or deviance, explained by the best models ranged from a low of 29% (halibut, Charlotte region) to a high of 94% (sablefish, Charlotte region). Depth, latitude, and longitude influenced most species in several regions. On the broad geographic scale, depth was associated with CPUE for every species, except dogfish. Latitude and longitude influenced most species, except halibut (Areas 4 A/D), sablefish, and cod. Temperature was important for describing distributions of halibut in Alaska, arrowtooth flounder in British Columbia, dogfish, Alaska skate, and Aleutian skate. The species-habitat relationships revealed in this study can be used to create improved fishing and management strategies.
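A minimal sketch (assuming the pyGAM package and synthetic data; the abstract does not state which software was used) of a generalized additive model relating CPUE to smooth terms for depth, latitude, and longitude, as described above:

```python
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(7)
n = 500
depth = rng.uniform(50, 500, n)
lat = rng.uniform(48, 60, n)
lon = rng.uniform(-135, -122, n)
# synthetic CPUE with smooth depth and latitude effects
cpue = 2 + np.sin(depth / 80.0) + 0.05 * (lat - 54) ** 2 + rng.normal(0, 0.3, n)

X = np.column_stack([depth, lat, lon])
gam = LinearGAM(s(0) + s(1) + s(2)).fit(X, cpue)  # one smooth per covariate
gam.summary()  # reports, among other things, the deviance explained
```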

Relevance: 40.00%

Abstract:

This paper presents a new method for producing a functional-structural plant model that simulates response to different growth conditions, yet does not require detailed knowledge of underlying physiology. The example used to present this method is the modelling of the mountain birch tree. This new functional-structural modelling approach is based on linking an L-system representation of the dynamic structure of the plant with a canonical mathematical model of plant function. Growth indicated by the canonical model is allocated to the structural model according to probabilistic growth rules, such as rules for the placement and length of new shoots, which were derived from an analysis of architectural data. The main advantage of the approach is that it is relatively simple compared to the prevalent process-based functional-structural plant models and does not require a detailed understanding of underlying physiological processes, yet it is able to capture important aspects of plant function and adaptability, unlike simple empirical models. This approach, combining canonical modelling, architectural analysis and L-systems, thus fills the important role of providing an intermediate level of abstraction between the two extremes of deeply mechanistic process-based modelling and purely empirical modelling. We also investigated the relative importance of various aspects of this integrated modelling approach by analysing the sensitivity of the standard birch model to a number of variations in its parameters, functions and algorithms. The results show that using light as the sole factor determining the structural location of new growth gives satisfactory results. Including the influence of additional regulating factors made little difference to global characteristics of the emergent architecture. Changing the form of the probability functions and using alternative methods for choosing the sites of new growth also had little effect.
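A toy illustration of the L-system rewriting at the core of the structural model (the production below is illustrative, not the birch model's actual rules, and the allocation of growth by the canonical model is omitted):

```python
def expand(axiom, rules, n):
    """Rewrite the axiom n times using context-free L-system productions;
    symbols without a production are copied unchanged."""
    s = axiom
    for _ in range(n):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# A: apex, F: shoot segment, [ ]: branch, + / -: turn
rules = {"A": "F[+A][-A]"}
print(expand("A", rules, 3))  # bracketed string after three rewriting steps
```

In the paper's approach, probabilistic growth rules would decide which apices actually expand at each step, with the total growth budget supplied by the canonical plant model.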

Relevance: 40.00%

Abstract:

Exercise brachial blood pressure (BP) predicts mortality, but because of wave reflection, central (ascending aortic) pressure differs from brachial pressure. Exercise central BP may be clinically important, and a noninvasive means to derive it would be useful. The purpose of this study was to test the validity of a noninvasive technique to derive exercise central BP. Ascending aortic pressure waveforms were recorded using a micromanometer-tipped 6F Millar catheter in 30 patients (56±9 years; 21 men) undergoing diagnostic coronary angiography. Simultaneous recordings of the derived central pressure waveform were acquired using servocontrolled radial tonometry at rest and during supine cycling. Pulse wave analysis of the direct and derived pressure signals was performed offline (SphygmoCor 7.01). From rest to exercise, mean arterial pressure and heart rate increased by 20±10 mm Hg and 15±7 bpm, respectively, and central systolic BP ranged from 77 to 229 mm Hg. There was good agreement and high correlation between the invasive and noninvasive techniques, with a mean difference (±SD) for central systolic BP of -1.3±3.2 mm Hg at rest and -4.7±3.3 mm Hg at peak exercise (for both, r=0.995; P<0.001). Conversely, systolic BP was significantly higher peripherally than centrally at rest (155±33 versus 138±32 mm Hg; mean difference, -16.3±9.4 mm Hg) and during exercise (180±34 versus 164±33 mm Hg; mean difference, -15.5±10.4 mm Hg; for both, P<0.001). True myocardial afterload is not reliably estimated by peripheral systolic BP. Radial tonometry with pulse wave analysis is an accurate technique for the noninvasive determination of central BP at rest and during exercise.
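A short sketch of the agreement statistics reported above (numpy assumed; the synthetic data merely mimic the resting bias and spread, they are not the study's measurements):

```python
import numpy as np

def agreement(reference, derived):
    """Bland-Altman-style summary: mean difference (bias), SD of the
    differences, and Pearson correlation between the two techniques."""
    d = derived - reference
    r = np.corrcoef(reference, derived)[0, 1]
    return d.mean(), d.std(ddof=1), r

rng = np.random.default_rng(3)
invasive = rng.uniform(77, 229, 30)                 # central systolic BP range
derived = invasive - 1.3 + rng.normal(0, 3.2, 30)   # bias/SD as at rest
bias, sd, r = agreement(invasive, derived)
print(f"bias {bias:.1f} mm Hg, SD {sd:.1f} mm Hg, r = {r:.3f}")
```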

Relevance: 40.00%

Abstract:

1. Pearson's correlation coefficient r only tests whether the data fit a linear model. With large numbers of observations, quite small values of r become significant, and the X variable may account for only a minute proportion of the variance in Y. Hence, the value of r² should always be calculated and included in a discussion of the significance of r.

2. The use of r assumes that a bivariate normal distribution is present, and this assumption should be examined prior to the study. If Pearson's r is not appropriate, then a non-parametric correlation coefficient such as Spearman's r_s may be used.

3. A significant correlation should not be interpreted as indicating causation, especially in observational studies, in which there is a high probability that the two variables are correlated because of their mutual correlations with other variables.

4. In studies of measurement error, there are problems in using r as a test of reliability, and the ‘intra-class correlation coefficient’ should be used as an alternative.

A correlation test provides only limited information about the relationship between two variables. Fitting a regression line to the data using the method known as ‘least squares’ provides much more information; the methods of regression and their application in optometry will be discussed in the next article.
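A quick scipy illustration of points 1 and 2 above (toy data): with n large, even a weak linear signal yields a highly "significant" r whose r² shows that X explains almost none of the variance in Y:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n = 5000
x = rng.normal(size=n)
y = 0.05 * x + rng.normal(size=n)   # weak linear signal buried in noise

r, p = stats.pearsonr(x, y)
rho, p_s = stats.spearmanr(x, y)    # rank-based alternative (point 2)
print(f"r = {r:.3f}, p = {p:.1e}, r^2 = {r*r:.4f}")  # small p, tiny r^2
print(f"Spearman r_s = {rho:.3f}, p = {p_s:.1e}")
```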

Relevance: 40.00%

Abstract:

We estimated trophic position and carbon source for three consumers (Florida gar, Lepisosteus platyrhincus; eastern mosquitofish, Gambusia holbrooki; and riverine grass shrimp, Palaemonetes paludosus) from 20 sites representing gradients of productivity and hydrological disturbance in the southern Florida Everglades, U.S.A. We characterized gross primary productivity at each site using light/dark bottle incubation and stem density of emergent vascular plants. We also documented nutrient availability as total phosphorus (TP) in floc and periphyton, and the density of small fishes. Hydrological disturbance was characterized as the time since a site last dried and the average number of days per year the site was inundated over the previous 10 years. Food-web attributes were estimated in both the wet and dry seasons by analysis of δ15N (trophic position) and δ13C (food-web carbon source) from 702 samples of aquatic consumers. An index of carbon source was derived from a two-member mixing model with Seminole ramshorn snails (Planorbella duryi) as a basal grazing consumer and scuds (the amphipod Hyalella azteca) as a basal detritivore. Snails yielded carbon isotopic values similar to green algae and diatoms, while carbon values of scuds were similar to bulk periphyton and floc; carbon isotopic values of cyanobacteria were enriched in 13C compared to all consumers examined. A carbon source similar to scuds dominated at all but one study site, and though the relative contribution of scud-like and snail-like carbon sources was variable, there was no evidence that these contributions were a function of abiotic factors or season. Gar consistently displayed the highest estimated trophic position of the consumers studied, with mosquitofish feeding at a slightly lower level and grass shrimp feeding at the lowest level. Trophic position was not correlated with any nutrient or productivity parameter, but did increase for grass shrimp and mosquitofish as the time following droughts increased. Trophic position of Florida gar was positively correlated with emergent plant stem density.
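A worked sketch of the two-member δ13C mixing model used for the carbon-source index (the end-member values below are illustrative, not the study's measurements):

```python
def snail_like_fraction(d13c_consumer, d13c_snail, d13c_scud):
    """Two-end-member mixing model: 1.0 means purely snail-like (grazing)
    carbon, 0.0 purely scud-like (detrital) carbon."""
    return (d13c_consumer - d13c_scud) / (d13c_snail - d13c_scud)

# hypothetical per-mil values: snail end-member -22.0, scud end-member -30.0
print(snail_like_fraction(-26.0, d13c_snail=-22.0, d13c_scud=-30.0))  # -> 0.5
```

A consumer midway between the two end-members thus receives an index of 0.5; values near 0 correspond to the scud-like source that dominated at nearly all sites.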

Relevance: 40.00%

Abstract:

This research analyzed the spatial relationship between a mega-scale fracture network and the occurrence of vegetation in an arid region. High-resolution aerial photographs of Arches National Park, Utah were used for digital image processing. Four sets of large-scale joints were digitized from the rectified color photograph in order to characterize the geospatial properties of the fracture network with the aid of a Geographic Information System. An unsupervised land-cover classification was carried out to identify the spatial distribution of vegetation on the fractured outcrop. Results of this study confirm that the WNW-ESE alignment of vegetation is dominantly controlled by the spatial distribution of the systematic joint set, which in turn parallels the regional fold axis. This research provides insight into the spatial heterogeneity inherent to fracture networks, as well as the effects of jointing on the distribution of surface vegetation in desert environments.
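A minimal sketch of an unsupervised land-cover classification of the kind described (scikit-learn assumed; the random array stands in for the rectified aerial photograph):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
image = rng.random((100, 100, 3))               # H x W x spectral bands (toy)
pixels = image.reshape(-1, image.shape[-1])     # one row per pixel

# cluster pixel spectra into 4 unlabeled land-cover classes
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(pixels)
class_map = labels.reshape(image.shape[:2])     # per-pixel class raster
print(np.bincount(labels) / labels.size)        # class area proportions
```

The resulting class raster would then be overlaid on the digitized joint sets in the GIS to test the alignment between vegetation classes and the systematic joints.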