919 results for hierarchical generalized linear model
Abstract:
1. Pearson's correlation coefficient only tests whether the data fit a linear model. With large numbers of observations, quite small values of r become significant, and the X variable may account for only a minute proportion of the variance in Y. Hence, the value of r squared should always be calculated and included in a discussion of the significance of r. 2. The use of r assumes that a bivariate normal distribution is present, and this assumption should be examined prior to the study. If Pearson's r is not appropriate, then a non-parametric correlation coefficient such as Spearman's rs may be used. 3. A significant correlation should not be interpreted as indicating causation, especially in observational studies in which there is a high probability that the two variables are correlated because of their mutual correlations with other variables. 4. In studies of measurement error, there are problems in using r as a test of reliability, and the 'intra-class correlation coefficient' should be used as an alternative. A correlation test provides only limited information about the relationship between two variables. Fitting a regression line to the data using the method known as 'least squares' provides much more information, and the methods of regression and their application in optometry will be discussed in the next article.
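The abstract's first point, that with enough observations a tiny r becomes "significant" while r squared shows how little variance X actually explains, can be illustrated numerically. The sketch below uses synthetic data and SciPy; it is my illustration, not the article's data or code:

```python
# Synthetic illustration: a weak linear effect in a large sample gives a
# highly "significant" Pearson r whose r^2 is nonetheless minute.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)
y = 0.1 * x + rng.normal(size=n)   # X explains only ~1% of Var(Y)

r, p = stats.pearsonr(x, y)
print(f"r = {r:.3f}, p = {p:.3g}, r^2 = {r**2:.4f}")

# If bivariate normality is doubtful, Spearman's rank correlation is the
# usual non-parametric fallback, as the abstract notes.
rs, ps = stats.spearmanr(x, y)
```

The p-value is astronomically small even though r squared is on the order of 0.01, which is exactly why the author recommends always reporting r squared alongside r.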
Abstract:
This study explores the position of diffusion-oriented support mechanisms in European Community (EC) innovation policy. With the shift from the traditional linear model towards an integrative approach to innovation, the diffusion of technologies and knowledge gained greater weight. This shift in the thinking of both academic experts and national policy makers induced EC policy makers to call for similar changes in Community innovation policy. From the mid-1980s, the Commission of the European Communities, the key actor in EC policy making, sought to move its innovation policy away from the traditional science-push approach. This study shows that in the implementation of programmes for research, technology and innovation, the traditional linear model is still dominant. The core research and technological development programmes still operate from a science-push concept of innovation, mainly because of their pre-competitive nature. The case of SPRINT illustrates that policy programmes with an integrated innovation perspective can be successful at Community level; however, the programme operates in relative isolation from overall research and technological development policy. The case of BRITE-EURAM illustrates the difficulty that collaborative research programmes, the bulk of EC support mechanisms, face in moving away from the traditional model. The study shows how conflicting policy objectives arising from the different policy networks that shape EC policy making, combined with a lack of co-ordination across those policy domains, hinder the emergence of the integrated approach. Consequently, EC diffusion policy, implemented from the perspective of the linear model, will have a sub-optimal impact on the competitiveness of European industries.
Abstract:
This paper introduces a new mathematical method for improving the discrimination power of data envelopment analysis and for completely ranking the efficient decision-making units (DMUs). Fuzzy concepts are utilised. For this purpose, all DMUs are first evaluated with the CCR model. Thereafter, the resulting weights for each output are treated as fuzzy sets and are then converted to fuzzy numbers. The introduced model is a multi-objective linear model whose endpoints are the highest and lowest of the weighted values. An added advantage of the model is its ability to handle the infeasibility sometimes faced by previously introduced models.
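The CCR evaluation that the paper's fuzzy ranking builds on can be sketched as an ordinary linear programme. The toy example below (my own single-input, single-output data and formulation, not the authors' model) solves the input-oriented CCR multiplier form with `scipy.optimize.linprog`:

```python
# CCR efficiency (multiplier form) for each DMU o:
#   maximise u.Y_o  subject to  v.X_o = 1  and  u.Y_j - v.X_j <= 0 for all j,
# with non-negative weights u, v. Toy data: one input, one output per DMU.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0], [4.0], [3.0]])   # inputs, one row per DMU
Y = np.array([[2.0], [2.0], [3.0]])   # outputs, one row per DMU

def ccr_efficiency(o):
    n_in, n_out = X.shape[1], Y.shape[1]
    # Decision variables: [v (input weights), u (output weights)].
    c = np.concatenate([np.zeros(n_in), -Y[o]])        # minimise -u.Y_o
    A_eq = [np.concatenate([X[o], np.zeros(n_out)])]   # v.X_o = 1
    A_ub = np.hstack([-X, Y])                          # u.Y_j - v.X_j <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(len(X)),
                  A_eq=A_eq, b_eq=[1.0], method="highs")
    return -res.fun

effs = [ccr_efficiency(o) for o in range(3)]   # DMUs 1 and 3 are efficient
```

In the paper's method, the optimal weights from runs like this are then reinterpreted as fuzzy sets to break ties among the efficient DMUs.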
Abstract:
Objectives: Recently, pattern recognition approaches have been used to classify patterns of brain activity elicited by sensory or cognitive processes. In the clinical context, these approaches have been mainly applied to classify groups of individuals based on structural magnetic resonance imaging (MRI) data. Only a few studies have applied similar methods to functional MRI (fMRI) data. Methods: We used a novel analytic framework to examine the extent to which unipolar and bipolar depressed individuals differed on discrimination between patterns of neural activity for happy and neutral faces. We used data from 18 currently depressed individuals with bipolar I disorder (BD) and 18 currently depressed individuals with recurrent unipolar depression (UD), matched on depression severity, age, and illness duration, and 18 age- and gender ratio-matched healthy comparison subjects (HC). fMRI data were analyzed using a general linear model and Gaussian process classifiers. Results: The accuracy for discriminating between patterns of neural activity for happy versus neutral faces overall was lower in both patient groups relative to HC. The predictive probabilities for intense and mild happy faces were higher in HC than in BD, and for mild happy faces were higher in HC than UD (all p < 0.001). Interestingly, the predictive probability for intense happy faces was significantly higher in UD than BD (p = 0.03). Conclusions: These results indicate that patterns of whole-brain neural activity to intense happy faces were significantly less distinct from those for neutral faces in BD than in either HC or UD. These findings indicate that pattern recognition approaches can be used to identify abnormal brain activity patterns in patient populations and have promising clinical utility as techniques that can help to discriminate between patients with different psychiatric illnesses.
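The decoding step described above, training a Gaussian process classifier to discriminate two activity patterns and reading off per-trial predictive probabilities, can be sketched with scikit-learn. This uses synthetic "trials by features" data, not the study's fMRI pipeline:

```python
# Synthetic two-condition decoding sketch: a Gaussian process classifier
# separating "happy" from "neutral" pattern vectors, with cross-validated
# accuracy and per-trial predictive probabilities.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n, features = 40, 20                       # hypothetical trials x features
X_happy = rng.normal(0.8, 1.0, size=(n, features))
X_neutral = rng.normal(0.0, 1.0, size=(n, features))
X = np.vstack([X_happy, X_neutral])
y = np.array([1] * n + [0] * n)

clf = GaussianProcessClassifier(random_state=0)
acc = cross_val_score(clf, X, y, cv=5).mean()    # decoding accuracy
proba = clf.fit(X, y).predict_proba(X)[:, 1]     # per-trial predictive probability
```

The study's group comparisons rest on quantities like `acc` and `proba`: lower discrimination accuracy in the patient groups, and predictive probabilities that differ between UD and BD for intense happy faces.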
Abstract:
We investigate energy optimization (minimization) for amplified links. Using a well-established analytic model of the nonlinear signal-to-noise ratio, we show that for a simple amplifier model there are very clear, fiber-independent amplifier gains which minimize the total energy requirement. With a generalized amplifier model we establish the spacing for the optimum power per bit as well as the nonlinear-limited optimum power. An amplifier spacing corresponding to 13 dB gain is shown to be a suitable compromise for practical amplifiers operating at the optimum nonlinear power. © 2014 Optical Society of America.
Abstract:
Traditional wave kinetics describes the slow evolution of systems with many degrees of freedom to equilibrium via numerous weak non-linear interactions, and it fails for a very important class of dissipative (active) optical systems with cyclic gain and losses, such as lasers with non-linear intracavity dynamics. Here we introduce a conceptually new class of cyclic wave systems, characterized by non-uniform double-scale dynamics with strong periodic changes of the energy spectrum and slow evolution from cycle to cycle to a statistically steady state. Taking a practically important example, the random fibre laser, we show that a model describing such a system is close to the integrable non-linear Schrödinger equation and requires a new formalism of wave kinetics, developed here. We derive a non-linear kinetic theory of the laser spectrum, generalizing the seminal linear model of Schawlow and Townes. Experimental results agree with our theory. The work has implications for describing the kinetics of cyclic systems beyond photonics.
Abstract:
Evidence of the relationship between altered cognitive function and depleted Fe status is accumulating in women of reproductive age, but the degree of Fe deficiency associated with negative neuropsychological outcomes remains to be delineated. Data are limited regarding this relationship in university women, in whom optimal cognitive function is critical to academic success. The aim of the present study was to examine the relationship between body Fe, in the absence of Fe-deficiency anaemia, and neuropsychological function in young college women. Healthy, non-anaemic undergraduate women (n 42) provided a blood sample and completed a standardised cognitive test battery consisting of one manual task (Tower of London (TOL), a measure of central executive function) and five computerised tasks (Bakan vigilance task, mental rotation, simple reaction time, immediate word recall and two-finger tapping). Women's body Fe ranged from −4·2 to 8·1 mg/kg. General linear model ANOVA revealed a significant effect of body Fe on TOL planning time (P = 0.002). Spearman's correlation coefficients showed a significant inverse relationship between body Fe and TOL planning time for move categories 4 (r = −0.39, P = 0.01) and 5 (r = −0.47, P = 0.002). Performance on the computerised cognitive tasks was not affected by body Fe level. These findings suggest that Fe status in the absence of anaemia is positively associated with central executive function in otherwise healthy college women. Copyright © The Authors 2012.
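A general linear model ANOVA of the kind reported above reduces, for a single categorical factor, to a one-way F-test. The sketch below uses entirely synthetic planning-time data grouped by hypothetical Fe-status levels; group names, means, and sizes are my illustrative assumptions, not the study's data:

```python
# One-way ANOVA sketch: does an outcome (e.g. TOL planning time) differ
# across hypothetical body-Fe groups? Synthetic data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
low_fe = rng.normal(12.0, 2.0, 15)    # hypothetical planning times (s)
mid_fe = rng.normal(10.5, 2.0, 15)
high_fe = rng.normal(9.0, 2.0, 12)

F, p = stats.f_oneway(low_fe, mid_fe, high_fe)
print(f"F = {F:.2f}, P = {p:.4f}")
```

The study additionally reports Spearman correlations because body Fe was treated as a continuous variable for the move-category analyses.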
Abstract:
The objective of this study is to demonstrate the use of the weak form partial differential equation (PDE) method for finite-element (FE) modeling of a new constitutive relation without the need for user subroutine programming. Viscoelastic asphalt mixtures were modeled by the weak form PDE-based FE method as examples in the paper. A solid-like generalized Maxwell model was used to represent the deformation mechanism of a viscoelastic material, the constitutive relations of which were derived and implemented in the weak form PDE module of Comsol Multiphysics, a commercial FE program. The weak form PDE modeling of viscoelasticity was verified by comparing Comsol and Abaqus simulations, which employed the same loading configurations and material property inputs in virtual laboratory test simulations. Both produced identical results in terms of axial and radial strain responses. The weak form PDE modeling of viscoelasticity was further validated by comparing the weak form PDE predictions with real laboratory test results of six types of asphalt mixtures with two air void contents and three aging periods. The viscoelastic material properties, such as the coefficients of a Prony series model for the relaxation modulus, were obtained by conversion from the master curves of dynamic modulus and phase angle. Strain responses of compressive creep tests at three temperatures and cyclic load tests were predicted using the weak form PDE modeling and found to be comparable with the measurements of the real laboratory tests. It was demonstrated that the weak form PDE-based FE modeling can serve as an efficient method to implement new constitutive models and can free engineers from user subroutine programming.
Abstract:
Large-scale mechanical products, such as aircraft and rockets, consist of large numbers of small components, which introduce additional difficulty for assembly accuracy and error estimation. Planar surfaces, as key product characteristics, are usually utilised for positioning small components in the assembly process. This paper focuses on assembly accuracy analysis of small components with planar surfaces in large-scale products. To evaluate the accuracy of the assembly system, an error propagation model for measurement error and fixture error is proposed, based on the assumption that all errors are normally distributed. In this model, the general coordinate vector is adopted to represent the position of the components. The error transmission functions are simplified into a linear model, and the coordinates of the reference points are composed of a theoretical value and a random error. The installation of a Head-Up Display is taken as an example to analyse the assembly error of small components based on the propagation model. The result shows that the final coordination accuracy is mainly determined by the measurement error of the planar surface of the small components. To reduce the uncertainty of the plane measurement, an evaluation index of the measurement strategy is presented. This index reflects the distribution of the sampling-point set and can be calculated from an inertia moment matrix. Finally, a practical application is introduced to validate the evaluation index.
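When the error transmission functions are linearised and all source errors are normal, the output covariance follows directly from the linear map, and a Monte Carlo run should reproduce it. The sketch below uses an invented 2x2 transmission matrix and invented error magnitudes purely for illustration; it is not the paper's model:

```python
# Propagating normally distributed measurement and fixture errors through an
# assumed linearised error-transmission matrix J, and checking the Monte Carlo
# covariance against the analytic J @ Sigma @ J.T.
import numpy as np

rng = np.random.default_rng(6)
J = np.array([[1.0, 0.4],     # hypothetical linearised transmission matrix
              [0.0, 1.2]])
sigma_meas, sigma_fix = 0.05, 0.02   # illustrative std devs (e.g. mm)

n = 100_000
src = np.column_stack([rng.normal(0, sigma_meas, n),
                       rng.normal(0, sigma_fix, n)])
out = src @ J.T                      # linear propagation of the random errors

cov_mc = np.cov(out, rowvar=False)
cov_analytic = J @ np.diag([sigma_meas**2, sigma_fix**2]) @ J.T
```

The agreement between the two covariance matrices is the basic consistency check behind linearised error-propagation models of this kind.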
Abstract:
Generalizing from his experience in solving practical problems, Koopmans set about devising the linear activity-analysis model. He was surprised to find that the economics of his day possessed no unified, sufficiently exact theory of production or system of concepts for it. In his pioneering study he therefore also laid down, as a theoretical framework for the linear activity-analysis model, the foundations of an axiomatic production theory resting on the concept of technological sets. He is credited with the exact definition of the concepts of production efficiency and efficiency prices, and with proving their mutually presupposing relationship within the linear activity-analysis model. Koopmans treated the present-day, purely technical definition of efficiency only as a special case; his aim was to introduce and analyse the concept of economic efficiency. In this paper we use the duality theorems of linear programming to reconstruct his results on the latter. We show, first, that his proofs are equivalent to proving the duality theorems of linear programming and, second, that economic efficiency prices are in fact shadow prices in today's sense. We also point out that the model he formulated to interpret economic efficiency can be regarded as a direct predecessor of the Arrow–Debreu–McKenzie models of general equilibrium theory, containing almost every essential element and concept of them: equilibrium prices are nothing other than Koopmans' efficiency prices. Finally, we reinterpret Koopmans' model as a possible tool for the microeconomic description of enterprise technology. Journal of Economic Literature (JEL) codes: B23, B41, C61, D20, D50.
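The identification of efficiency prices with shadow prices can be seen concretely in any small linear programme: the dual variables returned by the solver are the marginal values of the resources. The example below is a standard textbook production LP of my choosing, not Koopmans' original formulation, solved with SciPy's HiGHS interface:

```python
# A production LP: maximise profit 3*x1 + 5*x2 subject to resource limits.
# The dual values (marginals) of the binding constraints are the shadow
# prices: the worth of one extra unit of each resource at the optimum.
from scipy.optimize import linprog

c = [-3.0, -5.0]              # linprog minimises, so negate the profit
A_ub = [[1.0, 0.0],           # resource 1:  x1        <= 4
        [0.0, 2.0],           # resource 2:      2*x2  <= 12
        [3.0, 2.0]]           # resource 3: 3*x1 + 2*x2 <= 18
b_ub = [4.0, 12.0, 18.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
shadow_prices = -res.ineqlin.marginals   # dual variables of the constraints
print(f"optimum = {-res.fun}, shadow prices = {shadow_prices}")
```

The slack resource gets a zero shadow price and the binding resources get positive ones, which is exactly the complementary-slackness relationship the paper reconstructs from Koopmans' efficiency-price arguments.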
Abstract:
Prices of U.S. Treasury securities vary over time and across maturities. When the market in Treasurys is sufficiently complete and frictionless, these prices may be modeled by a function of time and maturity. A cross-section of this function with time held fixed is called the yield curve; the aggregate of these sections is the evolution of the yield curve. This dissertation studies aspects of this evolution. There are two complementary approaches to the study of yield-curve evolution here. The first is principal components analysis; the second is wavelet analysis. In both approaches, both the time and maturity variables are discretized. In principal components analysis, the vectors of yield-curve shifts are viewed as observations of a multivariate normal distribution. The resulting covariance matrix is diagonalized, and the resulting eigenvalues and eigenvectors (the principal components) are used to draw inferences about the yield-curve evolution. In wavelet analysis, the vectors of shifts are resolved into hierarchies of localized fundamental shifts (wavelets) that leave specified global properties invariant (average change and duration change). The hierarchies relate to the degree of localization, with movements restricted to a single maturity at the base and general movements at the apex. Second-generation wavelet techniques allow better adaptation of the model to economic observables. Statistically, the wavelet approach is inherently nonparametric, while the wavelets themselves are better adapted to describing a complete market. Principal components analysis provides information on the dimension of the yield-curve process. While there is no clear demarcation between operative factors and noise, the top six principal components pick up 99% of total interest-rate variation 95% of the time. An economically justified basis for this process is hard to find; for example, a simple linear model will not suffice for the first principal component, and the shape of this component is nonstationary. Wavelet analysis works more directly with yield-curve observations than principal components analysis. In fact, the complete process from bond data to multiresolution is presented, including the dedicated Perl programs and the details of the portfolio metrics and the specially adapted wavelet construction. The result is more robust statistics, which provide balance to the more fragile principal components analysis.
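The principal-components step described above, diagonalizing the covariance of yield-curve shifts and reading off how much variation the top components capture, can be sketched in a few lines. The data here are simulated level-plus-slope shifts of my own construction, not the dissertation's Treasury data:

```python
# PCA of simulated daily yield-curve shifts: a dominant parallel ("level")
# factor plus a smaller "slope" factor plus noise, showing how the top few
# eigenvalues of the shift covariance capture most of the variation.
import numpy as np

rng = np.random.default_rng(2)
days, maturities = 500, 10
level = rng.normal(size=(days, 1)) * 0.05                            # parallel shifts
slope = rng.normal(size=(days, 1)) * 0.02 * np.linspace(-1, 1, maturities)
noise = rng.normal(size=(days, maturities)) * 0.002
shifts = level + slope + noise

cov = np.cov(shifts, rowvar=False)
eigvals = np.linalg.eigvalsh(cov)[::-1]          # eigenvalues, descending
explained = eigvals.cumsum() / eigvals.sum()     # cumulative variance explained
```

With only two real factors in the simulation, the first two components dominate; with market data, the dissertation finds six components are needed to reach 99% of variation 95% of the time.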
Abstract:
Chronic disease affects 80% of adults over the age of 65 and is expected to increase in prevalence. To address the burden of chronic disease, self-management programs have been developed to increase self-efficacy and improve quality of life by reducing or halting disease symptoms. Two programs that have been developed to address chronic disease are the Chronic Disease Self-Management Program (CDSMP) and Tomando Control de su Salud (TCDS). CDSMP and TCDS both focus on improving participant self-efficacy but use different curricula, as TCDS is culturally tailored for the Hispanic population. Few studies have evaluated the effectiveness of CDSMP and TCDS when translated to community settings. In addition, little is known about the correlation between demographic, baseline health status, and psychosocial factors and completion of either CDSMP or TCDS. This study used secondary data collected by agencies of the Healthy Aging Regional Collaborative from 10/01/2008 to 12/31/2010. The aims of this study were to examine six-week differences in self-efficacy, time spent performing physical activity, and social/role activity limitations, and to identify correlates of program completion using baseline demographic and psychosocial factors. To examine whether differences existed, a general linear model was used. Additionally, logistic regression was used to examine correlates of program completion. Study findings show that all measures improved at week six. For CDSMP, self-efficacy to manage disease (p = .001), self-efficacy to manage emotions (p = .026), social/role activity limitations (p = .001), and time spent walking (p = .008) were statistically significant. For TCDS, self-efficacy to manage disease (p = .006), social/role activity limitations (p = .001), and time spent walking (p = .016) and performing other aerobic activity (p = .005) were significant. For CDSMP, no correlates predicting program completion were found to be significant. For TCDS, participants who were male (OR = 2.3, 95% CI: 1.15–4.66), from Broward County (OR = 2.3, 95% CI: 1.27–4.25), or living alone (OR = 2.0, 95% CI: 1.29–3.08) were more likely to complete the program. CDSMP and TCDS, when implemented through a collaborative effort, can result in improvements for participants. Effective chronic disease management can improve health and quality of life and reduce health care expenditures among older adults.
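Odds ratios like those reported for program completion come from exponentiating logistic-regression coefficients. The sketch below simulates binary predictors with a true odds ratio near 2 and recovers it; the variable names and data are illustrative, not the study's dataset:

```python
# Logistic-regression odds ratios on synthetic data: two binary predictors
# (e.g. "male", "lives alone") each simulated with a true odds ratio of ~2
# for completing a program.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 500
male = rng.integers(0, 2, n)
lives_alone = rng.integers(0, 2, n)
logit = -0.5 + np.log(2) * male + np.log(2) * lives_alone
completed = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([male, lives_alone])
clf = LogisticRegression(C=1e6).fit(X, completed)   # large C ~ unpenalized
odds_ratios = np.exp(clf.coef_[0])                  # OR per predictor
```

Exponentiating the endpoints of a coefficient's confidence interval gives the OR confidence intervals the abstract reports.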
Abstract:
The adverse health effects of long-term exposure to lead are well established, with major uptake into the human body occurring mainly through oral ingestion by young children. Lead-based paint was frequently used in homes built before 1978, particularly in inner-city areas. Minority populations experience the effects of lead poisoning disproportionately. Lead-based paint abatement is costly. In the United States, residents of about 400,000 homes, occupied by 900,000 young children, lack the means to correct lead-based paint hazards. The magnitude of this problem demands research on affordable methods of hazard control. One method is encapsulation, defined as any covering or coating that acts as a permanent barrier between the lead-based paint surface and the environment. Two encapsulants were tested for reliability and effective life span through an accelerated lifetime experiment that applied stresses exceeding those encountered under normal use conditions. The resulting time-to-failure data were used to extrapolate the failure time under conditions of normal use. Statistical analysis and models of the test data allow forecasting of long-term reliability relative to the 20-year encapsulation requirement. Typical housing-material specimens simulating walls and doors coated with lead-based paint were overstressed before encapsulation. A second, un-aged set was also tested. Specimens were monitored after the stress test with a surface chemical testing pad to identify the presence of lead breaking through the encapsulant. The graphical analysis proposed by Shapiro and Meeker and the general log-linear model developed by Cox were used to obtain results. Findings for the 80% reliability time to failure varied, with close to 21 years of life under normal use conditions for encapsulant A. The application of product A on the aged gypsum and aged wood substrates yielded slightly lower times. Encapsulant B had an 80% reliable life of 19.78 years. This study reveals that encapsulation technologies can offer safe and effective control of lead-based paint hazards and may be less expensive than other options. The U.S. Department of Health and Human Services and the CDC are committed to eliminating childhood lead poisoning by 2010. This ambitious target is feasible, provided there is an efficient application of innovative technology, a goal to which this study aims to contribute.
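An "80% reliability time to failure" is the time at which 20% of units have failed, read off a fitted lifetime distribution. The sketch below fits a Weibull distribution (a common choice for such data; the study itself used Shapiro-Meeker graphical analysis and Cox's log-linear model) to simulated failure times of my own making:

```python
# Fit a Weibull lifetime distribution to simulated failure times and read
# off the time at which reliability is still 80% (i.e. the 20th percentile
# of the failure-time distribution).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
true_shape, true_scale = 2.5, 25.0            # invented "years to failure"
failures = stats.weibull_min.rvs(true_shape, scale=true_scale,
                                 size=60, random_state=rng)

shape, loc, scale = stats.weibull_min.fit(failures, floc=0)
t_80 = stats.weibull_min.ppf(0.20, shape, loc=loc, scale=scale)
print(f"80%-reliable life estimate: {t_80:.1f} years")
```

In an accelerated test, this fit would be done at each stress level and the percentile extrapolated back to normal-use conditions via the stress-life (log-linear) relationship.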
Abstract:
The purpose of the study was to determine the degree of relationship among GRE scores, undergraduate GPA (UGPA), and success in graduate school, as measured by first-year graduate GPA (FGPA), cumulative graduate GPA, and degree attainment status. A second aim of the study was to determine whether the relationships between the composite predictor (GRE scores and UGPA) and the three success measures differed by race/ethnicity and sex. A total of 7,367 graduate student records (master's: 5,990; doctoral: 1,377) from 2000 to 2010 were used to evaluate the relationships among GRE scores, UGPA, and the three success measures. Pearson's correlation, multiple linear and logistic regression, and hierarchical multiple linear and logistic regression analyses were performed to answer the research questions. The results of the correlational analyses differed by degree level. For master's students, the ETS-proposed prediction that GRE scores are valid predictors of first-year graduate GPA was supported by the findings of the present study; however, for doctoral students, the proposed prediction was only partially supported. Regression and correlational analyses indicated that UGPA was the variable that consistently predicted all three success measures for both degree levels. The hierarchical multiple linear and logistic regression analyses indicated that at the master's degree level, White students with higher GRE Quantitative Reasoning Test scores were more likely to attain a degree than Asian Americans, while international students with higher UGPA were more likely to attain a degree than White students. The relationships between the three predictors and the three success measures were not significantly different between men and women for either degree level. Findings have implications both for practice and research.
They will provide graduate school administrators with institution-specific validity data for UGPA and the GRE scores, which can be referenced in making admission decisions, while they will provide empirical and professionally defensible evidence to support the current practice of using UGPA and GRE scores for admission considerations. In addition, new evidence relating to differential predictions will be useful as a resource reference for future GRE validation researchers.
Abstract:
Prior to 2000, there were fewer than 1.6 million students enrolled in at least one online course. By fall 2010, student enrollment in online distance education showed a phenomenal 283% increase, to 6.1 million. Two years later, this number had grown to 7.1 million. In light of this significant growth and skepticism about quality, there have been calls for greater oversight of this format of educational delivery. Accrediting bodies tasked with this oversight have developed guidelines and standards for online education. There is a lack of empirical studies that examine the relationship between accrediting standards and student success. The purpose of this study was to examine the relationship between student success and the presence in online courses of two Southern Association of Colleges and Schools Commission on Colleges (SACSCOC) standards for online education: (a) student support services and (b) curriculum and instruction. An original 24-item survey with an overall reliability coefficient of .94 was administered to students (N = 464) at Florida International University, enrolled in 24 university-wide undergraduate online courses during fall 2014, who rated the presence of these standards in their online courses. The general linear model was utilized to analyze the data. The results of the study indicated that the two standards, student support services and curriculum and instruction, were both significantly and positively correlated with student success, but with small R² values and strengths of association less than .35 and .20, respectively. Mixed results were produced from chi-square tests for differences in student success between higher- and lower-rated online courses when controlling for various covariates such as discipline, gender, race/ethnicity, GPA, age, and number of online courses previously taken.
A multiple linear regression analysis revealed that the curriculum and instruction standard was the only variable that accounted for a significant amount of unique variance in student success. Another regression test revealed that no significant interaction effect exists between the two SACSCOC standards and GPA in predicting student success. The results of this study are useful for administrators, faculty, and researchers who are interested in accreditation standards for online education and how these standards relate to student success.
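Testing whether an interaction term adds predictive power, as in the SACSCOC-standard-by-GPA test above, amounts to comparing R² between a main-effects model and one that adds the product term. The sketch below uses synthetic data with no true interaction; the variable names are illustrative stand-ins, not the study's measures:

```python
# Compare R^2 of a main-effects regression against one with an added
# interaction term, on synthetic data generated WITHOUT an interaction:
# the R^2 gain from the product term should be negligible.
import numpy as np

rng = np.random.default_rng(7)
n = 300
rating = rng.normal(3.5, 0.5, n)   # hypothetical standard rating
gpa = rng.normal(3.0, 0.4, n)
success = 1.0 + 0.6 * rating + 0.3 * gpa + rng.normal(0, 0.5, n)

def r_squared(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])     # add intercept
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

r2_main = r_squared(np.column_stack([rating, gpa]), success)
r2_int = r_squared(np.column_stack([rating, gpa, rating * gpa]), success)
```

A formal version of this comparison is the nested-model F-test on the change in R², which is what "no significant interaction effect" reports.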