974 results for Quasi-Bilateral Generating Function
Abstract:
Introduction: The Trendelenburg Test (TT) is used to assess the functional strength of the hip abductor muscles (HABD), their ability to control frontal plane motion of the pelvis, and the ability of the lumbopelvic complex to transfer load into single leg stance. Rationale: Although a standard method to perform the test has been described for use within clinical populations, no study has directly investigated Trendelenburg’s hypotheses. Purpose: To investigate the validity of the TT using an ultrasound-guided nerve block (UNB) of the superior gluteal nerve and determine whether the reduction in HABD strength would result in the theorized mechanical compensatory strategies measured during the TT. Methods: Quasi-experimental design using a convenience sample of nine healthy males. Only subjects with no current or previous injury to the lumbar spine, pelvis, or lower extremities, and no previous surgeries, were included. Force dynamometry was used to evaluate HABD strength (%BW). 2D mechanics were used to evaluate contralateral pelvic drop (cMPD), change in contralateral pelvic drop (∆cMPD), ipsilateral hip adduction (iHADD), and ipsilateral trunk sway (TRUNK), measured in degrees (°). All measures were collected before and after a UNB of the superior gluteal nerve performed by an interventional radiologist. Results: Subjects’ median age was 31 yrs (IQR: 22-32 yrs) and median weight was 73 kg (IQR: 67-81 kg). The UNB produced an average 52% reduction in HABD strength (z=2.36, p=0.02). No differences were found in cMPD or ∆cMPD (z=0.01, p=0.99; z=-0.67, p=0.49). Individual changes in biomechanics showed no consistency between subjects and non-systematic changes across the group. One subject demonstrated the mechanical compensations described by Trendelenburg. Discussion: The TT should not be used as a screening measure for HABD strength in populations demonstrating strength greater than 30%BW, but should be reserved for populations with marked HABD weakness. Importance: This study presents data regarding a critical level of HABD strength required to support the pelvis during the TT.
Abstract:
Maternally inherited diabetes and deafness (MIDD) is a maternally inherited syndrome caused by the mitochondrial DNA (mtDNA) nucleotide mutation A3243G. It affects various organs, including the eye, with external ophthalmoparesis, ptosis, and bilateral macular pattern dystrophy [1, 2]. The prevalence of retinal involvement in MIDD is high, with 50% to 85% of patients exhibiting some macular changes [1]. Those changes, however, can vary dramatically between patients and within families depending on the percentage of mutated mtDNA in the retina, making it difficult to predict an individual’s visual prognosis...
Abstract:
Reduced mismatch negativity (MMN) in response to auditory change is a well-established finding in schizophrenia and has been shown to correlate with impaired daily functioning rather than with the hallmark signs and symptoms of the disorder. In this study, we investigated (1) whether the relationship between reduced MMN and impaired daily functioning is mediated by cortical volume loss in temporal and frontal brain regions in schizophrenia, and (2) whether this relationship varies with the type of auditory deviant generating the MMN. MMN in response to duration, frequency, and intensity deviants was recorded from 18 schizophrenia subjects and 18 pairwise age- and gender-matched healthy subjects. Patients’ levels of global functioning were rated on the Social and Occupational Functioning Assessment Scale. High-resolution structural magnetic resonance scans were acquired to generate average cerebral cortex and temporal lobe models using cortical pattern matching. This technique allows accurate statistical comparison and averaging of cortical measures across subjects despite wide variations in gyral patterns. MMN amplitude was reduced in schizophrenia patients and correlated with their impaired day-to-day functioning. In patients only, bilateral gray matter reduction in Heschl’s gyrus, as well as in motor and executive regions of the frontal cortex, correlated with reduced MMN amplitude in response to frequency deviants, while reduced gray matter in the right Heschl’s gyrus also correlated with reduced MMN to duration deviants. Our findings further support the importance of MMN reduction in schizophrenia by linking frontotemporal cerebral gray matter pathology to an automatically generated event-related potential index of daily functioning.
Abstract:
This study is the first to analyze genetic and environmental factors that affect brain fiber architecture and its genetic linkage with cognitive function. We assessed white matter integrity voxelwise using diffusion tensor imaging at high magnetic field (4 Tesla) in 92 identical and fraternal twins. White matter integrity, quantified using fractional anisotropy (FA), was used to fit structural equation models (SEM) at each point in the brain, generating three-dimensional maps of heritability. We visualized the anatomical profile of correlations between white matter integrity and full-scale, verbal, and performance intelligence quotients (FIQ, VIQ, and PIQ). White matter integrity (FA) was under strong genetic control and was highly heritable in bilateral frontal (a² = 0.55, p = 0.04, left; a² = 0.74, p = 0.006, right), bilateral parietal (a² = 0.85, p < 0.001, left; a² = 0.84, p < 0.001, right), and left occipital (a² = 0.76, p = 0.003) lobes, and was correlated with FIQ and PIQ in the cingulum, optic radiations, superior fronto-occipital fasciculus, internal capsule, callosal isthmus, and the corona radiata (p = 0.04 for FIQ and p = 0.01 for PIQ, corrected for multiple comparisons). In a cross-trait mapping approach, common genetic factors mediated the correlation between IQ and white matter integrity, suggesting a common physiological mechanism for both and common genetic determination. These genetic brain maps reveal heritable aspects of white matter integrity and should expedite the discovery of single-nucleotide polymorphisms affecting fiber connectivity and cognition.
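The a² values reported above are heritability estimates obtained by decomposing twin-pair covariance into additive genetic (A), common environmental (C), and unique environmental (E) components. As a rough illustration of where such numbers come from, and not a reproduction of the authors' voxelwise structural equation modelling, the following Python sketch applies the simpler Falconer approximation to simulated FA values at a single voxel; all data and variable names are hypothetical.

```python
# Minimal sketch (not the authors' voxelwise SEM pipeline): Falconer-style
# approximation of heritability a^2 from FA values of MZ and DZ twin pairs.
# The simulated data and variable names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def pair_correlation(pairs):
    """Correlation between twin 1 and twin 2 FA values across pairs."""
    return np.corrcoef(pairs[:, 0], pairs[:, 1])[0, 1]

# Simulated FA for (n_pairs, 2) arrays of MZ and DZ twin pairs at one voxel.
mz_fa = rng.multivariate_normal([0.45, 0.45], [[0.01, 0.008], [0.008, 0.01]], size=46)
dz_fa = rng.multivariate_normal([0.45, 0.45], [[0.01, 0.004], [0.004, 0.01]], size=46)

r_mz = pair_correlation(mz_fa)
r_dz = pair_correlation(dz_fa)

a2 = 2 * (r_mz - r_dz)   # additive genetic variance (heritability)
c2 = 2 * r_dz - r_mz     # shared environment
e2 = 1 - r_mz            # unique environment + measurement error

print(f"a^2 = {a2:.2f}, c^2 = {c2:.2f}, e^2 = {e2:.2f}")
```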
Abstract:
In this paper, we first recast the generalized symmetric eigenvalue problem, where the underlying matrix pencil consists of symmetric positive definite matrices, into an unconstrained minimization problem by constructing an appropriate cost function. We then extend it to the case of multiple eigenvectors using an inflation technique. Based on this asymptotic formulation, we derive a quasi-Newton-based adaptive algorithm for estimating the required generalized eigenvectors in the data case. The resulting algorithm is modular and parallel, and it is globally convergent with probability one. We also analyze the effect of inexact inflation on the convergence of this algorithm and that of inexact knowledge of one of the matrices (in the pencil) on the resulting eigenstructure. Simulation results demonstrate that the performance of this algorithm is almost identical to that of the rank-one updating algorithm of Karasalo. Further, the performance of the proposed algorithm has been found to remain stable even over one million updates without suffering from any error accumulation problems.
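To make the recasting concrete: for a symmetric positive definite pencil (A, B), an extreme generalized eigenpair can be found by unconstrained minimization of the Rayleigh quotient. The sketch below uses a generic quasi-Newton routine (BFGS) on randomly generated matrices and checks the result against a direct solver; it only illustrates the idea and is not the authors' specific cost function, inflation step, or adaptive sample-by-sample algorithm.

```python
# Minimal sketch (not the authors' adaptive algorithm): the smallest
# generalized eigenpair of a symmetric positive definite pencil (A, B)
# can be found by quasi-Newton minimization of the Rayleigh quotient
# r(x) = (x^T A x) / (x^T B x).  Matrices below are randomly generated.
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 6
M1, M2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A = M1 @ M1.T + n * np.eye(n)   # symmetric positive definite
B = M2 @ M2.T + n * np.eye(n)   # symmetric positive definite

def rayleigh(x):
    return (x @ A @ x) / (x @ B @ x)

res = minimize(rayleigh, rng.standard_normal(n), method="BFGS")
x = res.x / np.sqrt(res.x @ B @ res.x)        # B-normalized eigenvector estimate

lam_min = eigh(A, B, eigvals_only=True)[0]    # reference from a direct solver
print(f"quasi-Newton estimate: {rayleigh(x):.6f}, direct solver: {lam_min:.6f}")
```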
Abstract:
Thin films are developed by dispersing carbon black nanoparticles and carbon nanotubes (CNTs) in an epoxy polymer. The films show a large variation in electrical resistance when subjected to quasi-static and dynamic mechanical loading. This phenomenon is attributed to the change in the band gap of the CNTs due to the applied strain, and also to the change in the volume fraction of the constituent phases in the percolation network. Under quasi-static loading, the films show a nonlinear response. This nonlinearity in the response of the films is primarily attributed to the pre-yield softening of the epoxy polymer. The electrical resistance of the films is found to be strongly dependent on the magnitude and frequency of the applied dynamic strain, induced by a piezoelectric substrate. Interestingly, the resistance variation is found to be a linear function of frequency and dynamic strain. Samples with a small concentration of just 0.57% CNT show a sensitivity as high as 2.5% MPa⁻¹ for static mechanical loading. A mathematical model based on Bruggeman's effective medium theory is developed to better understand the experimental results. Dynamic mechanical loading experiments reveal a sensitivity as high as 0.007% Hz⁻¹ at a constant small-amplitude vibration and up to 0.13% per microstrain at 0-500 Hz vibration. Potential applications of such thin films include highly sensitive strain sensors, accelerometers, artificial neural networks, artificial skin and polymer electronics.
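For readers unfamiliar with effective medium modelling: Bruggeman's symmetric theory treats each phase as embedded in the effective medium itself and solves a self-consistency equation for the effective conductivity. The Python sketch below solves the standard two-phase, three-dimensional form of that equation; the conductivities and filler fractions are illustrative placeholders, not the strain-dependent formulation developed in the paper.

```python
# Minimal sketch (assumptions, not the paper's model): the symmetric
# Bruggeman effective medium equation for a two-phase composite,
#   f*(s1 - se)/(s1 + 2*se) + (1 - f)*(s2 - se)/(s2 + 2*se) = 0,
# solved for the effective conductivity se as a function of filler fraction f.
# Conductivity values below are illustrative, not measured film properties.
import numpy as np
from scipy.optimize import brentq

def effective_conductivity(f, s_filler, s_matrix):
    """Solve the Bruggeman self-consistency equation for se."""
    def g(se):
        return (f * (s_filler - se) / (s_filler + 2 * se)
                + (1 - f) * (s_matrix - se) / (s_matrix + 2 * se))
    return brentq(g, min(s_filler, s_matrix), max(s_filler, s_matrix))

for f in [0.1, 0.3, 0.4, 0.6]:
    se = effective_conductivity(f, s_filler=1e4, s_matrix=1e-10)  # S/m
    print(f"filler fraction {f:.2f} -> effective conductivity {se:.3e} S/m")
```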
Abstract:
We study, by means of experiments and Monte Carlo simulations, the scattering of light in random media, to determine the distance up to which photons travel along almost undeviated paths within a scattering medium and are therefore capable of casting a shadow of an opaque inclusion embedded within the medium. Such photons are isolated by polarisation discrimination, wherein the plane of linear polarisation of the input light is continuously rotated and the polarisation-preserving component of the emerging light is extracted by means of a Fourier transform. This technique is a software implementation of lock-in detection. We find that images may be recovered to a depth far in excess of that predicted by the diffusion theory of photon propagation. To understand our experimental results, we perform Monte Carlo simulations to model the random walk behaviour of the multiply scattered photons. We present a new definition of a diffusing photon in terms of the memory of its initial direction of propagation, which we then quantify in terms of an angular correlation function. This redefinition yields the penetration depth of the polarisation-preserving photons. Based on these results, we have formulated a model to understand shadow formation in a turbid medium, the predictions of which are in good agreement with our experimental results.
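As a rough sketch of the kind of angular correlation the abstract refers to, the following Monte Carlo snippet tracks how the mean cosine between a photon's current and initial propagation directions decays with the number of scattering events, using a Henyey-Greenstein phase function. It is a simplified illustration with arbitrary parameters, not the authors' simulation or their specific definition of a diffusing photon.

```python
# Minimal sketch (assumptions, not the paper's full simulation): Monte Carlo
# tracking of the angular correlation <cos(theta_k)> between a photon's
# current direction and its initial direction after k scattering events,
# with Henyey-Greenstein scattering of anisotropy g.
import numpy as np

rng = np.random.default_rng(2)

def sample_hg_cos(g):
    """Sample the cosine of the scattering angle from Henyey-Greenstein."""
    u = rng.random()
    if g == 0:
        return 2 * u - 1
    return (1 + g**2 - ((1 - g**2) / (1 - g + 2 * g * u))**2) / (2 * g)

def scatter(d, g):
    """Rotate direction vector d by one Henyey-Greenstein scattering event."""
    cos_t = sample_hg_cos(g)
    sin_t = np.sqrt(1 - cos_t**2)
    phi = 2 * np.pi * rng.random()
    if abs(d[2]) > 0.99999:                       # nearly parallel to z-axis
        return np.array([sin_t * np.cos(phi), sin_t * np.sin(phi),
                         np.sign(d[2]) * cos_t])
    denom = np.sqrt(1 - d[2]**2)
    return np.array([
        sin_t * (d[0] * d[2] * np.cos(phi) - d[1] * np.sin(phi)) / denom + d[0] * cos_t,
        sin_t * (d[1] * d[2] * np.cos(phi) + d[0] * np.sin(phi)) / denom + d[1] * cos_t,
        -sin_t * np.cos(phi) * denom + d[2] * cos_t,
    ])

n_photons, n_steps, g = 2000, 20, 0.9
correlation = np.zeros(n_steps)
for _ in range(n_photons):
    d0 = np.array([0.0, 0.0, 1.0])                # initial propagation direction
    d = d0.copy()
    for k in range(n_steps):
        d = scatter(d, g)
        correlation[k] += d @ d0
correlation /= n_photons
print("angular correlation after 1, 5, 10, 20 events:",
      correlation[[0, 4, 9, 19]].round(3))        # theory: g**k for HG scattering
```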
Abstract:
A modeling paradigm is proposed for covariate, variance, and working correlation structure selection for longitudinal data analysis. Appropriate selection of covariates is pertinent to correct variance modeling, and selecting the appropriate covariates and variance function is vital to correlation structure selection. This leads to a stepwise model selection procedure that deploys a combination of different model selection criteria. Although these criteria share a common theoretical root based on approximating the Kullback-Leibler distance, they are designed to address different aspects of model selection and have different merits and limitations. For example, the extended quasi-likelihood information criterion (EQIC) with a covariance penalty performs well for covariate selection even when the working variance function is misspecified, but EQIC contains little information on correlation structures. The proposed model selection strategies are outlined and a Monte Carlo assessment of their finite sample properties is reported. Two longitudinal studies are used for illustration.
Abstract:
We propose a simple method of constructing quasi-likelihood functions for dependent data based on conditional mean-variance relationships, and apply the method to estimating the fractal dimension from box-counting data. Simulation studies were carried out to compare this method with traditional methods. We also applied this technique to real data from fishing grounds in the Gulf of Carpentaria, Australia.
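For orientation, the quantity being modelled is the box count N(ε), the number of boxes of side ε needed to cover the data; the fractal dimension is estimated from how log N(ε) scales with log ε. The sketch below shows the traditional ordinary least-squares fit to log-log box-counting data, the baseline against which a quasi-likelihood approach would be compared, on a synthetic point set; it does not implement the paper's quasi-likelihood estimator.

```python
# Minimal sketch (assumptions, not the authors' quasi-likelihood estimator):
# ordinary least-squares box counting on a 2-D point set.  The fractal
# dimension is the negative slope of log N(eps) versus log eps, where N(eps)
# is the number of occupied boxes of side eps.  The point set is synthetic.
import numpy as np

rng = np.random.default_rng(3)
# Synthetic point cloud: a random-walk trace rescaled to the unit square.
steps = rng.standard_normal((20000, 2)) * 0.002
points = np.cumsum(steps, axis=0)
points = (points - points.min(0)) / (points.max(0) - points.min(0))

def box_count(points, eps):
    """Number of boxes of side eps containing at least one point."""
    idx = np.floor(points / eps).astype(int)
    return len(np.unique(idx, axis=0))

eps_values = 2.0 ** -np.arange(2, 8)
counts = np.array([box_count(points, e) for e in eps_values])

slope, _ = np.polyfit(np.log(eps_values), np.log(counts), 1)
print(f"estimated box-counting dimension: {-slope:.2f}")
```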
Abstract:
The approach of generalized estimating equations (GEE) is based on the framework of generalized linear models but allows specification of a working correlation matrix for modeling within-subject correlations. The variance is often assumed to be a known function of the mean. This article investigates the impact of misspecifying the variance function on estimators of the mean parameters for quantitative responses. Our numerical studies indicate that (1) correct specification of the variance function can improve estimation efficiency even if the correlation structure is misspecified; (2) misspecification of the variance function impacts estimators for within-cluster covariates much more than estimators for cluster-level covariates; and (3) if the variance function is misspecified, correct choice of the correlation structure may not necessarily improve estimation efficiency. We illustrate the impacts of different variance functions using a real data set on cow growth.
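For readers who want to see the GEE machinery in action, the sketch below fits a Gaussian GEE to simulated grouped data under two different working correlation structures using statsmodels. It only illustrates the fitting mechanics: in a GEE the variance function is set through the chosen family, and the article's comparisons of misspecified variance functions (and the cow growth data) are not reproduced here; all simulated values are hypothetical.

```python
# Minimal sketch (illustrative only, not the article's simulation study):
# fitting a GEE with different working correlation structures to grouped
# longitudinal data using statsmodels.  The data below are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n_subjects, n_times = 30, 5
subject = np.repeat(np.arange(n_subjects), n_times)
time = np.tile(np.arange(n_times), n_subjects)
subject_effect = np.repeat(rng.normal(0, 1.0, n_subjects), n_times)
y = 2.0 + 0.5 * time + subject_effect + rng.normal(0, 0.5, n_subjects * n_times)

data = pd.DataFrame({"y": y, "time": time, "subject": subject})
exog = sm.add_constant(data[["time"]])

for cov_struct in (sm.cov_struct.Independence(), sm.cov_struct.Exchangeable()):
    model = sm.GEE(data["y"], exog, groups=data["subject"],
                   family=sm.families.Gaussian(), cov_struct=cov_struct)
    result = model.fit()
    print(type(cov_struct).__name__, result.params.round(3).to_dict())
```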
Abstract:
Quasi-likelihood (QL) methods are often used to account for overdispersion in categorical data. This paper proposes a new way of constructing a QL function that stems from the conditional mean-variance relationship. Unlike traditional QL approaches to categorical data, this QL function is, in general, not a scaled version of the ordinary log-likelihood function. A simulation study is carried out to examine the performance of the proposed QL method. Fish mortality data from quantal response experiments are used for illustration.
Abstract:
Irreversible, pressure-induced quasicrystal-to-crystal transitions are observed for the first time in melt-spun alloys, at 4.9 GPa for Al78Mn22 and 9.3 GPa for Al86Mn14, by monitoring the electrical resistivities of these alloys as a function of pressure. Electron diffraction and x-ray measurements are used to show that these quasicrystalline phases have icosahedral point group symmetry. The crystalline phases which appear at high pressures are identified as h.c.p. for Al78Mn22 and orthorhombic for Al86Mn14.
Abstract:
A new geometrical method for generating aperiodic lattices for n-fold non-crystallographic axes is described. The method is based on the self-similarity principle. It makes use of the principles of gnomons to divide the basic triangle of a regular polygon of 2n sides into appropriate isosceles triangles and to generate the minimum set of rhombi required to fill that polygon. The method is applicable to any n-fold non-crystallographic axis. It is first shown how these regular polygons can be obtained and how they can be used to generate aperiodic structures. In particular, the application of this method to the cases of five-fold and seven-fold axes is discussed. The present method indicates that the recursion rule used by others earlier is a restricted one and that several aperiodic lattices with five-fold symmetry can be generated. It is also shown how a limited array of approximately square cells with large dimensions can be detected in a quasilattice, and these are compared with the unit cell dimensions of MnAl6 suggested by Pauling. In addition, the recursion rule for subdividing the three basic rhombi of the seven-fold structure was obtained, and the aperiodic lattice thus generated is also shown.
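For comparison, the kind of restricted recursion rule the abstract refers to for the five-fold case is the well-known subdivision of Robinson triangles (half-rhombi of the two Penrose rhombi) by the golden ratio. The Python sketch below implements that standard rule on a decagonal seed purely as an illustration; it is not the authors' generalized n-fold gnomon construction, and the triangle labels are arbitrary.

```python
# Minimal sketch (illustrative, not the authors' generalized n-fold method):
# the standard five-fold inflation rule.  Robinson triangles are repeatedly
# subdivided using the golden ratio; vertices are stored as complex numbers.
import cmath
import math

PHI = (1 + math.sqrt(5)) / 2

def subdivide(triangles):
    """Apply one inflation step to a list of (kind, A, B, C) triangles."""
    result = []
    for kind, A, B, C in triangles:
        if kind == 0:                       # type-0 triangle splits into two
            P = A + (B - A) / PHI
            result += [(0, C, P, B), (1, P, C, A)]
        else:                               # type-1 triangle splits into three
            Q = B + (A - B) / PHI
            R = B + (C - B) / PHI
            result += [(1, R, C, A), (1, Q, R, B), (0, R, Q, A)]
    return result

# Seed: a wheel of ten type-0 triangles around the origin (a decagon, 2n = 10).
triangles = []
for i in range(10):
    B = cmath.rect(1, (2 * i - 1) * math.pi / 10)
    C = cmath.rect(1, (2 * i + 1) * math.pi / 10)
    if i % 2 == 0:
        B, C = C, B                         # mirror alternate triangles
    triangles.append((0, 0j, B, C))

for step in range(5):
    triangles = subdivide(triangles)
print(f"triangles after 5 inflation steps: {len(triangles)}")
```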
Abstract:
The current state of the practice in Blackspot Identification (BSI) utilizes safety performance functions based on total crash counts to identify transport system sites with potentially high crash risk. This paper postulates that total crash count variation over a transport network is the result of multiple distinct crash-generating processes, including geometric characteristics of the road, spatial features of the surrounding environment, and driver behaviour factors. However, these multiple sources are ignored in current modelling methodologies, both in trying to explain and in trying to predict crash frequencies across sites. Instead, current practice employs models that imply that a single underlying crash-generating process exists. This model mis-specification may lead to attributing crashes to the incorrect sources of contributing factors (e.g. concluding that a crash is predominantly caused by a geometric feature when it is a behavioural issue), which may ultimately lead to inefficient use of public funds and misidentification of true blackspots. This study aims to propose a latent class model consistent with a multiple crash process theory, and to investigate the influence this model has on correctly identifying crash blackspots. We first present the theoretical and corresponding methodological approach, in which a Bayesian Latent Class (BLC) model is estimated assuming that crashes arise from two distinct risk-generating processes: engineering factors and unobserved spatial factors. The Bayesian model is used to incorporate prior information about the contribution of each underlying process to the total crash count. The methodology is applied to the state-controlled roads in Queensland, Australia, and the results are compared to an Empirical Bayes Negative Binomial (EB-NB) model. A comparison of goodness-of-fit measures illustrates significantly improved performance of the proposed model compared to the NB model. The detection of blackspots was also improved when compared to the EB-NB model. In addition, modelling crashes as the result of two fundamentally separate underlying processes reveals more detailed information about unobserved crash causes.
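For context on the benchmark method, conventional EB-NB screening combines a negative binomial safety performance function (SPF) prediction with the observed count at each site. The sketch below shows that standard Empirical Bayes adjustment and a simple potential-for-improvement ranking on hypothetical site data; it is not the Bayesian latent class model proposed in the paper, and the SPF coefficients, dispersion parameter, and traffic data are invented for illustration.

```python
# Minimal sketch (assumptions, not the paper's Bayesian latent class model):
# the standard Empirical Bayes (EB) adjustment used in blackspot screening.
# A negative binomial SPF gives an expected crash count mu_i for each site;
# the EB estimate shrinks the observed count toward mu_i with a weight that
# depends on the NB dispersion parameter phi (variance = mu + mu**2 / phi).
# All site data and SPF coefficients below are hypothetical.
import numpy as np

phi = 1.8                                     # hypothetical NB dispersion parameter
aadt = np.array([4000, 12000, 9000, 25000])   # hypothetical traffic volumes
length_km = np.array([1.2, 0.8, 2.5, 0.5])
observed = np.array([3, 14, 6, 21])           # observed crash counts

# Hypothetical SPF: mu = exp(b0) * AADT^b1 * length
mu = np.exp(-6.5) * aadt**0.85 * length_km

weight = 1.0 / (1.0 + mu / phi)               # EB credibility weight
eb_estimate = weight * mu + (1 - weight) * observed

ranking = np.argsort(eb_estimate - mu)[::-1]  # potential-for-improvement ranking
for site in ranking:
    print(f"site {site}: observed={observed[site]}, SPF mu={mu[site]:.1f}, "
          f"EB={eb_estimate[site]:.1f}")
```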
Abstract:
The purpose of this study is to analyse the development and understanding of the idea of consensus in bilateral dialogues among Anglicans, Lutherans and Roman Catholics. The source material consists of representative dialogue documents from the international, regional and national dialogues from the 1960s until 2006. In general, the dialogue documents argue for agreement/consensus based on commonality or compatibility. Each of the three dialogue processes has specific characteristics and formulates its argument in a unique way. The Lutheran-Roman Catholic dialogue has a particular interest in hermeneutical questions. In the early phases, the documents endeavoured to describe the interpretative principles that would allow the churches to proclaim the Gospel together and to identify the foundation on which agreement in the church is based. This investigation ended up proposing a notion of basic consensus, which later developed into a form of consensus that seeks to embrace, not to dismiss, differences (so-called differentiated consensus). The Lutheran-Roman Catholic agreement is based on a perspectival understanding of doctrine. The Anglican-Roman Catholic dialogue emphasises the correctness of interpretations. The documents consciously look towards a common future, not the separated past. The dialogue's primary interpretative concept is koinonia. The texts develop a hermeneutics of authoritative teaching that has been described as the rule of communion. The Anglican-Lutheran dialogue is characterised by an instrumental understanding of doctrine. Doctrinal agreement is facilitated by the ideas of coherence, continuity and substantial emphasis in doctrine. The Anglican-Lutheran dialogue proposes a form of sufficient consensus that considers a wide set of doctrinal statements and liturgical practices to determine whether an agreement has been reached to a degree that, although not complete, is sufficient for concrete steps towards unity. Chapter V discusses the current challenges of consensus as an ecumenically viable concept. In this part, I argue that the acceptability of consensus as an ecumenical goal is based not only on the understanding of the church but, more importantly, on the understanding of the nature and function of doctrine. The understanding of doctrine has undergone significant changes during the time of the ecumenical dialogues. The major shift has been from a modern paradigm towards a postmodern paradigm. I conclude with proposals towards a form of consensus that would survive philosophical criticism, be theologically valid and be ecumenically acceptable.