994 results for GENERALIZED POISSON STRUCTURES
Abstract:
In this paper, we propose a flexible cure rate survival model in which the number of competing causes of the event of interest follows the Conway-Maxwell-Poisson distribution and the time to the event follows the generalized gamma distribution. This distribution can be used to model survival data whose hazard rate function is increasing, decreasing, bathtub-shaped or unimodal, and it includes several distributions commonly used in lifetime analysis as particular cases. Appropriate matrices are derived in order to evaluate local influence on the parameter estimates under different perturbations, and some global influence measures are also investigated. Finally, a data set from the medical area is analysed.
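As a hedged aside (not taken from the paper itself), the Conway-Maxwell-Poisson count distribution assumed for the number of competing causes can be sketched numerically; the function name, parameter values and series truncation below are our own illustrative choices:

```python
import math

# A numerical sketch of the Conway-Maxwell-Poisson pmf assumed for the number
# of competing causes (our own illustration; names and truncation are ours).
def com_poisson_pmf(j, lam, nu, j_max=100):
    # Normalising constant Z(lam, nu) = sum_k lam**k / (k!)**nu, truncated
    z = sum(lam**k / math.factorial(k)**nu for k in range(j_max))
    return (lam**j / math.factorial(j)**nu) / z

# nu = 1 recovers the ordinary Poisson pmf; nu < 1 gives overdispersion and
# nu > 1 underdispersion relative to the Poisson
p = com_poisson_pmf(2, lam=1.5, nu=1.0)
```

The decay parameter nu is what makes the competing-causes count flexible: a single family spans under- and over-dispersed latent-cause distributions.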
Abstract:
Generalized linear mixed models (GLMMs) provide an elegant framework for the analysis of correlated data. Owing to the non-closed form of the likelihood, GLMMs are often fit by computational procedures such as penalized quasi-likelihood (PQL). Special cases of these models are generalized linear models (GLMs), which are often fit using algorithms like iterative weighted least squares (IWLS). High computational costs and memory constraints often make it difficult to apply these iterative procedures to data sets with a very large number of cases. This paper proposes a computationally efficient strategy based on the Gauss-Seidel algorithm that iteratively fits sub-models of the GLMM to subsetted versions of the data. Additional gains in efficiency are achieved for Poisson models, commonly used in disease mapping problems, because of their special collapsibility property, which allows data reduction through summaries. Convergence of the proposed iterative procedure is guaranteed for canonical link functions. The strategy is applied to investigate the relationship between ischemic heart disease, socioeconomic status and age/gender category in New South Wales, Australia, based on outcome data consisting of approximately 33 million records. A simulation study demonstrates the algorithm's reliability in analyzing a data set with 12 million records for a (non-collapsible) logistic regression model.
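The IWLS building block that such sub-model fits reuse can be sketched in a few lines; below is a minimal illustration for a Poisson GLM with log link (toy data and variable names are ours, not the paper's code):

```python
import numpy as np

# A minimal IWLS sketch for a Poisson GLM with log link (our own toy data and
# variable names; the paper's Gauss-Seidel strategy wraps fits like this one
# around sub-models and data subsets).
def iwls_poisson(X, y, n_iter=25):
    # warm start from a least-squares fit on the log scale
    beta = np.linalg.lstsq(X, np.log(y + 0.5), rcond=None)[0]
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu        # working response
        W = mu                          # Poisson variance function (weights)
        XtW = X.T * W
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

X = np.column_stack([np.ones(6), np.arange(6.0)])
y = np.array([1.0, 2.0, 4.0, 7.0, 12.0, 20.0])
beta = iwls_poisson(X, y)              # coefficients of the log-linear fit
```

Because the log link is canonical for the Poisson family, each IWLS step is a Fisher-scoring (here, Newton) step, which is the case where the paper guarantees convergence of its iterative procedure.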
Abstract:
There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions attached to each, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of “excess” zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows Bernoulli trials with unequal probabilities of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate it. We also present the theory behind dual-state count models, and note why they have become popular for modeling crash data.
A simulation experiment is then conducted to demonstrate how the crash process gives rise to the “excess” zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales, not from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (for observations with low exposure) represent the most defensible modeling approaches for data sets with a preponderance of zeros.
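To illustrate the point (a quick sketch of ours, not the paper's actual experiment), a simulation shows that low exposure plus unobserved heterogeneity in Poisson counts inflates zeros relative to a homogeneous Poisson with the same mean, with no dual-state mechanism involved:

```python
import numpy as np

# Heterogeneous Poisson counts under low exposure produce more zeros than a
# homogeneous Poisson with the same mean (our sketch, not the paper's setup).
rng = np.random.default_rng(0)
n, mean = 100_000, 0.5                 # many sites, small expected count each

# Gamma-mixed rates -> negative binomial counts (unobserved heterogeneity)
rates = rng.gamma(shape=0.5, scale=mean / 0.5, size=n)
counts = rng.poisson(rates)

zero_frac_hetero = np.mean(counts == 0)         # ~0.71 in theory
zero_frac_poisson = np.exp(-mean)               # ~0.61 for equal-mean Poisson
```

The gap between the two zero fractions is exactly the kind of “excess” that a ZIP analysis would attribute to a latent safe state, even though none exists in the simulated process.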
Abstract:
Modernized GPS and GLONASS, together with the new GNSS systems BeiDou and Galileo, offer code and phase ranging signals on three or more carriers. Traditionally, dual-frequency code and/or phase GPS measurements are linearly combined to eliminate the effects of ionospheric delays in various positioning and analysis tasks. This typical treatment has limitations in processing signals at three or more frequencies from more than one system, and can hardly be adapted to cope with the boom in receivers supporting a broad variety of signals. In this contribution, a generalized positioning model that is independent of the navigation system and unrelated to the number of carriers is proposed, which is suitable for both single- and multi-site data processing. For the synchronization of different signals, uncalibrated signal delays (USD) are defined in a more general way to compensate for the signal-specific offsets in code and phase signals, respectively. In addition, ionospheric delays are included in the parameterization with elaborate consideration. Based on an analysis of the algebraic structures, this generalized positioning model is further refined with a set of proper constraints to regularize the datum deficiency of the observation equation system. With this new model, uncalibrated signal delays (USD) and ionospheric delays are derived for both GPS and BeiDou with a large data set. Numerical results demonstrate that, with a limited number of stations, the uncalibrated code delays (UCD) are determined to a precision of about 0.1 ns for GPS and 0.4 ns for BeiDou signals, while the uncalibrated phase delays (UPD) for L1 and L2 are generated with 37 stations evenly distributed in China for GPS with a consistency of about 0.3 cycles. Additional experiments concerning the performance of this novel model in point positioning with mixed frequencies of mixed constellations are analyzed, in which the USD parameters are fixed to our generated values.
The results are evaluated in terms of both positioning accuracy and convergence time.
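For context (our sketch, not the paper's generalized model), the traditional dual-frequency ionosphere-free combination the abstract moves away from works as follows; the frequency values are the standard GPS L1/L2 carriers:

```python
# Dual-frequency ionosphere-free code combination (the "typical treatment"
# the abstract generalizes away from); a simple sketch, not the paper's model.
f1, f2 = 1575.42e6, 1227.60e6        # GPS L1/L2 carrier frequencies, Hz

def iono_free(p1, p2):
    """First-order ionospheric delay cancels in this linear combination."""
    g1, g2 = f1**2, f2**2
    return (g1 * p1 - g2 * p2) / (g1 - g2)

# Synthetic pseudoranges: geometric range plus a 1/f^2-scaled iono delay (m)
rho, iono_l1 = 21_000_123.456, 7.5
p1 = rho + iono_l1
p2 = rho + iono_l1 * (f1 / f2) ** 2
```

Because the combination is tied to exactly two frequencies of one system, it does not extend cleanly to triple-frequency, multi-constellation data, which is the gap the generalized model addresses by estimating ionospheric delays and USDs explicitly.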
Abstract:
As Earth's climate changes rapidly, the impact of ambient temperature on health outcomes has attracted increasing attention in recent times. A considerable number of excess deaths have been reported because of exposure to hot and cold ambient temperatures. However, relatively little research has been conducted on the relation between temperature and morbidity. The aim of this study was to characterize the relationship between both hot and cold temperatures and emergency hospital admissions in Brisbane, Australia, and to examine whether the relation varied by age and socioeconomic factors. It also aimed to explore the lag structure of the temperature–morbidity association for respiratory causes, and to estimate the magnitude of emergency hospital admissions for cardiovascular diseases attributable to hot and cold temperatures, given the large contribution of both disease groups to total emergency hospital admissions. A time series design was applied using routinely collected daily data on emergency hospital admissions, weather and air pollution in Brisbane during 1996–2005. A Poisson regression model with a distributed lag non-linear structure was adopted to assess the impact of temperature on emergency hospital admissions after adjustment for confounding factors. Both hot and cold effects were found, with a higher risk for hot temperatures than for cold. Increases in mean temperature above 24.2 °C were associated with increased morbidity, with the largest effect among the elderly (≥ 75 years old). The magnitude of the risk estimates for hot temperature varied by age and socioeconomic factors. High population density, low household income and unemployment appeared to modify the temperature–morbidity relation. There were different lag structures for hot and cold temperatures, with an acute hot effect within 3 days after exposure and an approximately 2-week lagged cold effect on respiratory diseases.
A strong harvesting effect after 3 days was evident for respiratory diseases. People suffering from cardiovascular diseases were found to be more vulnerable to hot temperatures than to cold temperatures. However, more admissions for cardiovascular diseases were attributable to cold temperatures in Brisbane than to hot temperatures. This study contributes to the knowledge base on the association between temperature and morbidity, which is vitally important in the context of ongoing climate change. The findings may provide useful information for the development and implementation of public health policy and strategic initiatives designed to reduce and prevent the burden of disease due to the impact of climate change.
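The distributed-lag structure mentioned above rests on a lagged-exposure design matrix; a minimal sketch (our own helper, not the study's code, and without the non-linear spline basis a full DLNM adds):

```python
import numpy as np

def lag_matrix(x, max_lag):
    """Rows are days (starting at day max_lag); column l holds the exposure
    lagged by l days, ready to enter a Poisson regression as covariates."""
    n = len(x)
    return np.column_stack([x[max_lag - l : n - l] for l in range(max_lag + 1)])

temps = np.arange(10.0)               # toy daily mean temperatures
L = lag_matrix(temps, max_lag=2)      # 8 rows x 3 lag columns
```

Regressing daily admission counts on such lagged columns is what lets hot effects show up within a few days while cold effects emerge over a couple of weeks, as reported above.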
Abstract:
In the finite element modelling of steel frames, external loads usually act along the members rather than at the nodes only. Conventionally, when a member is subjected to these transverse loads, they are converted to nodal forces acting at the ends of the elements into which the member is discretised, by either the lumping or the consistent nodal load approach. For a contemporary geometrically non-linear analysis in which the axial force in the member is large, accurate solutions are achieved by discretising the member into many elements, which can have unfavourable consequences for the efficiency of the method when analysing large steel frames. Herein, a numerical technique to include the transverse loading in the non-linear stiffness formulation for a single element is proposed, which is able to predict the structural response of steel frames including the effects of first-order member loads as well as the second-order coupling effect between the transverse load and the axial force in the member. This allows for a minimal discretisation of a frame for second-order analysis. For those conventional analyses which do include transverse member loading, prescribed stiffness matrices must be used for the plethora of specific loading patterns encountered. This paper shows, however, that the principle of superposition can be applied to the equilibrium condition, so that the form of the stiffness matrix remains unchanged, with only the magnitude of the loading needing to be changed in the stiffness formulation. This novelty allows a very useful generalised stiffness formulation for a single higher-order element with arbitrary transverse loading patterns. The results are verified using analytical stability function studies, as well as with numerical results reported by independent researchers on several simple structural frames.
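As context for the lumping/consistent conversion mentioned above (a standard textbook result, not the paper's higher-order formulation), the consistent nodal loads for a uniformly distributed load on a two-node Euler-Bernoulli beam element are:

```python
def consistent_udl_loads(w, L):
    """Work-equivalent nodal loads [V1, M1, V2, M2] for a uniform transverse
    load w over a 2-node Euler-Bernoulli beam element of length L, obtained
    by integrating w against the cubic Hermite shape functions."""
    return [w * L / 2.0, w * L**2 / 12.0, w * L / 2.0, -w * L**2 / 12.0]

f = consistent_udl_loads(w=10.0, L=2.0)
```

The end shears sum to the total applied load and the end moments are equal and opposite, so equilibrium is preserved; what this conversion loses, and what the paper's single-element formulation recovers, is the load's effect between the nodes.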
Abstract:
The finite element method adaptively divides a continuous domain with complex geometry into simple discrete subdomains by using approximate element functions, and continuous element loads are likewise converted into nodal loads by means of the traditional lumping and consistent load methods. This standardises a plethora of element loads into a typical numerical procedure, but the element load effect is restricted to the nodal solution. In turn, accurate continuous element solutions incorporating element load effects are available only at the element nodes, and are further limited to either the displacement or the force field, depending on which type of approximate function is adopted. On the other hand, analytical stability functions can give accurate continuous element solutions due to element loads; unfortunately, their expressions are so diverse and distinct for different element loads that they deter a numerical routine for practical applications. To this end, this paper presents a displacement-based finite element formulation (the generalised element load method) that accommodates a plethora of element load effects in a uniform fashion never achieved by the stability functions, and that can generate continuous first- and second-order elastic displacement and force solutions along an element with virtually no loss of accuracy relative to the analytical approach, which neither the lumping nor the consistent load method can achieve. Hence, the salient and unique features of the generalised element load method are its robustness, versatility and accuracy in producing continuous element solutions under a great diversity of transverse element loads.
Abstract:
Selecting an appropriate working correlation structure is pertinent to clustered data analysis using generalized estimating equations (GEE), because an inappropriate choice will lead to inefficient parameter estimation. We investigate the well-known QIC criterion for selecting a working correlation structure, and find that the performance of QIC is deteriorated by a term that is theoretically independent of the correlation structures but has to be estimated with error. This leads us to propose a correlation information criterion (CIC) that substantially improves on the performance of QIC. Extensive simulation studies indicate that the CIC yields a remarkable improvement in selecting the correct correlation structure. We also illustrate our findings using a data set from the Madras Longitudinal Schizophrenia Study.
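The relationship between QIC and the proposed CIC can be sketched as follows (illustrative matrices, not the study's data; here Omega_I denotes the inverse model-based covariance under working independence and V_r the robust sandwich covariance):

```python
import numpy as np

# QIC = -2*Q(beta; I) + 2*trace(Omega_I @ V_r); CIC keeps only the trace
# penalty, dropping the quasi-likelihood term that the abstract identifies as
# structure-independent yet noisily estimated. (Sketch with made-up inputs.)
def qic_and_cic(quasi_lik, omega_indep, v_robust):
    penalty = np.trace(omega_indep @ v_robust)
    return -2.0 * quasi_lik + 2.0 * penalty, penalty

omega = np.array([[2.0, 0.3], [0.3, 1.5]])   # illustrative 2x2 matrices
v_rob = np.array([[0.5, 0.1], [0.1, 0.4]])
qic, cic = qic_and_cic(-120.0, omega, v_rob)
```

Only the trace term actually varies with the working correlation structure, so ranking candidate structures by CIC discards the noisy quasi-likelihood component that degrades QIC's selections.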
Abstract:
This thesis studies homogeneous classes of complete metric spaces. Over the past few decades, model theory has been extended to cover a variety of nonelementary frameworks. Shelah introduced abstract elementary classes (AECs) in the 1980s as a common framework for the study of nonelementary classes. Another direction of extension has been the development of model theory for metric structures. This thesis takes a step towards combining these two by introducing an AEC-like setting for studying metric structures. To find a balance between generality and the possibility of developing stability-theoretic tools, we work in a homogeneous context, thus extending the usual compact approach. The homogeneous context enables the application of stability-theoretic tools developed in discrete homogeneous model theory. Using these, we prove categoricity transfer theorems for homogeneous metric structures with respect to isometric isomorphisms. We also show how generalized isomorphisms can be added to the class, giving a model-theoretic approach to, e.g., Banach space isomorphisms or operator approximations. The novelty is the built-in treatment of these generalized isomorphisms, making, e.g., stability up to perturbation the natural stability notion. With respect to these generalized isomorphisms, we develop a notion of independence. It behaves well already for structures which are omega-stable up to perturbation, and it coincides with the one from classical homogeneous model theory over sufficiently saturated models. We also introduce a notion of isolation and prove dominance for it.
Abstract:
Analytical expressions are found for the coupled wavenumbers in an infinite fluid-filled cylindrical shell using asymptotic methods. These expressions are valid for any general circumferential order (n). The shallow shell theory (which is more accurate at higher frequencies) is used to model the cylinder. Initially, the in vacuo shell is dealt with and asymptotic expressions are derived for the shell wavenumbers in the high- and low-frequency regimes. Next, the fluid-filled shell is considered. Defining a relevant fluid-loading parameter p, we find solutions for the limiting cases of small and large p. Wherever relevant, a frequency-scaling parameter, along with some ingenuity, is used to arrive at an elegant asymptotic expression. In all cases, Poisson's ratio (nu) is used as an expansion variable. The asymptotic results are compared with numerical solutions of the dispersion equation and with the dispersion relation obtained using the more general Donnell-Mushtari shell theory (in vacuo and fluid-filled). A good match is obtained. Hence, the contribution of this work lies in the extension of the existing literature to include arbitrary circumferential orders (n). (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
Ligand-induced conformational changes in proteins are of immense functional relevance. It is a major challenge to elucidate the network of amino acids that are responsible for the percolation of ligand-induced conformational changes to distal regions in the protein from a global perspective. Functionally important subtle conformational changes (at the level of side-chain noncovalent interactions) upon ligand binding or as a result of environmental variations are also elusive in conventional studies such as those using root-mean-square deviations (r.m.s.d.s). In this article, the network representation of protein structures and their analyses provides an efficient tool to capture these variations (both drastic and subtle) in atomistic detail in a global milieu. A generalized graph theoretical metric, using network parameters such as cliques and/or communities, is used to determine similarities or differences between structures in a rigorous manner. The ligand-induced global rewiring in the protein structures is also quantified in terms of network parameters. Thus, a judicious use of graph theory in the context of protein structures can provide meaningful insights into global structural reorganizations upon perturbation and can also be helpful for rigorous structural comparison. Data sets for the present study include high-resolution crystal structures of serine proteases from the S1A family and are probed to quantify the ligand-induced subtle structural variations.
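As a toy illustration of the clique-based network metrics mentioned above (our example graph, not a real protein contact network), triangles (3-cliques) of a residue-contact graph can be counted directly from the adjacency matrix:

```python
import numpy as np

def triangle_count(adj):
    """Number of 3-cliques (triangles) in an undirected graph: trace(A^3)/6
    for a symmetric 0/1 adjacency matrix with zero diagonal."""
    a = np.asarray(adj, dtype=float)
    return int(round(np.trace(a @ a @ a) / 6))

# Toy residue-contact network: nodes 0-1-2 form a triangle, node 3 hangs off 2
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]])
```

Comparing such clique or community counts between the apo and ligand-bound contact networks is one simple way to quantify the subtle side-chain rewiring that r.m.s.d.-based comparisons miss.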
Abstract:
The paper examines the suitability of the generalized delta rule in training artificial neural networks (ANNs) for damage identification in structures. Several multilayer perceptron architectures are investigated for a typical bridge truss structure with simulated damage states generated randomly. The training samples have been generated in terms of measurable structural parameters (displacements and strains) at suitably selected locations in the structure. Issues related to the performance of the network with reference to hidden layers and hidden neurons are examined. Some heuristics are proposed for the design of neural networks for damage identification in structures. These are further supported by an investigation conducted on five other bridge truss configurations.
Abstract:
Analytical expressions are found for the wavenumbers and resonance frequencies in flexible, orthotropic shells using asymptotic methods. These expressions are valid for arbitrary circumferential orders n. The Donnell-Mushtari shell theory is used to model the dynamics of the cylindrical shell. Initially, an in vacuo cylindrical isotropic shell is considered and expressions for all the wavenumbers (bending, near-field bending, longitudinal and torsional) are found. Subsequently, defining a suitable orthotropy parameter epsilon, the problem of wave propagation in an orthotropic shell is posed as a perturbation on the corresponding problem for an isotropic shell. Asymptotic expressions for the wavenumbers in the in vacuo orthotropic shell are then obtained by treating epsilon as an expansion parameter. In both cases (isotropy and orthotropy), a frequency-scaling parameter (eta) and Poisson's ratio (nu) are used to find elegant expansions in the different frequency regimes. The asymptotic expansions are compared with numerical solutions in each of the cases and the match is found to be good. The main contribution of this work lies in the extension of the existing literature by developing closed-form expressions for wavenumbers with arbitrary circumferential orders n for both isotropic and orthotropic shells. Finally, we present natural frequency expressions for the axisymmetric mode of finite shells (isotropic and orthotropic) and compare them with numerical and ANSYS results. Here also, the comparison is found to be good. (C) 2011 Elsevier Ltd. All rights reserved.
Binaural Signal Processing Motivated Generalized Analytic Signal Construction and AM-FM Demodulation
Abstract:
Binaural hearing studies show that the auditory system uses phase-difference information in the auditory stimuli to localize a sound source. Motivated by this finding, we present a method for demodulation of amplitude-modulated-frequency-modulated (AM-FM) signals using a signal and an arbitrarily phase-shifted version of it. The demodulation is achieved using two allpass filters whose impulse responses are related through the fractional Hilbert transform (FrHT). The allpass filters are obtained by cosine modulation of a zero-phase flat-top prototype halfband lowpass filter. The outputs of the filters are combined to construct an analytic signal (AS) from which the AM and FM are estimated. We show that, under certain assumptions on the signal and the filter structures, the AM and FM can be obtained exactly. The AM-FM calculations are based on the quasi-eigenfunction approximation. We then extend the concept to the demodulation of multicomponent signals using uniform and non-uniform cosine-modulated filterbank (FB) structures consisting of flat bandpass filters, including the uniform cosine-modulated, equivalent-rectangular-bandwidth (ERB) and constant-Q filterbanks. We validate the theoretical calculations on synthesized AM-FM signals, and compare the performance in the presence of noise with three other multiband demodulation techniques, namely the Teager-energy-based approach, Gabor's AS approach and the linear transduction filter approach. We also show demodulation results for real signals.
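For orientation (a sketch of the baseline Gabor analytic-signal route that the abstract compares against, not the authors' FrHT allpass filterbank), the AM envelope of a narrowband signal can be recovered from a one-sided spectrum:

```python
import numpy as np

def analytic_signal(x):
    """Gabor analytic signal via a one-sided spectrum (one of the baseline
    methods mentioned in the abstract; assumes even-length input)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0      # double positive frequencies
    h[n // 2] = 1.0        # keep Nyquist bin as-is
    return np.fft.ifft(X * h)

# AM tone: a slow positive envelope on a carrier well inside the band
n = 1024
t = np.arange(n) / n
am = 1.0 + 0.5 * np.cos(2 * np.pi * 4 * t)
x = am * np.cos(2 * np.pi * 100 * t)
env = np.abs(analytic_signal(x))       # recovers the AM envelope
```

Because the envelope here is bandlimited well below the carrier, the magnitude of the analytic signal equals the AM exactly; the paper's contribution is obtaining such exactness from two allpass-filtered observations instead of an ideal one-sided spectrum.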
Abstract:
Using generalized gradient approximation (GGA) and meta-GGA density functional methods, the structures, binding energies and harmonic vibrational frequencies of the clusters O-4(+), O-6(+), O-8(+) and O-10(+) have been calculated. The stable structures of O-4(+), O-6(+), O-8(+) and O-10(+) have point groups D-2h, D-3h, D-4h and D-5h, optimized on the quartet, sextet, octet and dectet potential energy surfaces, respectively. Rectangular (D-2h) O-4(+) is found to be more stable than trans-planar (C-2h) on the quartet potential energy surface. The cyclic structure (D-3h) of the O-6(+) cluster ion is calculated to be more stable than other structures. The binding energy (B.E.) of cyclic O-6(+) is in good agreement with the experimental measurement. The zero-point-corrected B.E. of O-8(+) with D-4h symmetry on the octet potential energy surface and the zero-point-corrected B.E. of O-10(+) with D-5h symmetry on the dectet potential energy surface are also in good agreement with experimental values. The B.E. value for O-4(+) is close to the experimental value when the single-point energy is calculated with the Brueckner coupled-cluster method, BD(T). (C) 2014 Elsevier B.V. All rights reserved.