878 results for regression discrete models
Abstract:
The friction of rocks in the laboratory is a function of time, velocity of sliding, and displacement. Although the processes responsible for these dependencies are unknown, constitutive equations have been developed that do a reasonable job of describing the laboratory behavior. These constitutive laws have been used to create a model of earthquakes at Parkfield, CA, by using boundary conditions appropriate for the section of the fault that slips in magnitude 6 earthquakes every 20-30 years. The behavior of this model prior to the earthquakes is investigated to determine whether or not the model earthquakes could be predicted in the real world by using realistic instruments and instrument locations. Premonitory slip does occur in the model, but it is relatively restricted in time and space, and detecting it from the surface may be difficult. The magnitude of the strain rate at the Earth's surface due to this accelerating slip appears to be below the detectability limit of instruments in the presence of earth noise. Although not specifically modeled, microseismicity related to the accelerating creep and to creep events in the model should be detectable. In fact, the logarithm of the moment rate on the hypocentral cell of the fault due to slip increases linearly with minus the logarithm of the time to the earthquake. This could conceivably be used to determine when the earthquake was going to occur. An unresolved question is whether this pattern of accelerating slip could be recognized from the microseismicity, given the discrete nature of seismic events. Nevertheless, the model results suggest that the most likely route to earthquake prediction is to look for a pattern of acceleration in microseismicity and thereby identify the microearthquakes as foreshocks.
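As a hedged illustration of the log-log acceleration pattern described above (all numbers below are synthetic; the paper reports a model result, not a fit of real data), the sketch fits log10(moment rate) against -log10(t_f - t) and grid-searches the unknown failure time t_f for the best linear fit:

```python
import numpy as np

# Synthetic illustration: if log10 of the moment rate grows linearly with
# -log10(time to failure),
#     log10(Mdot) = a - b * log10(t_f - t),
# a linear fit in these coordinates recovers (a, b), and scanning
# candidate failure times t_f for the best fit is one conceivable way to
# estimate when the earthquake will occur.

rng = np.random.default_rng(0)
t_f_true = 100.0                            # assumed failure time (days)
t = np.linspace(0.0, 95.0, 60)              # observation times
log_mdot = 2.0 - 1.0 * np.log10(t_f_true - t) + rng.normal(0, 0.05, t.size)

def fit_quality(t_f):
    """R^2 of the linear fit for a candidate failure time."""
    x = -np.log10(t_f - t)
    coeffs = np.polyfit(x, log_mdot, 1)
    resid = log_mdot - np.polyval(coeffs, x)
    return 1.0 - resid.var() / log_mdot.var()

candidates = np.linspace(96.0, 110.0, 500)
best = max(candidates, key=fit_quality)
print(f"estimated failure time ~ {best:.1f} days (true: {t_f_true})")
```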
Abstract:
We summarize studies of earthquake fault models that give rise to slip complexities like those in natural earthquakes. For models of smooth faults between elastically deformable continua, it is critical that the friction laws involve a characteristic distance for slip weakening or evolution of surface state. That results in a finite nucleation size, or coherent slip patch size, h*. Models of smooth faults, using a numerical cell size properly small compared to h*, show periodic response or complex and apparently chaotic histories of large events but have not been found to show small-event complexity like the self-similar (power-law) Gutenberg-Richter frequency-size statistics. This conclusion is supported in the present paper by fully inertial elastodynamic modeling of earthquake sequences. In contrast, some models of locally heterogeneous faults with quasi-independent fault segments, represented approximately by simulations with cell size larger than h* so that the model becomes "inherently discrete," do show small-event complexity of the Gutenberg-Richter type. Models based on classical friction laws without a weakening length scale, or for which the numerical procedure imposes an abrupt strength drop at the onset of slip, have h* = 0 and hence always fall into the inherently discrete class. We suggest that the small-event complexity that some such models show will not survive regularization of the constitutive description, by inclusion of an appropriate length scale leading to a finite h*, and a corresponding reduction of numerical grid size.
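The abstract gives no formula for h*, but a rough sketch of the classification it describes might look as follows, assuming a generic slip-weakening scaling h* ~ C·mu'·d_c/dtau (the constant and the exact form depend on the specific friction law, so every number here is an illustrative assumption):

```python
# Hypothetical scaling (not from the paper): h* ~ C * mu_eff * d_c / d_tau,
# with mu_eff an effective shear modulus, d_c the slip-weakening distance,
# d_tau the strength drop, and C an O(1) constant.

def nucleation_size(mu_eff, d_c, d_tau, C=1.0):
    """Order-of-magnitude nucleation (coherent slip patch) size h*."""
    return C * mu_eff * d_c / d_tau

def is_inherently_discrete(dx, h_star):
    """Cells larger than h* cannot resolve smooth nucleation."""
    return dx > h_star

# Illustrative numbers: 30 GPa modulus, 1 mm weakening distance, 3 MPa drop.
h_star = nucleation_size(mu_eff=30e9, d_c=1e-3, d_tau=3e6)    # -> 10 m
print(f"h* ~ {h_star:.0f} m")
print("100 m cells inherently discrete?", is_inherently_discrete(100.0, h_star))
print("1 m cells inherently discrete?  ", is_inherently_discrete(1.0, h_star))

# A classical law with no weakening length (d_c = 0) gives h* = 0, so any
# grid falls into the inherently discrete class, as the abstract notes.
print("d_c = 0 => h* =", nucleation_size(30e9, 0.0, 3e6))
```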
Abstract:
We consider the electron dynamics and transport properties of one-dimensional continuous models with random, short-range correlated impurities. We develop a generalized Poincaré map formalism to cast the Schrödinger equation for any potential into a discrete set of equations, illustrating its application by means of a specific example. We then concentrate on the case of a Kronig-Penney model with dimer impurities. The previous technique allows us to show that this model presents infinitely many resonances (zeroes of the reflection coefficient at a single dimer) that give rise to a band of extended states, in contradiction with the general viewpoint that all one-dimensional models with random potentials support only localized states. We report on exact transfer-matrix numerical calculations of the transmission coefficient, density of states, and localization length for various strengths of disorder. The most important conclusion so obtained is that this kind of system has a very large number of extended states. Multifractal analysis of very long systems clearly demonstrates the extended character of such states in the thermodynamic limit. In closing, we briefly discuss the relevance of these results in several physical contexts.
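A minimal sketch of the Poincaré-map / transfer-matrix idea for a delta-function Kronig-Penney chain (all parameter values are assumptions, not the paper's): with E = k^2 and unit spacing, the Schrödinger equation maps onto a three-term recursion, and the Lyapunov exponent of the transfer-matrix product estimates the inverse localization length, which should dip sharply near the dimer resonance energies.

```python
import numpy as np

# Delta-function Kronig-Penney chain, unit spacing, E = k**2. The
# Poincare map gives the recursion
#     psi_{n+1} + psi_{n-1} = [2*cos(k) + (lam_n/k)*sin(k)] * psi_n,
# i.e. a product of 2x2 transfer matrices T_n = [[d_n, -1], [1, 0]].

def lyapunov(k, lam_seq):
    """Lyapunov exponent ~ inverse localization length."""
    v = np.array([1.0, 0.0])
    total = 0.0
    for lam in lam_seq:
        d = 2.0 * np.cos(k) + (lam / k) * np.sin(k)
        v = np.array([d * v[0] - v[1], v[0]])     # apply T_n
        norm = np.hypot(v[0], v[1])
        total += np.log(norm)
        v /= norm                                 # renormalize to avoid overflow
    return total / len(lam_seq)

rng = np.random.default_rng(1)
N, lam_host, lam_dimer = 20000, 0.0, 1.5          # illustrative strengths
lam = np.full(N, lam_host)
i = 0
while i < N - 1:                                  # place impurities in pairs (dimers)
    if rng.random() < 0.25:
        lam[i:i + 2] = lam_dimer
        i += 2
    else:
        i += 1

for k in np.linspace(0.5, 3.0, 6):
    print(f"k = {k:.2f}  gamma = {lyapunov(k, lam):.4f}")
```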
Abstract:
The purposes of this study were (1) to validate the item-attribute matrix using two levels of attributes (Level 1 attributes and Level 2 sub-attributes), and (2) by retrofitting diagnostic models to the mathematics test of the Trends in International Mathematics and Science Study (TIMSS), to evaluate the construct validity of the TIMSS mathematics assessment by comparing the results of two assessment booklets. Item data were extracted from Booklets 2 and 3 for the 8th grade in TIMSS 2007, which included a total of 49 mathematics items and every student's response to every item. The study developed three categories of attributes at two levels: content, cognitive process (TIMSS or new), and comprehensive cognitive process (or IT), based on the TIMSS assessment framework, cognitive procedures, and item type. At level one, there were 4 content attributes (number, algebra, geometry, and data and chance), 3 TIMSS process attributes (knowing, applying, and reasoning), and 4 new process attributes (identifying, computing, judging, and reasoning). At level two, the level 1 attributes were further divided into 32 sub-attributes. There was only one level of IT attributes (multiple steps/responses, complexity, and constructed response). Twelve Q-matrices (4 originally specified, 4 random, and 4 revised) were investigated with eleven Q-matrix models (QM1 ~ QM11) using multiple regression and the least squares distance method (LSDM). Comprehensive analyses indicated that the proposed Q-matrices explained most of the variance in item difficulty (i.e., 64% to 81%). The cognitive process attributes contributed more to item difficulty than the content attributes, and the IT attributes contributed much more than both the content and process attributes. The new retrofitted process attributes explained the items better than the TIMSS process attributes. Results generated from the level 1 attributes and the level 2 attributes were consistent. Most attributes could be used to recover students' performance, but some attributes' probabilities showed unreasonable patterns. The analysis approaches could not demonstrate whether the same construct validity was supported across booklets. The proposed attributes and Q-matrices explained the items of Booklet 2 better than the items of Booklet 3. The specified Q-matrices explained the items better than the random Q-matrices.
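On synthetic data (not the TIMSS items), the multiple-regression step might look like the sketch below: item difficulty is regressed on the attribute columns of a Q-matrix, and the R^2 plays the role of the reported "variance in item difficulty explained":

```python
import numpy as np

# Synthetic illustration of regressing item difficulty on a Q-matrix.
# Each row of Q flags which attributes an item requires; 49 items as in
# the abstract, 7 attributes (e.g. 4 content + 3 process) as an example.

rng = np.random.default_rng(2)
n_items, n_attrs = 49, 7
Q = rng.integers(0, 2, size=(n_items, n_attrs)).astype(float)
true_w = rng.normal(0.5, 0.2, n_attrs)        # assumed attribute weights
difficulty = Q @ true_w + rng.normal(0, 0.1, n_items)

X = np.column_stack([np.ones(n_items), Q])    # intercept + attribute columns
beta, *_ = np.linalg.lstsq(X, difficulty, rcond=None)
resid = difficulty - X @ beta
r2 = 1.0 - resid.var() / difficulty.var()
print(f"R^2 = {r2:.2f}")                      # cf. the 64%-81% range reported
```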
Abstract:
The authors discuss the effects that economic crises have on the global market shares of tourism destinations through a series of potential transmission mechanisms, based on the main economic competitiveness determinants identified in the previous literature, using a non-linear approach. Specifically, a Markov switching regression approach is used to estimate the effect of two basic transmission mechanisms: reductions in internal and external tourism demand, and falling investment.
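A minimal sketch of a two-regime Markov switching regression using statsmodels on synthetic data (the abstract names no dataset; the regimes, coefficients, and series below are invented for illustration):

```python
import numpy as np
import statsmodels.api as sm

# Two-regime Markov switching regression: the effect of a demand
# indicator on a destination's market share is allowed to differ between
# "normal" and "crisis" regimes.

rng = np.random.default_rng(3)
T = 300
demand = rng.normal(size=T)
regime = (np.arange(T) // 75) % 2              # alternating regimes, for illustration
beta = np.where(regime == 0, 0.8, -0.3)        # assumed regime-specific effects
share = 1.0 + beta * demand + rng.normal(0, 0.2, T)

mod = sm.tsa.MarkovRegression(share, k_regimes=2, exog=demand[:, None],
                              switching_variance=True)
res = mod.fit()
print(res.summary())
```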
Abstract:
This thesis focuses on tectonic geomorphology and the response of the Ken River catchment to postulated tectonic forcing along a NE-striking monocline fold in the Panna region, Madhya Pradesh, India. Peninsular India is underlain by three northeast-trending paleotopographic ridges of Precambrian Indian basement, bounded by crustal-scale faults. Of particular interest is the Pokhara lineament, a crustal-scale fault that defines the eastern edge of the Faizabad ridge, a paleotopographic high cored by the Archean Bundelkhand craton. The Pokhara lineament coincides with the monocline structure developed in the Proterozoic Vindhyan Supergroup rocks along the Bundelkhand cratonic margin. A peculiar, deeply incised meander-like feature, preserved along the Ken River where it flows through the monocline, may be intimately related to the tectonic regime of this system. This thesis examines 41 longitudinal stream profiles across the length of the monocline structure to identify any tectonic signals generated by recent surface uplift above the Pokhara lineament. It also investigates the evolution of the Ken River catchment in response to the growth of the monocline fold. Digital Elevation Models (DEMs) from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) were used to delineate a series of tributary watersheds and to extract individual stream profiles, which were imported into MATLAB for analysis. Regression limits were chosen to define distinct channel segments, and knickpoints were defined at breaks between channel segments where there was a discrete change in the steepness of the channel profile. The longitudinal channel profiles exhibit the characteristics of a fluvial system in a transient state. There is a significant downstream increase in normalized steepness index in the channel profiles, as well as a general downstream increase in concavity, with some channels exhibiting convex, over-steepened segments. Normalized steepness indices and uppermost knickpoint elevations are on average much higher in streams along the southwest segment of the monocline than in streams along the northeast segment. Most channel profiles have two to three knickpoints, predominantly exhibiting slope-break morphology. These data have important implications for recent surface uplift above the Pokhara lineament. Furthermore, geomorphic features preserved along the Ken River suggest that it is an antecedent river. The incised meander-like feature appears to be the abandoned valley of a former Ken River course that was captured, during the evolution of the landscape, by the present-day Ken River.
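The steepness and concavity quantities mentioned above are conventionally estimated by log-log regression of the slope-area relation S = k_s * A^(-theta); a sketch on a synthetic profile (not the Ken River data, and the reference concavity of 0.45 is a common convention, not the thesis's stated choice):

```python
import numpy as np

# Slope-area regression: log(S) = log(k_s) - theta * log(A), so a linear
# fit in log-log space gives concavity theta and steepness k_s. Fixing a
# reference concavity theta_ref yields the normalized steepness k_sn.

rng = np.random.default_rng(4)
A = np.logspace(5, 9, 200)                    # drainage area, m^2 (synthetic)
theta_true, ks_true = 0.45, 80.0              # assumed channel parameters
S = ks_true * A**(-theta_true) * np.exp(rng.normal(0, 0.1, A.size))

slope, intercept = np.polyfit(np.log(A), np.log(S), 1)
print(f"concavity theta = {-slope:.2f}, k_s = {np.exp(intercept):.1f}")

theta_ref = 0.45                              # common reference concavity
ksn = (S * A**theta_ref).mean()
print(f"normalized steepness k_sn ~ {ksn:.1f}")
```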
Abstract:
In typical theoretical or experimental studies of heat migration in discrete fractures, conduction and thermal dispersion are commonly neglected in the fracture heat transport equation, on the assumption that heat conduction into the matrix is predominant. In this study, analytical and numerical models are used to investigate the significance of conduction and thermal dispersion in the plane of the fracture for point and line source geometries. The analytical models account for advective, conductive and dispersive heat transport in both the longitudinal and transverse directions in the fracture. The heat transport in the fracture is coupled with a matrix equation in which heat is conducted in the direction perpendicular to the fracture. In the numerical model, the governing heat transport processes are the same as in the analytical models; however, matrix conduction is considered in both the longitudinal and transverse directions. Firstly, we demonstrate that longitudinal conduction and dispersion are critical processes that affect heat transport in fractured rock environments, especially for small apertures (e.g., 100 μm or less), high flow rate conditions (e.g., velocity greater than 50 m/day) and early time (e.g., less than 10 days). Secondly, transverse thermal dispersion in the fracture plane is also observed to be an important transport process, leading to retardation of the migrating heat front, particularly at late time (e.g., after 40 days of hot water injection). Solutions which neglect dispersion in the transverse direction underestimate the locations of heat fronts at late time. Finally, this study also suggests that the geometry of the heat source has significant effects on heat transport in the system. For example, the effects of dispersion in the fracture are observed to decrease as the width of the heat source expands.
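A hedged back-of-the-envelope sketch of why in-plane dispersion can matter at high flow rates (all property values are generic assumptions, not taken from the paper): thermal dispersion scales with velocity, so at the flow rates quoted above it can dominate pure conduction by orders of magnitude:

```python
# Compare the conductive thermal diffusivity of water against a
# velocity-dependent dispersive diffusivity, alpha_disp = dispersivity * v.

rho_w, c_w, k_w = 1000.0, 4186.0, 0.6         # water: density, heat capacity, conductivity (SI)
alpha_cond = k_w / (rho_w * c_w)              # conductive diffusivity, ~1.4e-7 m^2/s
dispersivity = 0.1                            # assumed longitudinal dispersivity, m

for v_mpd in (0.5, 5.0, 50.0):                # fracture velocity in m/day
    v = v_mpd / 86400.0                       # convert to m/s
    alpha_disp = dispersivity * v
    print(f"v = {v_mpd:5.1f} m/day  "
          f"dispersion/conduction ratio = {alpha_disp / alpha_cond:8.1f}")
```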
Abstract:
The organizational structure of companies in the biomass energy sector, as regards supply chain management services, can be greatly improved through the use of software decision support tools. These tools should be able to provide real-time alternative scenarios when deviations from the initial production plans are observed. To make this possible, it is necessary to have representative production chain process models in which several scenarios and solutions can be evaluated accurately. Due to its nature, this type of process is most adequately represented by means of event-based models. In particular, this work presents the modelling of a typical biomass production chain using the SimEvents computing platform. Details of the conceptual model, as well as simulation results, are provided throughout the article.
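SimEvents is a graphical MATLAB toolbox, so the paper's actual model cannot be reproduced here; the sketch below is only a comparable minimal discrete-event simulation in Python/simpy, with stages, capacities, and timings invented for illustration (trucks delivering biomass batches to a plant with one unloading bay):

```python
import simpy

# Minimal discrete-event sketch: entities (trucks) queue for a shared
# resource (the unloading bay); delays are modelled as timeout events.

def truck(env, name, bay, unload_time=2.0):
    print(f"{env.now:5.1f}  {name} arrives")
    with bay.request() as req:                # queue for the unloading bay
        yield req
        yield env.timeout(unload_time)        # unloading is the event delay
    print(f"{env.now:5.1f}  {name} unloaded")

def arrivals(env, bay):
    for i in range(5):
        env.process(truck(env, f"truck-{i}", bay))
        yield env.timeout(1.0)                # one arrival per time unit

env = simpy.Environment()
bay = simpy.Resource(env, capacity=1)         # single unloading bay
env.process(arrivals(env, bay))
env.run()
```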
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Finite mixture regression model with random effects: application to neonatal hospital length of stay
Abstract:
A two-component mixture regression model that allows simultaneously for heterogeneity and dependency among observations is proposed. By specifying random effects explicitly in the linear predictor of the mixture probability and the mixture components, parameter estimation is achieved by maximising the corresponding best linear unbiased prediction type log-likelihood. Approximate residual maximum likelihood estimates are obtained via an EM algorithm in the manner of a generalised linear mixed model (GLMM). The method can be extended to a g-component mixture regression model with component densities from the exponential family, leading to the development of the class of finite mixture GLMMs. For illustration, the method is applied to analyse neonatal length of stay (LOS). It is shown that identification of pertinent factors that influence hospital LOS can provide important information for health care planning and resource allocation.
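A simplified sketch of the EM idea on synthetic data: a two-component mixture of fixed-effects linear regressions (the paper's model additionally places random effects in the mixing probability and the components, which this sketch omits):

```python
import numpy as np

# EM for a two-component mixture of linear regressions with a common
# residual variance, on synthetic data.

rng = np.random.default_rng(5)
n = 500
x = rng.normal(size=n)
z = rng.random(n) < 0.3                       # latent component labels
y = np.where(z, 5.0 + 2.0 * x, 1.0 + 0.5 * x) + rng.normal(0, 0.5, n)
X = np.column_stack([np.ones(n), x])

pi, betas, sigma = 0.5, [np.zeros(2), np.ones(2)], 1.0
for _ in range(200):
    # E-step: responsibility that each point belongs to component 1
    # (the Gaussian normalizing constant cancels, since sigma is shared)
    d0 = np.exp(-0.5 * ((y - X @ betas[0]) / sigma) ** 2)
    d1 = np.exp(-0.5 * ((y - X @ betas[1]) / sigma) ** 2)
    r = pi * d1 / ((1 - pi) * d0 + pi * d1)
    # M-step: mixing weight, weighted least squares, pooled variance
    pi = r.mean()
    sse = 0.0
    for k, w in enumerate([1 - r, r]):
        betas[k] = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * y))
        sse += (w * (y - X @ betas[k]) ** 2).sum()
    sigma = np.sqrt(sse / n)

print("pi =", round(pi, 2))
print("beta0 =", betas[0].round(2), " beta1 =", betas[1].round(2))
```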
Abstract:
The modelling of inpatient length of stay (LOS) has important implications in health care studies. Finite mixture distributions are usually used to model the heterogeneous LOS distribution, because a certain proportion of patients sustain a longer stay. However, because morbidity data are collected from hospitals, observations clustered within the same hospital are often correlated. The generalized linear mixed model approach is adopted to accommodate this inherent correlation via unobservable random effects. An EM algorithm is developed to obtain residual maximum quasi-likelihood estimates. The proposed hierarchical mixture regression approach enables the identification and assessment of factors influencing the long-stay proportion and the LOS for the long-stay patient subgroup. A neonatal LOS data set is used for illustration.
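To see why the hospital-level random effect is needed, a small synthetic illustration (all parameters invented, not the neonatal data): LOS drawn from a two-component mixture with a shared hospital effect exhibits between-hospital variation that an independence model would ignore:

```python
import numpy as np

# Synthetic LOS: a long-stay subgroup plus a hospital-level random effect
# on the log scale, inducing within-hospital correlation.

rng = np.random.default_rng(8)
n_hosp, per_hosp = 30, 80
u = rng.normal(0, 0.3, n_hosp)                 # hospital random effects
rows = []
for h in range(n_hosp):
    long_stay = rng.random(per_hosp) < 0.2     # long-stay subgroup (20%)
    mu = np.where(long_stay, 2.5, 1.0) + u[h]  # log-scale mean LOS
    rows.append(np.exp(mu + rng.normal(0, 0.4, per_hosp)))
los = np.array(rows)                           # hospitals x patients

print(f"overall mean LOS: {los.mean():.2f} days")
print(f"between-hospital sd of mean LOS: {los.mean(axis=1).std():.2f} days")
```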
Abstract:
Preventive maintenance actions over the warranty period have an impact on the warranty servicing cost to the manufacturer and on the cost to the buyer of fixing failures over the life of the product after the warranty expires. However, preventive maintenance costs money and is worthwhile only when the resulting reduction in other costs exceeds its own cost. The paper deals with a model to determine when preventive maintenance actions (which rejuvenate the unit) carried out at discrete time instants over the warranty period are worthwhile. The cost of preventive maintenance is borne by the buyer.
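A numeric sketch of the trade-off (all parameters are invented, and the rejuvenation rule is an assumed virtual-age reduction rather than the paper's specific model): expected minimal-repair cost over a horizon, with and without preventive maintenance (PM) at discrete instants:

```python
# Failures follow a Weibull intensity under minimal repair; each PM
# removes a fraction of the unit's virtual age and costs c_pm.

beta_shape, eta = 2.5, 5.0                    # assumed Weibull shape / scale (years)
c_repair, c_pm = 400.0, 120.0                 # assumed costs per action
life, pm_times = 10.0, [2.0, 4.0, 6.0, 8.0]   # horizon and PM schedule
delta = 0.6                                   # fraction of virtual age removed by PM

def expected_failures(t0, t1, age0):
    """Expected minimal-repair failures between t0 and t1, virtual age age0 at t0."""
    H = lambda a: (a / eta) ** beta_shape     # Weibull cumulative intensity
    return H(age0 + (t1 - t0)) - H(age0)

def total_cost(pm_schedule):
    cost, t, age = 0.0, 0.0, 0.0
    for tp in list(pm_schedule) + [life]:
        cost += c_repair * expected_failures(t, tp, age)
        age += tp - t
        t = tp
        if tp < life:
            age *= (1 - delta)                # PM rejuvenates the unit
            cost += c_pm
    return cost

print(f"no PM  : {total_cost([]):7.1f}")
print(f"with PM: {total_cost(pm_times):7.1f}")   # PM worthwhile if this is smaller
```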
Abstract:
We investigate whether the relative contributions of genetic and shared environmental factors are associated with an increased risk of melanoma. Data from the Queensland Familial Melanoma Project, comprising 15,907 subjects from 1912 families, were analyzed to estimate the additive genetic, common environmental and unique environmental contributions to variation in the age at onset of melanoma. Two complementary approaches for analyzing correlated time-to-onset family data were considered: the generalized estimating equations (GEE) method, in which one can estimate relationship-specific dependence simultaneously with regression coefficients that describe the average population response to changing covariates; and a subject-specific Bayesian mixed model, in which heterogeneity in regression parameters is explicitly modeled and the different components of variation may be estimated directly. The proportional hazards and Weibull models were utilized, as both provide natural frameworks for estimating relative risks while adjusting for the simultaneous effects of other covariates. A simple Markov chain Monte Carlo method was used for covariate imputation of missing data, and the actual implementation of the Bayesian model was based on Gibbs sampling using the freeware package BUGS. In addition, we also used a Bayesian model to investigate the relative contribution of genetic and environmental effects to the expression of naevi and freckles, which are known risk factors for melanoma.
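A minimal sketch of the GEE side of the comparison, using statsmodels on synthetic family-clustered data (a binary outcome is used for simplicity; the paper works with time-to-onset data under proportional hazards/Weibull models, and all parameters below are invented):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# GEE with exchangeable within-family correlation: population-averaged
# regression coefficients with dependence handled in the working
# covariance, as in the "average population response" description above.

rng = np.random.default_rng(6)
n_fam, fam_size = 300, 4
fam = np.repeat(np.arange(n_fam), fam_size)
fam_effect = np.repeat(rng.normal(0, 0.5, n_fam), fam_size)  # shared environment
x = rng.normal(size=n_fam * fam_size)                        # e.g. a naevus score
logit = -2.0 + 0.8 * x + fam_effect
y = (rng.random(n_fam * fam_size) < 1 / (1 + np.exp(-logit))).astype(int)

df = pd.DataFrame({"y": y, "x": x, "family": fam})
model = sm.GEE.from_formula("y ~ x", groups="family", data=df,
                            family=sm.families.Binomial(),
                            cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```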
Abstract:
Background and Objective: To examine whether commonly recommended assumptions for multivariable logistic regression are addressed in two major epidemiological journals. Methods: Ninety-nine articles from the Journal of Clinical Epidemiology and the American Journal of Epidemiology were surveyed for 10 criteria: six dealing with computation and four with reporting of multivariable logistic regression results. Results: Three of the 10 criteria were addressed in 50% or more of the articles. Statistical significance testing or confidence intervals were reported in all articles. Methods for selecting independent variables were described in 82%, and specific procedures used to generate the models were discussed in 65%. Fewer than 50% of the articles indicated whether interactions were tested or whether the recommended events-per-independent-variable ratio of 10:1 was met. Fewer than 20% of the articles described conformity to a linear gradient, examined collinearity, reported information on validation procedures, goodness-of-fit or discrimination statistics, or provided complete information on variable coding. There was no significant difference (P > .05) in the proportion of articles meeting the criteria across the two journals. Conclusion: The articles reviewed frequently did not report on commonly recommended assumptions for using multivariable logistic regression.
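Two of the surveyed criteria, the events-per-variable ratio and collinearity, are straightforward to check programmatically; a sketch on synthetic data (variable names, thresholds, and data are illustrative assumptions, not drawn from the surveyed articles):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Events-per-variable (EPV) and variance inflation factors (VIF) for a
# logistic regression on synthetic data.

rng = np.random.default_rng(7)
n, p = 400, 5
X = rng.normal(size=(n, p))
X[:, 4] = X[:, 3] + rng.normal(0, 0.1, n)     # deliberately collinear pair
y = (rng.random(n) < 1 / (1 + np.exp(-(X[:, 0] - 1.5)))).astype(int)

epv = y.sum() / p                              # recommended EPV >= 10
print(f"events = {y.sum()}, EPV = {epv:.1f}")

Xc = sm.add_constant(X)
for j in range(1, Xc.shape[1]):                # skip the intercept column
    print(f"VIF x{j} = {variance_inflation_factor(Xc, j):.1f}")

print(sm.Logit(y, Xc).fit(disp=0).summary())
```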
Abstract:
The aim of the study was to examine the relationships between Eysenck's primary personality factors and various aspects of religious orientation and practice. Some 400 UK undergraduates completed questionnaires constructed from the Batson and Schoenrade Religious Life Inventory (Batson & Schoenrade, 1991) and the Eysenck Personality Profiler (Eysenck, Barrett, Wilson, & Jackson, 1992). As is generally found, all the religious variables correlated negatively with the higher-order personality factor of psychoticism. In contrast, among the primary factors, those associated with neuroticism appeared to be the strongest indicators of religiosity. In particular, all the primary traits classically linked to neuroticism correlate positively with the quest orientation. However, fewer primary traits predict religious behaviour in regression analyses; of these, a sense of guilt is the strongest and a common predictor of extrinsic, intrinsic and quest religiosities. Upon factor analysis of the significant personality predictors together with the three religious orientations, the orientations formed a single discrete factor, which implies that extrinsic, intrinsic and quest religiosities have more in common with one another than with any of the personality traits included in the study. This suggests that religious awareness may itself be an important individual difference that is distinct from those generally associated with models of personality.