906 results for Incidence function model
Abstract:
The problem of using information available from one variable X to make inference about another Y is classical in many physical and social sciences. In statistics this is often done via regression analysis, where the mean response is used to model the data. One stipulates the model Y = µ(X) + ε. Here µ(x) is the mean response at the predictor value X = x, and ε = Y - µ(X) is the error. In classical regression analysis both (X, Y) are observable, and one then proceeds to make inference about the mean response function µ(X). In practice there are numerous examples where X is not available, but a variable Z is observed which provides an estimate of X. As an example, consider the herbicide study of Rudemo et al. [3], in which a nominal measured amount Z of herbicide was applied to a plant, but the actual amount absorbed by the plant, X, is unobservable. As another example, from Wang [5], an epidemiologist studies the severity of a lung disease, Y, among the residents of a city in relation to the amount of certain air pollutants. The amount of the air pollutants, Z, can be measured at certain observation stations in the city, but the actual exposure of the residents to the pollutants, X, is unobservable and may vary randomly from the Z-values. In both cases X = Z + error. This is the so-called Berkson measurement error model. In the more classical measurement error model one observes an unbiased estimator W of X and stipulates the relation W = X + error. An example of this model occurs when assessing the effect of nutrition X on a disease: measuring nutrient intake precisely over 24 hours is almost impossible. There are many similar examples in agricultural and medical studies; see, e.g., Carroll, Ruppert and Stefanski [1] and Fuller [2], among others. In this talk we shall address the question of fitting a parametric model to the regression function µ(X) in the Berkson measurement error model: Y = µ(X) + ε, X = Z + η, where η and ε are random errors with E(ε) = 0, X and η are d-dimensional, and Z is the observable d-dimensional random vector.
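As a hedged illustration of the model just stated (not the estimator developed in the talk), the following Python sketch simulates data from the Berkson setup Y = µ(X) + ε, X = Z + η with a quadratic µ chosen purely for illustration, and fits the parametric mean function to the observable pair (Z, Y) by nonlinear least squares; all noise levels and parameter values are invented.

```python
# A minimal sketch of the Berkson measurement error setup; mu, the noise
# levels and the naive fitting strategy are illustrative assumptions only.
import numpy as np
from scipy.optimize import curve_fit

def mu(x, a, b, c):
    """Parametric mean response; a quadratic is used purely as an example."""
    return a + b * x + c * x ** 2

rng = np.random.default_rng(0)
n = 500
Z = rng.uniform(0.0, 4.0, n)        # observed (e.g. nominal herbicide dose)
X = Z + rng.normal(0.0, 0.3, n)     # true but unobserved predictor (Berkson error)
Y = mu(X, 1.0, 0.5, -0.1) + rng.normal(0.0, 0.2, n)

# Naive fit of mu to (Z, Y); for a nonlinear mu, E[Y | Z] need not equal mu(Z)
# under the Berkson model, which is what makes the fitting problem non-trivial.
theta_hat, _ = curve_fit(mu, Z, Y, p0=[0.0, 0.0, 0.0])
print("fitted (a, b, c):", np.round(theta_hat, 3))
```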
Abstract:
This research quantitatively evaluates the water retention capacity and flood control function of forest catchments using hydrological data from large flood events that occurred after serious droughts. The study sites are the Oodo Dam and Sameura Dam catchments in Japan. A kinematic wave model, which considers saturated and unsaturated sub-surface soil zones, is used for the rainfall-runoff analysis. The results show that the possible storage volume of the Oodo Dam catchment was 162.26 MCM in 2005, while that of Sameura was 102.83 MCM in 2005 and 102.64 MCM in 2007. The flood control function of the Oodo Dam catchment was 173 mm in water depth in 2005, while that of the Sameura Dam catchment was 114 mm in 2005 and 126 mm in 2007. This indicates that the flood control function of the Oodo Dam catchment is more than twice the dam's storage capacity (78.4 mm), while that of the Sameura Dam catchment is about one-fifth of the dam's storage capacity (693 mm).
Abstract:
The Support Vector (SV) machine is a novel type of learning machine, based on statistical learning theory, which contains polynomial classifiers, neural networks, and radial basis function (RBF) networks as special cases. In the RBF case, the SV algorithm automatically determines centers, weights and threshold so as to minimize an upper bound on the expected test error. The present study is devoted to an experimental comparison of these machines with a classical approach, where the centers are determined by k-means clustering and the weights are found using error backpropagation. We consider three machines, namely a classical RBF machine, an SV machine with Gaussian kernel, and a hybrid system with the centers determined by the SV method and the weights trained by error backpropagation. Our results show that on the US Postal Service database of handwritten digits, the SV machine achieves the highest test accuracy, followed by the hybrid approach. The SV approach is thus not only theoretically well-founded, but also superior in a practical application.
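As a rough, hedged sketch of the kind of comparison described (not the authors' original experiment or data), the following Python snippet contrasts an SVM with a Gaussian kernel against a simple RBF network whose centres come from k-means. Scikit-learn's digits dataset stands in for the US Postal Service set, a logistic-regression output layer replaces backpropagation, and all parameter choices are assumptions.

```python
# Illustrative comparison only: SVM with Gaussian kernel vs. k-means-based RBF net.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# SVM with Gaussian kernel: the support vectors play the role of RBF centres.
svm = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

# "Classical" RBF network: k-means centres, Gaussian activations, linear output layer.
centres = KMeans(n_clusters=50, random_state=0).fit(X_train).cluster_centers_

def rbf_features(X, centres, width=30.0):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

clf = LogisticRegression(max_iter=1000).fit(rbf_features(X_train, centres), y_train)

print("SVM (RBF kernel):", accuracy_score(y_test, svm.predict(X_test)))
print("k-means RBF net :", accuracy_score(y_test, clf.predict(rbf_features(X_test, centres))))
```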
Abstract:
The computation of a piecewise smooth function that approximates a finite set of data points may be decomposed into two decoupled tasks: first, the computation of the locally smooth models, and hence the segmentation of the data into classes consisting of the sets of points best approximated by each model; and second, the computation of the normalized discriminant functions for each induced class. The approximating function may then be computed as the optimal estimator with respect to this measure field. We give an efficient procedure for effecting both computations, and for determining the optimal number of components.
Abstract:
Support Vector Machines Regression (SVMR) is a regression technique recently introduced by V. Vapnik and his collaborators (Vapnik, 1995; Vapnik, Golowich and Smola, 1996). In SVMR the goodness of fit is measured not by the usual quadratic loss function (the mean square error), but by a different loss function called Vapnik's ε-insensitive loss function, which is similar to the "robust" loss functions introduced by Huber (Huber, 1981). The quadratic loss function is well justified under the assumption of Gaussian additive noise. However, the noise model underlying the choice of Vapnik's loss function is less clear. In this paper the use of Vapnik's loss function is shown to be equivalent to a model of additive Gaussian noise in which the variance and mean of the Gaussian are random variables; the probability distributions of the variance and mean are stated explicitly. While this work is presented in the framework of SVMR, it can be extended to justify non-quadratic loss functions in any Maximum Likelihood or Maximum A Posteriori approach. It applies not only to Vapnik's loss function, but to a much broader class of loss functions.
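To make the contrast concrete, here is a small, purely illustrative Python sketch of Vapnik's ε-insensitive loss next to the quadratic loss; the ε value and the sample residuals are arbitrary.

```python
# Minimal sketch: epsilon-insensitive loss vs. squared loss on a few residuals.
import numpy as np

def epsilon_insensitive_loss(y_true, y_pred, eps=0.1):
    """max(0, |y - f(x)| - eps): residuals inside the epsilon tube cost nothing,
    larger residuals grow linearly."""
    return np.maximum(0.0, np.abs(y_true - y_pred) - eps)

def squared_loss(y_true, y_pred):
    """Quadratic loss implied by additive Gaussian noise with fixed variance."""
    return (y_true - y_pred) ** 2

residuals = np.array([-0.3, -0.05, 0.0, 0.08, 0.5])
print(epsilon_insensitive_loss(residuals, 0.0))  # [0.2  0.   0.   0.   0.4 ]
print(squared_loss(residuals, 0.0))              # [0.09 0.0025 0. 0.0064 0.25]
```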
Abstract:
The application of discriminant function analysis (DFA) is not a new idea in the study of tephrochronology. In this paper, DFA is applied to compositional datasets of two different types of tephras, from Mount Ruapehu in New Zealand and Mount Rainier in the USA. The canonical variables from the analysis are further investigated with a statistical methodology for change-point problems in order to gain a better understanding of the change in compositional pattern over time. Finally, a special case of segmented regression is proposed to model both the time of change and the change in pattern. This model can be used to estimate the age of unknown tephras using Bayesian statistical calibration.
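The following Python sketch only illustrates the general idea of segmented regression with an estimated change point (a grid search over a broken-stick model on synthetic data); it is not the specific model or the Bayesian calibration proposed in the paper.

```python
# Illustrative broken-stick regression with a grid search over change points.
import numpy as np

def fit_segmented(t, y, candidates):
    """Fit y ~ a + b*t + c*max(t - tau, 0) for each candidate change point tau
    and return the (rss, tau, coefficients) with the smallest residual sum of squares."""
    best = None
    for tau in candidates:
        X = np.column_stack([np.ones_like(t), t, np.maximum(t - tau, 0.0)])
        coef, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(rss[0]) if rss.size else float(((X @ coef - y) ** 2).sum())
        if best is None or rss < best[0]:
            best = (rss, tau, coef)
    return best

# Synthetic canonical-variable series with a change in slope at t = 60.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 100.0, 80)
y = 1.0 + 0.02 * t + 0.08 * np.maximum(t - 60.0, 0.0) + rng.normal(0.0, 0.1, t.size)

rss, tau_hat, coef = fit_segmented(t, y, candidates=t[5:-5])
print("estimated change point:", round(tau_hat, 1))
```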
Abstract:
Sediment composition is mainly controlled by the nature of the source rock(s) and by chemical (weathering) and physical processes (mechanical crushing, abrasion, hydrodynamic sorting) during alteration and transport. Although the factors controlling these processes are conceptually well understood, detailed quantification of the compositional changes induced by a single process is rare, as are examples where the effects of several processes can be distinguished. The present study was designed to characterize the role of mechanical crushing and sorting in the absence of chemical weathering. Twenty sediment samples were taken from Alpine glaciers that erode almost pure granitoid lithologies. For each sample, 11 grain-size fractions from granules to clay (ø grades <-1 to >9) were separated, and each fraction was analysed for its chemical composition. The presence of clear steps in the box-plots of all parts (in adequate ilr and clr scales) against ø is assumed to be explained by typical crystal size ranges for the relevant mineral phases. These scatter plots and the biplot suggest splitting the full grain-size range into three groups: coarser than ø=4 (comparatively rich in SiO2, Na2O, K2O and Al2O3, and dominated by "felsic" minerals like quartz and feldspar), finer than ø=8 (comparatively rich in TiO2, MnO, MgO and Fe2O3, mostly related to "mafic" sheet silicates like biotite and chlorite), and intermediate grain sizes (4≤ø<8; comparatively rich in P2O5 and CaO, related to apatite and some feldspar). To further test the absence of chemical weathering, the observed compositions were regressed against three explanatory variables: a trend on grain size in the ø scale, a step function for ø≥4, and another for ø≥8. The original hypothesis was that the trend could be identified with weathering effects, whereas each step function would highlight those minerals whose largest characteristic size falls at its lower end. Results suggest that this assumption is reasonable for the step functions, but that besides weathering, other factors (the different mechanical behaviour of minerals) also make an important contribution to the trend. Key words: sediment, geochemistry, grain size, regression, step function
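As a minimal sketch of the regression design just described (not the authors' actual data or analysis), the following Python snippet builds a design matrix with a linear trend in ø and two step functions at ø≥4 and ø≥8, and fits it by ordinary least squares to a synthetic clr-scale response; every number is invented.

```python
# Sketch of the design matrix: trend in phi plus step indicators at phi>=4 and phi>=8.
import numpy as np

phi = np.arange(-1.0, 10.0)             # 11 grain-size classes in phi units
step4 = (phi >= 4).astype(float)        # indicator: finer than phi = 4
step8 = (phi >= 8).astype(float)        # indicator: finer than phi = 8

# Synthetic clr-scale response: weak trend plus two steps, plus noise.
rng = np.random.default_rng(1)
y = 0.05 * phi - 0.8 * step4 - 0.6 * step8 + rng.normal(0.0, 0.05, phi.size)

X = np.column_stack([np.ones_like(phi), phi, step4, step8])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("intercept, trend, step(phi>=4), step(phi>=8):", np.round(coef, 3))
```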
Abstract:
In CoDaWork'05 we presented an application of discriminant function analysis (DFA) to four different compositional datasets and modelled the first canonical variable using a segmented regression model based solely on an observation about the scatter plots. In this paper, multiple linear regressions are applied to different datasets to confirm the validity of our proposed model. In addition to dating the unknown tephras by calibration, as discussed previously, another method is proposed for mapping the unknown tephras onto samples of the reference set, or onto missing samples between consecutive reference samples. The application of these methodologies is demonstrated with both simulated and real datasets. This newly proposed methodology provides an alternative, more acceptable approach for geologists, as their focus is on matching the unknown tephra with relevant eruptive events rather than estimating its age. Key words: Tephrochronology; Segmented regression
Abstract:
This work extends previously developed research concerning the use of local model predictive control in differential-drive mobile robots. Experimental results are presented as a way to improve the methodology by considering aspects such as trajectory accuracy and time performance; in this sense, the cost function and the prediction horizon are important aspects to be considered. The aim of the present work is to test the control method by measuring trajectory-tracking accuracy and time performance. Moreover, strategies for integration with the perception system and path planning are briefly introduced; in this sense, monocular image data can be used to plan safe trajectories using goal-attraction potential fields.
Abstract:
This paper presents a control strategy for blood glucose (BG) level regulation in type 1 diabetic patients. To design the controller, a model-based predictive control scheme has been applied to a newly developed diabetic patient model. The controller is provided with a feedforward loop to improve meal compensation, a gain-scheduling scheme to account for different BG levels, and an asymmetric cost function to reduce hypoglycemic risk. A simulation environment that has been approved for testing of artificial pancreas control algorithms has been used to test the controller. The simulation results show good controller performance in fasting conditions and meal disturbance rejection, and robustness against model-patient mismatch and errors in meal estimation.
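As a hedged illustration of what an asymmetric cost function of this kind can look like (the paper's actual cost, weights and target are not given in the abstract), consider the following Python sketch:

```python
# Illustrative asymmetric stage cost: hypoglycaemia is penalised more heavily
# than hyperglycaemia. Target and weights are invented, not taken from the paper.
def asymmetric_glucose_cost(bg, target=110.0, w_hypo=10.0, w_hyper=1.0):
    """Quadratic penalty with a larger weight when blood glucose (mg/dL)
    falls below the target than when it rises above it."""
    error = bg - target
    weight = w_hypo if error < 0 else w_hyper
    return weight * error ** 2

# Equal-sized deviations are costed very differently:
print(asymmetric_glucose_cost(80.0))   # 10 * (-30)^2 = 9000
print(asymmetric_glucose_cost(140.0))  # 1 * (30)^2   = 900
```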
Abstract:
Cystic Fibrosis is the most frequent autosomal recessive disease among Caucasians. The incidence of the disease in Colombia is unknown, but research by the Universidad del Rosario group indicates that it could be relatively high. Objective: To determine the incidence of Cystic Fibrosis in a sample of newborns from the city of Bogotá. Methods: 8,297 umbilical cord blood samples were analysed and three newborn screening protocols were compared: TIR/TIR, TIR/DNA and TIR/DNA/TIR. Results: This study shows an incidence of 1 in 8,297 affected newborns in the sample analysed. Conclusions: Given the relatively high incidence demonstrated in Bogotá, the implementation of Newborn Screening for Cystic Fibrosis in Colombia is justified.
Abstract:
The possible association between the development of atrial fibrillation (AF) and the presence of Chagas cardiomyopathy in a population carrying cardiac stimulation devices has not been described. We present a retrospective cohort study conducted at the FCI that compiles the main clinical characteristics of a population of patients with cardiomyopathy of varied aetiology who carry cardiac devices, with the aim of evaluating the incidence of AF in the presence of cardiomyopathy of Chagasic and non-Chagasic origin. To date there is no institutional or regional database containing the variables analysed. During the 5 months of database construction, 99 research subjects were included. The implanted devices comprised 42 dual-chamber pacemakers, 39 dual-chamber cardioverter-defibrillators, 6 cardioverter-defibrillators with cardiac resynchronisation function, 2 cardiac resynchronisation devices without defibrillator function, and 7 single-chamber cardioverter-defibrillators. Of the 99 subjects, 8 outcomes (de novo AF) occurred, and only 1 of these belonged to the group of patients with Chagasic cardiomyopathy. This small number of outcomes did not allow a Cox regression model or the other statistical analyses proposed in the initial protocol to be developed, owing to the low number of cases and poor statistical power. This difficulty is inherent to the nature of the problem under study and to the short follow-up time. It therefore cannot be established whether there is a relationship between positive serology for T. cruzi infection and the presence of de novo AF.
Abstract:
In the Metropolitan Area of Mexico City, most urban trips are made by semi-formal public transportation: small and medium-capacity vehicles operated by small private enterprises under a concession scheme. This kind of public transportation has been playing a major role in the Mexican capital. On one hand, it has been one of the conditions that made urbanization possible. On the other hand, despite its countless deficiencies, public transportation has long allowed the whole population to move within this huge metropolis. However, that important integrative function has now reached its limits in the most recent suburbs of the city, where a new mode of urbanization is taking place, based on the massive production of very large gated social housing settlements. Here, public transportation tends to become a factor of exclusion, and households face significant difficulties in their daily mobility.
Abstract:
Objectives: To determine the prevalence of and factors associated with the development of autoimmune hypothyroidism (AH) in a cohort of patients with systemic lupus erythematosus (SLE), and to analyse the current evidence on the prevalence and impact of autoimmune thyroid disease and thyroid autoimmunity in patients with SLE. Methods: This was a two-step study. First, a total of 376 patients with SLE were systematically assessed for: 1) confirmed AH, 2) positivity for thyroperoxidase/thyroglobulin antibodies (TPOAb/TgAb) without hypothyroidism, 3) non-autoimmune hypothyroidism, and 4) SLE without hypothyroidism or TPOAb/TgAb positivity. Multivariate models and classification and regression trees were built to analyse the data. Second, the current evidence was assessed through a systematic literature review (SLR). PRISMA guidelines were followed for searching the PubMed, Scopus, SciELO and Virtual Health Library databases. Results: In our cohort, the prevalence of confirmed AH was 12% (Group 1). However, the frequency of TPOAb and TgAb positivity was 21% and 10%, respectively (Group 2). SLE patients without AH, non-autoimmune hypothyroidism or TPOAb/TgAb positivity constituted 40% of the cohort. Patients with confirmed AH were significantly older and had a later disease onset. Smoking (adjusted OR 6.93, 95% CI 1.98-28.54, p = 0.004), the presence of Sjögren's syndrome (SS) (adjusted OR 23.2, 95% CI 1.89-359.53, p = 0.015) and positivity for anti-cyclic citrullinated peptide antibodies (anti-CCP) (adjusted OR 10.35, 95% CI 1.04-121.26, p = 0.047) were associated with the coexistence of SLE and AH, adjusted for gender and disease duration. Smoking and SS were confirmed as predictive factors for SLE-AH (AUC of the CART model = 0.72). In the SLR, the prevalence of autoimmune thyroid disease (AITD) in SLE ranged from 1% to 60%. The factors associated with this polyautoimmunity were female gender, older age, smoking, positivity for certain antibodies, SS, and articular and cutaneous involvement. Conclusions: AITD is frequent in patients with SLE and does not affect SLE severity. The risk factors identified will help clinicians in the search for AITD. Our results should encourage smoking cessation policies for patients with SLE.
Abstract:
This paper proposes a simple Ordered Probit model to analyse the monetary policy reaction function of the Colombian Central Bank. There is evidence that the reaction function is asymmetric, in the sense that the Bank increases the Bank rate when the gap between observed inflation and the inflation target (lagged once) is positive, but it does not reduce the Bank rate when the gap is negative. This behaviour suggests that the Bank is more interested in fulfilling the announced inflation target than in reducing inflation excessively. The forecasting performance of the model, both within and beyond the estimation period, appears to be particularly good.
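As a purely illustrative sketch of the model class (not the paper's specification or data), the following Python code fits an ordered probit by maximum likelihood to a synthetic three-way rate decision (cut/hold/hike) driven by an inflation gap; all variable names, codings and values are invented.

```python
# Minimal ordered-probit sketch on synthetic policy-rate decisions.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(42)
gap = rng.normal(0.0, 1.0, 200)                   # inflation minus target, lagged once
latent = 1.2 * gap + rng.normal(0.0, 1.0, 200)
decision = np.digitize(latent, bins=[-0.5, 0.5])  # 0 = cut, 1 = hold, 2 = hike

def neg_loglik(params, x, y):
    beta, c1, gap_c = params
    c2 = c1 + np.exp(gap_c)                       # keep the cut points ordered
    cuts = np.array([-np.inf, c1, c2, np.inf])
    upper = norm.cdf(cuts[y + 1] - beta * x)
    lower = norm.cdf(cuts[y] - beta * x)
    return -np.sum(np.log(np.clip(upper - lower, 1e-12, None)))

fit = minimize(neg_loglik, x0=[0.0, -0.5, 0.0], args=(gap, decision))
print("slope on inflation gap:", round(fit.x[0], 2))
```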