61 results for Hazard-Based Models
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
A parts-based model is a parametrization of an object class using a collection of landmarks that follow the object structure. Matching parts-based models is one of the problems where pairwise Conditional Random Fields have been successfully applied. The main reason for their effectiveness is tractable inference and learning, due to the simplicity of the involved graphs, usually trees. However, these models do not consider possible patterns of statistics among sets of landmarks, and thus they suffer from using too myopic information. To overcome this limitation, we propose a novel structure based on a hierarchical Conditional Random Field, which we explain in the first part of this thesis. We build a hierarchy of combinations of landmarks, where matching is performed taking the whole hierarchy into account. To preserve tractable inference, we effectively sample the label set. We test our method on facial feature selection and human pose estimation on two challenging datasets: Buffy and MultiPIE. In the second part of this thesis, we present a novel approach to multiple kernel combination that relies on stacked classification. This method can be used to evaluate the landmarks of the parts-based model approach. Our method is based on combining the responses of a set of independent classifiers for each individual kernel. Unlike earlier approaches that linearly combine kernel responses, our approach uses them as inputs to another set of classifiers. We show that we outperform state-of-the-art methods on most of the standard benchmark datasets.
Abstract:
In this paper we highlight the importance of operational costs in explaining economic growth and analyze how the industrial structure affects the growth rate of the economy. If there is monopolistic competition only in an intermediate goods sector, then production growth coincides with consumption growth. Moreover, the pattern of growth depends on the particular form of the operational cost. If the monopolistically competitive sector is the final goods sector, then per capita production is constant but per capita effective consumption or welfare grows. Finally, we modify again the industrial structure of the economy and show an economy with two different growth speeds, one for production and another for effective consumption. Thus, both the operational cost and the particular structure of the sector that produces the final goods ultimately determine the pattern of growth.
Abstract:
Background: There is growing evidence that traffic-related air pollution reduces birth weight. Improving exposure assessment is a key issue to advance in this research area. Objective: We investigated the effect of prenatal exposure to traffic-related air pollution via geographic information system (GIS) models on birth weight in 570 newborns from the INMA (Environment and Childhood) Sabadell cohort. Methods: We estimated pregnancy and trimester-specific exposures to nitrogen dioxide and aromatic hydrocarbons [benzene, toluene, ethylbenzene, m/p-xylene, and o-xylene (BTEX)] by using temporally adjusted land-use regression (LUR) models. We built models for NO2 and BTEX using four and three 1-week measurement campaigns, respectively, at 57 locations. We assessed the relationship between prenatal air pollution exposure and birth weight with linear regression models. We performed sensitivity analyses considering time spent at home and time spent in nonresidential outdoor environments during pregnancy. Results: In the overall cohort, neither NO2 nor BTEX exposure was significantly associated with birth weight in any of the exposure periods. When considering only women who spent < 2 hr/day in nonresidential outdoor environments, the estimated reductions in birth weight associated with an interquartile range increase in BTEX exposure levels were 77 g [95% confidence interval (CI), 7–146 g] and 102 g (95% CI, 28–176 g) for exposures during the whole pregnancy and the second trimester, respectively. The effects of NO2 exposure were less clear in this subset. Conclusions: The association of BTEX with reduced birth weight underscores the negative role of vehicle exhaust pollutants in reproductive health. Time–activity patterns during pregnancy complement GIS-based models in exposure assessment.
Abstract:
Alpine tree-line ecotones are characterized by marked changes at small spatial scales that may result in a variety of physiognomies. A set of alternative individual-based models was tested with data from four contrasting Pinus uncinata ecotones in the central Spanish Pyrenees to reveal the minimal subset of processes required for tree-line formation. A Bayesian approach combined with Markov chain Monte Carlo methods was employed to obtain the posterior distribution of model parameters, allowing the use of model selection procedures. The main features of real tree lines emerged only in models considering nonlinear responses in individual rates of growth or mortality with respect to the altitudinal gradient. Variation in tree-line physiognomy reflected mainly changes in the relative importance of these nonlinear responses, while other processes, such as dispersal limitation and facilitation, played a secondary role. Different nonlinear responses also determined the presence or absence of krummholz, in agreement with recent findings highlighting a different response of diffuse and abrupt or krummholz tree lines to climate change. The method presented here can be widely applied in individual-based simulation models and will turn model selection and evaluation in this type of model into a more transparent, effective, and efficient exercise.
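The Bayesian fitting step described above can be sketched with a minimal random-walk Metropolis sampler. This is a generic illustration, not the paper's actual model: the log-posterior, proposal step size, and chain length below are illustrative placeholders.

```python
import math
import random

def metropolis(log_post, x0, n_steps=1000, step=0.5, seed=1):
    """Random-walk Metropolis: draw samples from a distribution given
    only its (unnormalized) log density, as in MCMC posterior fitting.

    log_post: callable returning the log posterior density at a point
    x0:       starting value of the (scalar) parameter
    """
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)          # symmetric proposal
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio)
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return samples
```

With a standard-normal log density as target, the chain's sample mean and variance should approach 0 and 1 as the chain grows.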
Abstract:
We use a two-person 3-stage game to investigate whether people choose to punish or reward another player by sacrificing money to increase or decrease the other person’s payoff. One player sends a message indicating an intended play, which is either favorable or unfavorable to the other player in the game. After the message, the sender and the receiver play a simultaneous 2x2 game. A deceptive message may be made, in an effort to induce the receiver to make a play favorable to the sender. Our focus is on whether receivers’ rates of monetary sacrifice depend on the process and the perceived sender’s intention, as is suggested by the literature on deception and procedural satisfaction. Models such as Rabin (1993), Sen (1997), and Charness and Rabin (1999) also permit rates of sacrifice to be sensitive to the sender’s perceived intention, while outcome-based models such as Fehr and Schmidt (1999) and Bolton and Ockenfels (1997) predict otherwise. We find that deception substantially increases the punishment rate as a response to an action that is unfavorable to the receiver. We also find that a small but significant percentage of subjects choose to reward a favorable action choice made by the sender.
Abstract:
Customer choice behavior, such as 'buy-up' and 'buy-down', is an important phenomenon in a wide range of industries. Yet there are few models or methodologies available to exploit this phenomenon within yield management systems. We make some progress on filling this void. Specifically, we develop a model of yield management in which the buyers' behavior is modeled explicitly using a multinomial logit model of demand. The control problem is to decide which subset of fare classes to offer at each point in time. The set of open fare classes then affects the purchase probabilities for each class. We formulate a dynamic program to determine the optimal control policy and show that it reduces to a dynamic nested allocation policy. Thus, the optimal choice-based policy can easily be implemented in reservation systems that use nested allocation controls. We also develop an estimation procedure for our model based on the expectation-maximization (EM) method that jointly estimates arrival rates and choice model parameters when no-purchase outcomes are unobservable. Numerical results show that this combined optimization-estimation approach may significantly improve revenue performance relative to traditional leg-based models that do not account for choice behavior.
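The multinomial logit choice mechanism described above, where the set of open fare classes determines each class's purchase probability, can be sketched as follows. The class names and utility values are illustrative, not taken from the paper.

```python
import math

def purchase_probabilities(open_classes, utilities, u_nopurchase=0.0):
    """Multinomial-logit purchase probabilities over the offered set.

    open_classes: fare classes currently offered for sale
    utilities:    dict mapping class name -> deterministic utility
    The no-purchase alternative absorbs the remaining probability mass,
    so the returned probabilities sum to less than 1.
    """
    weights = {c: math.exp(utilities[c]) for c in open_classes}
    denom = math.exp(u_nopurchase) + sum(weights.values())
    return {c: w / denom for c, w in weights.items()}
```

Closing a class removes its weight from the denominator, which shifts demand to the remaining open classes and to no-purchase — the 'buy-up' / 'buy-down' effect the model captures.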
Abstract:
Simulation is a useful tool in cardiac SPECT to assess quantification algorithms. However, simple equation-based models are limited in their ability to simulate realistic heart motion and perfusion. We present a numerical dynamic model of the left ventricle, which allows us to simulate normal and anomalous cardiac cycles, as well as perfusion defects. Bicubic splines were fitted to a number of control points to represent endocardial and epicardial surfaces of the left ventricle. A transformation from each point on the surface to a template of activity was made to represent the myocardial perfusion. Geometry-based and patient-based simulations were performed to illustrate this model. Geometry-based simulations modeled (1) a normal patient, (2) a well-perfused patient with abnormal regional function, (3) an ischaemic patient with abnormal regional function, and (4) a patient study including tracer kinetics. Patient-based simulation consisted of a left ventricle including a realistic shape and motion obtained from a magnetic resonance study. We conclude that this model has the potential to study the influence of several physical parameters and the left ventricle contraction in myocardial perfusion SPECT and gated-SPECT studies.
Abstract:
This paper introduces local distance-based generalized linear models. These models extend (weighted) distance-based linear models, first with the generalized linear model concept, then by localizing. Distances between individuals are the only predictor information needed to fit these models. Therefore, they are applicable to mixed (qualitative and quantitative) explanatory variables or when the regressor is of functional type. Models can be fitted and analysed with the R package dbstats, which implements several distance-based prediction methods.
Abstract:
Gas sensing systems based on low-cost chemical sensor arrays are gaining interest for the analysis of multicomponent gas mixtures. These sensors show different problems, e.g., nonlinearities and slow time-response, which can be partially solved by digital signal processing. Our approach is based on building a nonlinear inverse dynamic system. Results for different identification techniques, including artificial neural networks and Wiener series, are compared in terms of measurement accuracy.
Abstract:
In October 1998, Hurricane Mitch triggered numerous landslides (mainly debris flows) in Honduras and Nicaragua, resulting in a high death toll and in considerable damage to property. The potential application of relatively simple and affordable spatial prediction models for landslide hazard mapping in developing countries was studied. Our attention was focused on a region in NW Nicaragua, one of the most severely hit places during the Mitch event. A landslide map was obtained at 1:10 000 scale in a Geographic Information System (GIS) environment from the interpretation of aerial photographs and detailed field work. In this map the terrain failure zones were distinguished from the areas within the reach of the mobilized materials. A Digital Elevation Model (DEM) with a pixel size of 20 m × 20 m was also employed in the study area. A comparative analysis was carried out between the terrain failures caused by Hurricane Mitch and a selection of four terrain factors extracted from the DEM that contributed to the terrain instability. Land propensity to failure was determined with the aid of a bivariate analysis and GIS tools in a terrain failure susceptibility map. In order to estimate the areas that could be affected by the path or deposition of the mobilized materials, we considered the fact that under intense rainfall events debris flows tend to travel long distances following the maximum slope and merging with the drainage network. Using the TauDEM extension for ArcGIS software we automatically generated flow lines following the maximum slope in the DEM, starting from the areas prone to failure in the terrain failure susceptibility map. The areas crossed by the flow lines from each terrain failure susceptibility class correspond to the runout susceptibility classes represented in a runout susceptibility map. The study of terrain failure and runout susceptibility enabled us to obtain a spatial prediction for landslides, which could contribute to landslide risk mitigation.
Abstract:
The prediction of rockfall travel distance below a rock cliff is an indispensable activity in rockfall susceptibility, hazard and risk assessment. Although the size of the detached rock mass may differ considerably at each specific rock cliff, small rockfall (<100 m3) is the most frequent process. Empirical models may provide us with suitable information for predicting the travel distance of small rockfalls over an extensive area at a medium scale (1:100 000–1:25 000). "Solà d'Andorra la Vella" is a rocky slope located close to the town of Andorra la Vella, where the government has been documenting rockfalls since 1999. This documentation consists of mapping the release point and the individual fallen blocks immediately after the event. The documentation of historical rockfalls by morphological analysis, eye-witness accounts and historical images serves to increase the available information. In total, data from twenty small rockfalls have been gathered, comprising around a hundred individual fallen rock blocks. The data acquired have been used to check the reliability of the main empirical models widely adopted (reach and shadow angle models) and to analyse the influence of parameters affecting the travel distance (rockfall size, height of fall along the rock cliff and volume of the individual fallen rock block). For predicting travel distances in maps at medium scales, a method has been proposed based on the "reach probability" concept. The accuracy of the results has been tested against the line enclosing the farthest fallen boulders, which represents the maximum travel distance of past rockfalls. The paper concludes with a discussion of the application of both empirical models to other study areas.
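The reach-angle idea underlying the empirical models above relates fall height and horizontal travel distance through a single angle, and can be sketched in a few lines. The 45° angle used in the test is purely illustrative; the paper calibrates such angles from field data.

```python
import math

def max_travel_distance(fall_height_m, reach_angle_deg):
    """Horizontal travel distance implied by an empirical reach-angle
    model: tan(angle) = fall height / horizontal reach, so the reach
    is fall height divided by the tangent of the angle.

    fall_height_m:   vertical drop from release point to deposit (m)
    reach_angle_deg: empirical reach (or shadow) angle in degrees
    """
    return fall_height_m / math.tan(math.radians(reach_angle_deg))
```

Smaller calibrated angles produce longer predicted runout, which is why the choice of angle dominates the extent of the resulting susceptibility zones.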
Abstract:
We propose a class of models of social network formation based on a mathematical abstraction of the concept of social distance. Social distance attachment is represented by the tendency of peers to establish acquaintances via a decreasing function of the relative distance in a representative social space. We derive analytical results (corroborated by extensive numerical simulations), showing that the model reproduces the main statistical characteristics of real social networks: large clustering coefficient, positive degree correlations, and the emergence of a hierarchy of communities. The model is confronted with the social network formed by people who share confidential information using the Pretty Good Privacy (PGP) encryption algorithm, the so-called web of trust of PGP.
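A toy version of the social-distance-attachment rule described above might look like this. The one-dimensional social space, the particular decreasing connection function, and the parameter values are all illustrative assumptions, not the paper's specification.

```python
import random

def social_distance_graph(positions, b=2.0, alpha=3.0, seed=0):
    """Toy social-distance-attachment network: nodes i and j connect
    with probability r(d) = 1 / (1 + (d / b)**alpha), a decreasing
    function of their distance d in the social space.

    positions: 1-D coordinates of nodes in the social space
    b, alpha:  characteristic scale and decay of the connection rule
    """
    rng = random.Random(seed)
    n = len(positions)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            d = abs(positions[i] - positions[j])
            if rng.random() < 1.0 / (1.0 + (d / b) ** alpha):
                edges.append((i, j))
    return edges
```

Because the connection probability decays with distance, socially close nodes form densely linked groups, which is the mechanism behind the clustering and community structure the model reproduces.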
Abstract:
Objective: We used demographic and clinical data to design practical classification models for prediction of neurocognitive impairment (NCI) in people with HIV infection. Methods: The study population comprised 331 HIV-infected patients with available demographic, clinical, and neurocognitive data collected using a comprehensive battery of neuropsychological tests. Classification and regression trees (CART) were developed to obtain detailed and reliable models to predict NCI. Following a practical clinical approach, NCI was considered the main variable for study outcomes, and analyses were performed separately in treatment-naïve and treatment-experienced patients. Results: The study sample comprised 52 treatment-naïve and 279 experienced patients. In the first group, the variables identified as better predictors of NCI were CD4 cell count and age (correct classification [CC]: 79.6%, 3 final nodes). In treatment-experienced patients, the variables most closely related to NCI were years of education, nadir CD4 cell count, central nervous system penetration-effectiveness score, age, employment status, and confounding comorbidities (CC: 82.1%, 7 final nodes). In patients with an undetectable viral load and no comorbidities, we obtained a fairly accurate model in which the main variables were nadir CD4 cell count, current CD4 cell count, time on current treatment, and past highest viral load (CC: 88%, 6 final nodes). Conclusion: Practical classification models to predict NCI in HIV infection can be obtained using demographic and clinical variables. An approach based on CART analyses may facilitate screening for HIV-associated neurocognitive disorders and complement clinical information about risk and protective factors for NCI in HIV-infected patients.
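The core CART step — exhaustively choosing the impurity-minimising split on a single predictor — can be sketched as follows. The toy CD4-like values in the test are illustrative, not study data.

```python
def gini(labels):
    """Gini impurity of a set of binary labels (1 = impaired, 0 = not)."""
    n = len(labels)
    if n == 0:
        return 0.0
    p1 = sum(labels) / n
    return 1.0 - p1 ** 2 - (1.0 - p1) ** 2

def best_split(values, labels):
    """Pick the threshold on one numeric predictor (e.g. a nadir CD4
    count) that minimises the size-weighted Gini impurity of the two
    resulting child nodes -- the basic split search in a CART tree.

    Returns (best_threshold, weighted_impurity).
    """
    best = (None, float("inf"))
    for t in sorted(set(values)):
        left = [y for x, y in zip(values, labels) if x <= t]
        right = [y for x, y in zip(values, labels) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best[1]:
            best = (t, score)
    return best
```

Growing a full tree repeats this search recursively on each child node over all candidate predictors, which is how the multi-node models reported in the abstract are built.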