918 results for Models in art


Relevance: 100.00%

Abstract:

The basic hedonic hypothesis is that goods are valued for their utility-bearing characteristics and not for the good itself. Each attribute can be evaluated by consumers when making a purchasing decision, and an implicit price can be identified for each of them. Thus, the observed price of a certain good can be analyzed as the sum of the implicit prices paid for each quality attribute. The literature has reported hedonic model estimates for wines, which are excellent examples of differentiated goods worldwide. The impact of different wine attributes (intrinsic or extrinsic) on consumers' willingness to pay has been analyzed with dissimilar results. Wines coming from "New World" producers seem to be appreciated for different attributes than wines produced in the "Old World". Moreover, "Old World" and "New World" consumers seem to value wine characteristics differently. To our knowledge, no cross-country analysis has dealt with "New World" wines in "Old World" countries, leaving an important gap in understanding the underlying attributes influencing buying decisions.
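As a minimal illustration of this hedonic decomposition, the sketch below regresses observed prices on a few quality attributes and reads each estimated coefficient as the implicit price of that attribute; the attribute names and figures are hypothetical and not taken from the study.

```python
import numpy as np

# Hypothetical wine data: each row is [vintage_age, expert_score, is_new_world]
X = np.array([
    [3, 88, 1],
    [5, 92, 0],
    [2, 85, 1],
    [7, 95, 0],
    [4, 90, 1],
    [6, 93, 0],
], dtype=float)
prices = np.array([12.0, 25.0, 9.0, 40.0, 15.0, 30.0])

# Hedonic model: price = b0 + sum_k (implicit_price_k * attribute_k) + error
design = np.column_stack([np.ones(len(prices)), X])
coeffs, *_ = np.linalg.lstsq(design, prices, rcond=None)

intercept, implicit_prices = coeffs[0], coeffs[1:]
for name, b in zip(["vintage_age", "expert_score", "is_new_world"], implicit_prices):
    print(f"implicit price of {name}: {b:.2f}")
```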

Relevance: 100.00%

Abstract:

Our work is concerned with user modelling in open environments. Our proposal contributes to advances in user modelling in open environments through Agent Technology, in what has been called the Smart User Model (SUM). Our research contains a holistic study of user modelling in several research areas related to users. We have developed a conceptualization of user modelling by means of examples from a broad range of research areas, with the aim of improving our understanding of user modelling and its role in the next generation of open and distributed service environments. This report is organized as follows. In Chapter 1 we introduce our motivation and objectives. Chapters 2 to 5 then provide the state of the art on user modelling: in Chapter 2 we give the main definitions of the elements described in the report; in Chapter 3 we present a historical perspective on user models; in Chapter 4 we review user models from the perspective of different research areas, with special emphasis on the give-and-take relationship between Agent Technology and user modelling; and in Chapter 5 we describe the main challenges that, from our point of view, need to be tackled by researchers wanting to contribute to advances in user modelling. The study of the state of the art is followed by exploratory work in Chapter 6, where we define a SUM and a methodology for dealing with it, and present some case studies to illustrate the methodology. Finally, we present the thesis proposal to continue the work, together with its corresponding work schedule and timeline.

Relevance: 100.00%

Abstract:

Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems, ranging from the prospection of new resources to the sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of these approaches is the computational cost associated with performing complex flow simulations for each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run only for this subset, on the basis of which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model, which then corrects the remaining approximate responses and predicts the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational cost and leads to a more accurate and more robust uncertainty propagation. The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between the proxy and exact flow models.
In the second part of the thesis, this strategy is formalized in a rigorous mathematical framework by defining a regression model between functional responses. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid, and the results show that the error model allows a strong reduction of the computational cost while correctly estimating the uncertainty. Moreover, for each approximate response, the error model provides a prediction of the exact response, opening the door to many applications. The concept of a functional error model is therefore relevant not only for uncertainty propagation but also, and maybe even more so, for Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from low acceptance rates in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, the approximate flow model coupled with an error model serves as this preliminary evaluation in the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of 1.5 to 3 with respect to a classical one-stage MCMC implementation. An open question remains: how to choose the size of the learning set and how to identify the realizations that optimize the construction of the error model. This requires devising an iterative strategy, such that, as new flow simulations are performed, the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, in which the methodology is applied to a problem of saline intrusion in a coastal aquifer.
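The central step, learning from a small subset the relationship between proxy and exact responses, can be sketched roughly as follows. Ordinary PCA stands in for the functional PCA used in the thesis, the learning subset is chosen at random rather than by the distance kernel method, and all curves are synthetic; the sketch only illustrates the idea of correcting the remaining proxy responses with a regression-based error model.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_real, n_times, n_learn = 200, 50, 20   # realizations, time steps, learning subset
t = np.linspace(0, 1, n_times)

# Synthetic "exact" breakthrough curves and biased, smoothed "proxy" curves
scale = rng.uniform(0.5, 1.5, size=(n_real, 1))
exact = scale * np.exp(-5 * (t - 0.5) ** 2)          # stand-in for the exact flow response
proxy = 0.8 * scale * np.exp(-4 * (t - 0.45) ** 2)   # stand-in for the approximate response

# Learning set: realizations for which the exact model has actually been run
idx = rng.choice(n_real, size=n_learn, replace=False)

# Reduce the proxy curves to a few components, then regress the exact curves on them
pca = PCA(n_components=3).fit(proxy[idx])
reg = LinearRegression().fit(pca.transform(proxy[idx]), exact[idx])

# Error-model prediction of the exact response for every realization
predicted_exact = reg.predict(pca.transform(proxy))
rmse = np.sqrt(np.mean((predicted_exact - exact) ** 2))
print(f"RMSE of corrected responses: {rmse:.3f}")
```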

Relevance: 100.00%

Abstract:

This essay examines the role of the notion of fiction in the domains of art and science. Essentially, I argue that "fiction" in this context is a category mistake (concept versus genre), and I believe this essay can succeed in "baking philosophical bread" by exposing a verbal dispute. I therefore suggest closing an entire philosophical debate. I present an account of the style of fictionalism put forward by Catherine Z. Elgin and Nelson Goodman (whether in the context of the arts or the sciences, we reach understanding through fictions in the form of "non-literal truths") and I explore the concept of fiction. I argue that representations (descriptive texts of all kinds, including models) are made up of fictional elements and faceted elements (with the exception of the possible or impossible ideal version, that is, in the mind of God, which would include only facets). Understanding cannot come from fiction, but rather from faceted elements ordered so as to create an understanding that generally leads to predictions, explanations and manipulations. I define facets as having organized characteristics, whereas fictions have disorganized characteristics. Full fiction is therefore, by definition, the expression of nothing, or, in the case of ideal languages (mathematics), the expression of contradiction. Fictions and facets belong to representations, which are themselves primitive. Descriptive texts are thus fictional by degree. Narratives that are highly fictional have a certain value (often playful) but always contain at least one facet. Ultimately, all representational activities should be considered unreal and incomplete, though sometimes connected to reality, that is, caught between a faceted realistic description and full fiction.

Relevance: 100.00%

Abstract:

The dissertation addresses the still unsolved challenges of source-based digital 3D reconstruction, visualisation and documentation in the domains of archaeology, art and architecture history. The emerging BIM methodology and the IFC exchange data format are changing the way of collaborating, visualising and documenting in the planning, construction and facility management process. The introduction and development of the Semantic Web (Web 3.0), spreading the idea of structured, formalised and linked data, offers semantically enriched human- and machine-readable data. In contrast to civil engineering and cultural heritage, academic object-oriented disciplines, such as archaeology, art and architecture history, are acting as outside spectators. Since the 1990s, it has been argued that a 3D model is not likely to be considered a scientific reconstruction unless it is grounded on accurate documentation and visualisation. However, such standards are still missing, and validation of the outcomes is not ensured. Meanwhile, the digital research data remain ephemeral and continue to fill growing digital cemeteries. This study therefore focuses on the evaluation of source-based digital 3D reconstructions and, especially, on uncertainty assessment for hypothetical reconstructions of destroyed or never-built artefacts according to scientific principles, making the models shareable and reusable by a potentially wide audience. The work initially focuses on terminology and on the definition of a workflow, especially as related to the classification and visualisation of uncertainty. The workflow is then applied to specific cases of 3D models uploaded to the DFG repository of the AI Mainz. In this way, the available methods of documenting, visualising and communicating uncertainty are analysed. In the end, this process leads to a validation or correction of the workflow and the initial assumptions, but also (when dealing with different hypotheses) to a better definition of the levels of uncertainty.
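As a purely hypothetical illustration of what a classification and visualisation of uncertainty might look like in practice, the sketch below tags reconstructed elements with an ordinal uncertainty level and maps each level to a display colour; the scale, element names and colours are invented and not taken from the dissertation or the DFG repository.

```python
# Ordinal uncertainty scale (1 = directly documented, 5 = pure conjecture) -- hypothetical
UNCERTAINTY_COLOURS = {1: "#1a9850", 2: "#91cf60", 3: "#ffffbf", 4: "#fc8d59", 5: "#d73027"}

# Hypothetical elements of a reconstructed building, each with a source-based level
elements = {
    "ground_floor_walls": 1,   # surveyed remains
    "window_tracery": 3,       # inferred from comparable buildings
    "roof_structure": 4,       # hypothetical, no surviving sources
    "tower_finial": 5,         # conjecture only
}

for name, level in elements.items():
    print(f"{name}: level {level}, colour {UNCERTAINTY_COLOURS[level]}")
```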

Relevance: 100.00%

Abstract:

In this paper, we compare three residuals to assess departures from the error assumptions as well as to detect outlying observations in log-Burr XII regression models with censored observations. These residuals can also be used for the log-logistic regression model, which is a special case of the log-Burr XII regression model. For different parameter settings, sample sizes and censoring percentages, various simulation studies are performed and the empirical distribution of each residual is displayed and compared with the standard normal distribution. These studies suggest that the residual analysis usually performed in normal linear regression models can be straightforwardly extended to the modified martingale-type residual in log-Burr XII regression models with censored data.
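For orientation, the standard martingale-type and deviance-type residuals for an observation i with censoring indicator \delta_i and fitted cumulative hazard \hat{\Lambda} are

\[
r_{M_i} = \delta_i - \hat{\Lambda}(t_i), \qquad
r_{D_i} = \operatorname{sign}(r_{M_i})\sqrt{-2\left[r_{M_i} + \delta_i \log\left(\delta_i - r_{M_i}\right)\right]},
\]

although the modified martingale-type residual studied in the paper may differ from these textbook forms in its details.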

Relevance: 100.00%

Abstract:

In this paper, we present various diagnostic methods for polyhazard models. Polyhazard models are a flexible family for fitting lifetime data. Their main advantage over single-hazard models, such as the Weibull and log-logistic models, is that they accommodate a wide variety of nonmonotone hazard shapes, such as bathtub and multimodal curves. Influence methods, such as the local influence and the total local influence of an individual, are derived, analyzed and discussed. The computation of the likelihood displacement and of the normal curvature in the local influence method is also discussed. Finally, an example with real data is given for illustration.
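A small numerical sketch of this flexibility, assuming a two-component poly-Weibull hazard chosen purely for illustration: the total hazard is the sum of the component hazards, and a decreasing early-failure component plus an increasing wear-out component already produces a bathtub shape.

```python
import numpy as np

def weibull_hazard(t, shape, scale):
    """Weibull hazard h(t) = (shape/scale) * (t/scale)**(shape - 1)."""
    return (shape / scale) * (t / scale) ** (shape - 1)

t = np.linspace(0.01, 5.0, 6)

# Polyhazard: h(t) = h1(t) + h2(t); shape < 1 is decreasing, shape > 1 is increasing
h1 = weibull_hazard(t, shape=0.5, scale=1.0)   # early-failure component
h2 = weibull_hazard(t, shape=3.0, scale=3.0)   # wear-out component
total = h1 + h2                                # bathtub-shaped total hazard

for ti, h_t in zip(t, total):
    print(f"t = {ti:4.2f}  h(t) = {h_t:6.3f}")
```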

Relevance: 100.00%

Abstract:

Baccharis dracunculifolia DC (Asteraceae), a native plant of Brazil commonly known as "Alecrim-do-campo", is widely used in folk medicine to treat inflammation, hepatic disorders and stomach ulcers, and it is the most important botanical source of Southeastern Brazilian propolis, known as green propolis. Its essential oil is composed of non-oxygenated and oxygenated terpenes. In this work, the effects of the essential oil obtained from the aerial parts of B. dracunculifolia on gastric ulcers were evaluated. The antiulcer assays were undertaken using the following protocols in rats: nonsteroidal anti-inflammatory drug (NSAID)-induced ulcer, ethanol-induced ulcer, stress-induced ulcer, and determination of gastric secretion using the ligated pylorus. Treatment with doses of 50, 250 and 500 mg/kg of B. dracunculifolia essential oil significantly diminished the lesion index, the total lesion area and the percentage of lesions in comparison with both positive and negative control groups. With regard to the gastric secretion model, a reduction of gastric juice volume and total acidity was observed, as well as an increase in gastric pH. No sign of toxicity was observed in the acute toxicity study. Considering these results, it is suggested that the essential oil of B. dracunculifolia could be a good therapeutic agent for the development of a new phytotherapeutic medicine for the treatment of gastric ulcer. Copyright (C) 2009 John Wiley & Sons, Ltd.

Relevance: 100.00%

Abstract:

A number of mathematical models have been used to describe percutaneous absorption kinetics. In general, most of these models have used either diffusion-based or compartmental equations. The object of any mathematical model is to a) represent the processes associated with absorption accurately, b) describe/summarize experimental data with parametric equations or moments, and c) predict kinetics under varying conditions. However, in describing the processes involved, some models are of too complex a form to be practically useful. In this chapter, we approach the issue of mathematical modeling in percutaneous absorption from four perspectives: a) describe simple practical models, b) provide an overview of the more complex models, c) summarize some of the more important/useful models used to date, and d) examine some practical applications of the models. The range of processes involved in percutaneous absorption and considered in developing the mathematical models in this chapter is shown in Fig. 1. We initially address in vitro skin diffusion models and consider a) constant donor concentration and receptor conditions, b) the corresponding flux, donor, skin and receptor amount-time profiles for solutions, and c) amount- and flux-time profiles when the donor phase is removed. More complex issues, such as a finite-volume donor phase, a finite-volume receptor phase, the presence of an efflux rate constant at the membrane-receptor interphase, and two-layer diffusion, are then considered. We then look at specific models and issues concerned with a) release from topical products, b) use of compartmental models as alternatives to diffusion models, c) concentration-dependent absorption, d) modeling of skin metabolism, e) the role of solute-skin-vehicle interactions, f) effects of vehicle loss, g) shunt transport, and h) in vivo diffusion, compartmental, physiological, and deconvolution models. We conclude by examining topics such as a) deep tissue penetration, b) pharmacodynamics, c) iontophoresis, d) sonophoresis, and e) pitfalls in modeling.
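For the simplest in vitro case mentioned above (constant donor concentration and sink receptor conditions), the cumulative amount permeated per unit area through a homogeneous membrane follows the classical series solution sketched below, with steady-state flux KDC/h and lag time h^2/(6D); this is the textbook solution rather than any specific model from the chapter, and the parameter values are illustrative only.

```python
import numpy as np

def cumulative_amount(t, D, h, K, C, n_terms=50):
    """Cumulative amount permeated per unit area through a membrane.

    Classical constant-donor, sink-receptor series solution:
    Q(t) = K*h*C * [D*t/h**2 - 1/6
                    - (2/pi**2) * sum_n (-1)**n / n**2 * exp(-D*n**2*pi**2*t / h**2)]
    """
    tau = D * np.asarray(t, dtype=float) / h**2
    n = np.arange(1, n_terms + 1)
    series = np.sum(((-1.0) ** n / n**2)[None, :] *
                    np.exp(-(n**2)[None, :] * np.pi**2 * tau[:, None]), axis=1)
    return K * h * C * (tau - 1.0 / 6.0 - (2.0 / np.pi**2) * series)

# Illustrative parameters only (not taken from the chapter)
D, h, K, C = 1e-9, 1e-3, 1.0, 10.0        # cm^2/s, cm, (dimensionless), mg/cm^3
t = np.linspace(0, 24 * 3600, 5)          # 0 to 24 h, in seconds
print(cumulative_amount(t, D, h, K, C))   # amount-time profile, mg/cm^2

# Steady-state flux and lag time for comparison
print("J_ss =", K * D * C / h, "mg/cm^2/s, lag time =", h**2 / (6 * D) / 3600, "h")
```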

Relevance: 100.00%

Abstract:

This paper offers a defense of backwards-in-time causation models in quantum mechanics. Particular attention is given to Cramer's transactional account, which is shown to have the threefold virtue of solving the Bell problem, explaining the complex-conjugate aspect of the quantum mechanical formalism, and explaining various quantum mysteries such as Schrödinger's cat. The question is therefore asked: why has this model not received more attention from physicists and philosophers? One objection raised by physicists in assessing Cramer's theory is that it is not testable. This paper seeks to answer this concern by arguing that backwards causation models entail a fork theory of causal direction; from the backwards causation model together with the fork theory, one can deduce empirical predictions. Finally, the objection that this strategy is questionable because of its appeal to philosophy is deflected.

Relevance: 100.00%

Abstract:

An evaluation of the performance of the APACHE III (Acute Physiology and Chronic Health Evaluation) ICU (intensive care unit) and hospital mortality models at the Princess Alexandra Hospital, Brisbane, is reported. Demographic, diagnostic, physiological, laboratory, admission and discharge data were collected prospectively for 5681 consecutive eligible admissions (1 January 1995 to 1 January 2000) at the Princess Alexandra Hospital, a metropolitan Australian tertiary referral medical/surgical adult ICU. ROC (receiver operating characteristic) curve areas for the APACHE III ICU mortality and hospital mortality models demonstrated excellent discrimination. Observed ICU mortality (9.1%) was significantly overestimated by the APACHE III model adjusted for hospital characteristics (10.1%), but did not differ significantly from the prediction of the generic APACHE III model (8.6%). In contrast, observed hospital mortality (14.8%) agreed well with the prediction of the APACHE III model adjusted for hospital characteristics (14.6%), but was significantly underestimated by the unadjusted APACHE III model (13.2%). Calibration curves and goodness-of-fit analysis using Hosmer-Lemeshow statistics demonstrated good calibration for the unadjusted APACHE III ICU mortality model and for the APACHE III hospital mortality model adjusted for hospital characteristics. Post hoc analysis revealed a declining annual SMR (standardized mortality ratio) during the study period. This trend was present in each of the non-surgical, emergency and elective surgical diagnostic groups, and the change was temporally related to increased specialist staffing levels. This study demonstrates that the APACHE III model performs well on independent assessment in an Australian hospital. The changes observed in annual SMR using such a validated model support a hypothesis of improved survival outcomes over 1995-1999.
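The two headline measures, ROC discrimination and the SMR, can be sketched as follows on synthetic predictions; the APACHE III equations themselves are not reproduced here, and the risk scores below are invented purely for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Synthetic admissions: predicted hospital mortality risk and observed outcome
n = 5681
predicted_risk = rng.beta(2, 12, size=n)                 # mean predicted risk ~ 14%
observed_death = rng.random(n) < predicted_risk * 0.95   # slightly fewer deaths than predicted

# Discrimination: area under the ROC curve
auc = roc_auc_score(observed_death, predicted_risk)

# Calibration-in-the-large: SMR = observed deaths / deaths predicted by the model
smr = observed_death.sum() / predicted_risk.sum()

print(f"ROC AUC = {auc:.3f}, SMR = {smr:.2f}")
```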

Relevance: 100.00%

Abstract:

A growing number of corporate failure prediction models has emerged since the 1960s. The economic and social consequences of business failure can be dramatic, so it is no surprise that the issue has attracted growing interest in academic research as well as in business practice. The main purpose of this study is to compare the predictive ability of five models: three based on statistical techniques (Discriminant Analysis, Logit and Probit) and two based on Artificial Intelligence (Neural Networks and Rough Sets). The five models were applied to a dataset of 420 non-bankrupt firms and 125 bankrupt firms belonging to the textile and clothing industry, over the period 2003-09. Results show that all the models performed well, with an overall correct classification rate higher than 90% and a type II error always below 2%. The type I error increases as we move away from the year prior to failure. Our models contribute to the discussion of the causes of corporate financial distress. Moreover, they can be used to assist the decisions of creditors, investors and auditors. Additionally, this research can be of great value to devisers of national economic policies that aim to reduce industrial unemployment.
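As a rough sketch of one of the five approaches, the snippet below fits a logit model to synthetic financial ratios and reports the overall correct classification rate together with the type I error (a failed firm classified as healthy) and the type II error (a healthy firm classified as failed). The ratios are invented, only the 420/125 group sizes mirror the study, and the real comparison also covers discriminant analysis, probit, neural networks and rough sets.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Synthetic financial ratios for 420 healthy (0) and 125 failed (1) firms
n_ok, n_fail = 420, 125
X_ok = rng.normal([0.2, 1.5, 0.1], 0.2, size=(n_ok, 3))     # e.g. ROA, liquidity, leverage
X_fail = rng.normal([-0.1, 0.8, 0.4], 0.2, size=(n_fail, 3))
X = np.vstack([X_ok, X_fail])
y = np.array([0] * n_ok + [1] * n_fail)

model = LogisticRegression(max_iter=1000).fit(X, y)
pred = model.predict(X)

# Type I error: failed firm classified as healthy; type II: healthy firm classified as failed
type1 = np.mean(pred[y == 1] == 0)
type2 = np.mean(pred[y == 0] == 1)
overall = np.mean(pred == y)
print(f"overall accuracy {overall:.1%}, type I {type1:.1%}, type II {type2:.1%}")
```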