151 results for Advanced Transaction Models


Relevance:

40.00%

Publisher:

Abstract:

Radioactive soil-contamination mapping and risk assessment are vital issues for decision makers. Traditional approaches to mapping the spatial concentration of radionuclides employ various regression-based models, which usually provide a single-value prediction accompanied (in some cases) by an estimation error. Such approaches do not provide the capability for rigorous uncertainty quantification or probabilistic mapping. Machine learning is a recent and fast-developing approach based on learning patterns and information from data. Artificial neural networks for prediction mapping have been especially powerful in combination with spatial statistics. A data-driven approach provides the opportunity to integrate additional relevant information about spatial phenomena into a prediction model for more accurate spatial estimates and associated uncertainty. Machine-learning algorithms can also be used for a wider spectrum of problems than before: classification, probability density estimation, and so forth. Stochastic simulations are used to model spatial variability and uncertainty. Unlike regression models, they provide multiple realizations of a particular spatial pattern, which allow uncertainty and risk quantification. This paper reviews the most recent methods of spatial data analysis, prediction, and risk mapping based on machine learning and stochastic simulations, in comparison with more traditional regression models. The radioactive fallout from the Chernobyl Nuclear Power Plant accident is used to illustrate the application of the models to prediction and classification problems. This fallout is a unique case study that poses the challenging task of analyzing huge amounts of data ('hard' direct measurements, as well as supplementary information and expert estimates) and solving particular decision-oriented problems.
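
The abstract's central point, that multiple equally probable realizations (unlike a single regression surface) support probability-of-exceedance risk maps, can be illustrated in a few lines. The following is a minimal sketch under invented assumptions (a unit grid, an exponential covariance with range 0.3, a threshold of 1.0); it is not the paper's actual geostatistical model:

```python
# Minimal sketch: multiple stochastic realizations of a contamination
# field, used to derive a probabilistic (risk) map. The grid, covariance
# model and threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Regular grid of prediction locations
n = 20
xs, ys = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
coords = np.column_stack([xs.ravel(), ys.ravel()])

# Exponential covariance model (assumed range and sill)
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
cov = 1.0 * np.exp(-d / 0.3) + 1e-8 * np.eye(len(coords))

# Draw many equally probable realizations of the spatial field
L = np.linalg.cholesky(cov)
n_real = 200
fields = L @ rng.standard_normal((len(coords), n_real))

# Risk map: per-cell probability of exceeding a regulatory threshold
threshold = 1.0
p_exceed = (fields > threshold).mean(axis=1).reshape(n, n)
print("max exceedance probability:", p_exceed.max())
```

A regression model would return one surface per cell; here each cell gets an empirical distribution across realizations, from which exceedance probabilities follow directly.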

Relevance:

40.00%

Publisher:

Abstract:

Risk theory has been a very active research area over the last few decades. The main objectives of the theory are to find adequate stochastic processes that can model the surplus of a (non-life) insurance company and to analyze related risk quantities such as the ruin time, the ruin probability, the expected discounted penalty function, and the expected discounted dividend/tax payments. The study of these ruin-related quantities provides crucial information for actuaries and decision makers. This thesis consists of the study of four different, essentially related insurance risk models. Ruin and related quantities are investigated using different techniques, resulting in explicit or asymptotic expressions for the ruin time, the ruin probability, the expected discounted penalty function, and the expected discounted tax payments.
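
As a toy illustration of the surplus process underlying such models, the sketch below estimates a finite-horizon ruin probability by Monte Carlo in the classical Cramér-Lundberg model; all parameter values are invented:

```python
# Minimal sketch: Monte Carlo estimate of the ruin probability in the
# classical Cramer-Lundberg surplus model U(t) = u + c*t - S(t), where
# S(t) is a compound Poisson claim process with exponential claim sizes.
# In this model ruin can only occur at claim instants, so it suffices
# to check the surplus whenever a claim arrives.
import numpy as np

rng = np.random.default_rng(1)

u, c = 10.0, 1.2            # initial surplus and premium rate
lam, mean_claim = 1.0, 1.0  # Poisson claim rate, exponential claim mean
horizon, n_paths = 200.0, 10_000

ruined = 0
for _ in range(n_paths):
    t, claims = 0.0, 0.0
    while True:
        t += rng.exponential(1.0 / lam)        # next claim arrival
        if t >= horizon:
            break
        claims += rng.exponential(mean_claim)  # claim size
        if u + c * t - claims < 0:             # surplus below zero: ruin
            ruined += 1
            break

print("estimated finite-horizon ruin probability:", ruined / n_paths)
```

For exponential claims a classical closed form exists for the infinite-horizon ruin probability, ψ(u) = (λμ/c)·exp(−(c − λμ)u/(cμ)), which provides a useful check on the simulation for long horizons.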

Relevance:

30.00%

Publisher:

Abstract:

Combustion-derived and manufactured nanoparticles (NPs) are known to provoke oxidative stress and inflammatory responses in human lung cells; therefore, they play an important role in the development of adverse health effects. As the lungs are composed of more than 40 different cell types, it is of particular interest to perform toxicological studies with co-culture systems, rather than with monocultures of only one cell type, to gain a better understanding of complex cellular reactions upon exposure to toxic substances. Monocultures of A549 human epithelial lung cells, human monocyte-derived macrophages, and monocyte-derived dendritic cells (MDDCs), as well as triple cell co-cultures consisting of all three cell types, were exposed to combustion-derived NPs (diesel exhaust particles) and to manufactured NPs (titanium dioxide and single-walled carbon nanotubes). The penetration of particles into cells was analysed by transmission electron microscopy. The amount of intracellular reactive oxygen species (ROS), the total antioxidant capacity (TAC), and the production of tumour necrosis factor (TNF)-α and interleukin (IL)-8 were quantified. The results of the monocultures were summed with an adjustment for the number of each single cell type in the triple cell co-culture. All three particle types were found in all cell and culture types. The production of ROS was induced by all particle types in all cell cultures except monocultures of MDDCs. The TAC and the (pro-)inflammatory reactions were not statistically significantly increased by particle exposure in any of the cell cultures. Interestingly, in the triple cell co-cultures, the TAC and IL-8 concentrations were lower, and the TNF-α concentrations higher, than the expected values calculated from the monocultures. The interplay of different lung cell types seems to substantially modulate the oxidative stress and inflammatory responses after NP exposure.
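
The adjustment mentioned above, summing monoculture results in proportion to each cell type's share of the co-culture, amounts to a weighted mean; a minimal sketch with placeholder numbers, not data from the study:

```python
# Hypothetical monoculture responses (e.g. IL-8, pg/ml) and assumed
# shares of each cell type in the triple co-culture.
epithelial, macrophage, dendritic = 120.0, 80.0, 60.0
shares = {"epithelial": 0.7, "macrophage": 0.2, "dendritic": 0.1}

expected = (shares["epithelial"] * epithelial
            + shares["macrophage"] * macrophage
            + shares["dendritic"] * dendritic)
measured = 85.0  # hypothetical co-culture measurement

# A measured value well below or above 'expected' suggests that
# cell-cell interplay modulates the response, as reported for TAC,
# IL-8 and TNF-α.
print(f"expected {expected:.1f} vs measured {measured:.1f}")
```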

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: VeriStrat® is a serum proteomic test used to determine whether patients with advanced non-small cell lung cancer (NSCLC) who have already received chemotherapy are likely to have good or poor outcomes from treatment with gefitinib or erlotinib. The main objective of our retrospective study was to evaluate the role of VeriStrat as a marker of overall survival (OS) in patients treated with first-line erlotinib and bevacizumab. PATIENTS AND METHODS: Patients were pooled from two phase II trials (SAKK19/05 and NTR528). For survival analyses, a log-rank test was used to determine whether there was a statistically significant difference between groups. The hazard ratio (HR) of any separation was assessed using Cox proportional hazards models. RESULTS: 117 patients were analyzed. VeriStrat classified patients into two groups with a statistically significant difference in OS duration (p=0.0027, HR=0.480, 95% confidence interval: 0.294-0.784). CONCLUSION: VeriStrat has a prognostic role in patients with advanced, nonsquamous NSCLC treated with first-line erlotinib and bevacizumab. Further work is needed to study the predictive role of VeriStrat for erlotinib and bevacizumab in chemotherapy-untreated patients.
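
A minimal sketch of the two analyses named above (log-rank test between the two VeriStrat groups, Cox model for the hazard ratio), using the lifelines package on a hypothetical data frame; the column names and values are invented, not the study's data:

```python
# Minimal survival-analysis sketch with the 'lifelines' package.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical columns: os_months (follow-up), death (event indicator),
# vs_good (1 = VeriStrat 'good', 0 = VeriStrat 'poor').
df = pd.DataFrame({
    "os_months": [14.2, 6.1, 22.5, 9.8, 18.0, 4.3, 7.5, 12.0],
    "death":     [1, 1, 0, 1, 0, 1, 1, 0],
    "vs_good":   [1, 0, 1, 0, 1, 0, 1, 0],
})

good, poor = df[df.vs_good == 1], df[df.vs_good == 0]
lr = logrank_test(good.os_months, poor.os_months,
                  event_observed_A=good.death, event_observed_B=poor.death)
print("log-rank p-value:", lr.p_value)

# Cox proportional hazards model: exp(coef) is the hazard ratio.
cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="death")
print(cph.summary[["exp(coef)", "p"]])
```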

Relevance:

30.00%

Publisher:

Abstract:

Sleep disorders are highly prevalent and represent an emerging worldwide epidemic. However, research into their molecular genetics remains, surprisingly, one of the least active fields. Nevertheless, rapid progress is being made in several prototypical disorders, leading recently to the identification of the molecular pathways underlying narcolepsy and familial advanced sleep-phase syndrome. Since the first reports of spontaneous and induced loss-of-function mutations leading to hypocretin deficiency in human and animal models of narcolepsy, the role of this novel neurotransmission pathway in sleep and several other behaviors has attracted extensive interest. Very recent studies using an animal model of familial advanced sleep-phase syndrome have also shed new light on the regulation of circadian rhythms.

Relevance:

30.00%

Publisher:

Abstract:

This study assesses the impact of social relationships on the maintenance of independence over periods of 12-18 months in a group of 306 octogenarians. It is based on results from Swilsoo (the Swiss Interdisciplinary Longitudinal Study on the Oldest Old). Participants (80-84 years old at baseline) were interviewed five times between 1994 and 1999. Independence was defined as the capacity to perform eight activities of daily living without assistance. Our analyses distinguished kinship from friendship networks and evaluated social relationships with a series of variables serving as indicators of network composition and contact frequency. Logistic regression models were used to identify the short-term effects of social relationships on independence, after controlling for sociodemographic and health-related variables; independence at a given wave of interviews was interpreted in the light of social factors measured at the previous wave. Our analyses indicate that the existence of a close friend has a significant impact on the maintenance of independence (OR=1.58, p<0.05), which is not the case for the other variables concerning network composition. Kinship contacts were also observed to have a positive impact on independence (OR=1.12, p<0.01).
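
A minimal sketch of the lagged logistic model described above, on simulated data whose coefficients are chosen to mirror the reported odds ratios (ln 1.58 ≈ 0.46, ln 1.12 ≈ 0.11); the variable names and data are hypothetical:

```python
# Lagged logistic regression sketch: independence at wave t regressed
# on social-network indicators measured at wave t-1.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "close_friend_prev": rng.integers(0, 2, n),  # close friend at t-1
    "kin_contacts_prev": rng.poisson(3, n),      # kin contacts at t-1
    "age":               rng.integers(80, 85, n),
})
# Simulated outcome: independence at wave t
logit = -1.0 + 0.46 * df.close_friend_prev + 0.11 * df.kin_contacts_prev
df["independent"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["close_friend_prev", "kin_contacts_prev", "age"]])
fit = sm.Logit(df["independent"], X).fit(disp=0)
print(np.exp(fit.params))  # odds ratios, cf. reported OR = 1.58 and 1.12
```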

Relevance:

30.00%

Publisher:

Abstract:

The technical developments of recent years have brought the quantity and quality of digital information to an unprecedented level, as enormous archives of satellite images are available to users. However, even if these advances open more and more possibilities in the use of digital imagery, they also raise several problems of storage and processing. The latter is considered in this thesis: the processing of very high spatial and spectral resolution images is treated with approaches based on data-driven algorithms relying on kernel methods. In particular, the problem of image classification, i.e. the categorization of the image's pixels into a reduced number of classes reflecting spectral and contextual properties, is studied through the different models presented. The accent is put on algorithmic efficiency and the simplicity of the proposed approaches, to avoid overly complex models that would not be adopted by users. The major challenge of the thesis is to remain close to concrete remote sensing problems without losing the methodological interest from the machine learning viewpoint: in this sense, this work aims at building a bridge between the machine learning and remote sensing communities, and all the models proposed have been developed keeping in mind the need for such a synergy. Four models are proposed: first, an adaptive model that learns the relevant image features is proposed to solve the problem of high dimensionality and collinearity of the image features. This model automatically provides an accurate classifier and a ranking of the relevance of the single features. The scarcity and unreliability of labeled information are the common root of the second and third models: when confronted with such problems, the user can either construct the labeled set iteratively by direct interaction with the machine or use the unlabeled data to increase the robustness and quality of the data description. Both solutions have been explored, resulting in two methodological contributions based respectively on active learning and semi-supervised learning. Finally, the more theoretical issue of structured outputs is considered in the last model, which, by integrating output similarities into the model, opens new challenges and opportunities for remote sensing image processing.
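
Of the four models, the active-learning contribution lends itself to a compact illustration. The sketch below implements generic margin-based uncertainty sampling with an SVM on synthetic data; it illustrates the general idea of user-machine interaction, not the thesis' specific algorithms:

```python
# Active-learning sketch: an SVM is retrained while an oracle (standing
# in for the user) labels the samples the current model is least
# certain about, i.e. those closest to the decision boundary.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=3)

# Tiny initial labeled set containing both classes
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(X)) if i not in labeled]

for _ in range(5):                            # 5 interaction rounds
    clf = SVC(kernel="rbf", gamma="scale").fit(X[labeled], y[labeled])
    # Uncertainty sampling: query the pool sample closest to the margin
    margins = np.abs(clf.decision_function(X[pool]))
    query = pool.pop(int(np.argmin(margins)))
    labeled.append(query)                     # oracle reveals the label

clf = SVC(kernel="rbf", gamma="scale").fit(X[labeled], y[labeled])
print("pool accuracy after 5 queries:", clf.score(X[pool], y[pool]))
```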

Relevance:

30.00%

Publisher:

Abstract:

Occupational exposure modeling is widely used in the context of the E.U. regulation on the registration, evaluation, authorization, and restriction of chemicals (REACH). First-tier tools, such as the European Centre for Ecotoxicology and Toxicology of Chemicals (ECETOC) targeted risk assessment (TRA) or Stoffenmanager, are used to screen a wide range of substances. Those of concern are investigated further using second-tier tools, e.g., the Advanced REACH Tool (ART). Local sensitivity analysis (SA) methods are used here to determine dominant factors for three models commonly used within the REACH framework: ECETOC TRA v3, Stoffenmanager 4.5, and ART 1.5. Based on the results of the SA, the robustness of the models is assessed. For ECETOC TRA, the process category (PROC) is the most important factor, and a failure to identify the correct PROC has severe consequences for the exposure estimate. Stoffenmanager is the most balanced model, and decision-making uncertainties in any one modifying factor are less severe in Stoffenmanager. ART requires a careful evaluation of the decisions in the source compartment, since it constitutes ∼75% of the total exposure range, corresponding to 20-22 orders of magnitude in the exposure estimate. Our results indicate a trade-off between the accuracy and the precision of the models. Previous studies suggested that ART may lead to more accurate results in well-documented exposure situations. However, the choice of the adequate model should ultimately be determined by the quality of the available exposure data: if the practitioner is uncertain about two or more decisions in the entry parameters, Stoffenmanager may be more robust than ART.
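
A minimal sketch of a local one-at-a-time sensitivity analysis on a generic multiplicative exposure model (tools of this family combine multiplying factors, so a single wrong decision shifts the estimate by a whole factor); the factor names and ranges below are invented, not the actual ECETOC TRA, Stoffenmanager, or ART inputs:

```python
# One-at-a-time local sensitivity analysis on a hypothetical
# multiplicative exposure model: vary each factor over its range while
# holding the others at baseline, and report the resulting spread.
import numpy as np

# Hypothetical multipliers: (baseline, low, high) for each model factor
factors = {
    "substance_emission": (1.0, 0.3, 3.0),
    "local_controls":     (0.3, 0.1, 1.0),
    "duration":           (1.0, 0.5, 1.0),
    "ventilation":        (0.7, 0.3, 1.0),
}

def exposure(values):
    out = 1.0
    for v in values.values():
        out *= v
    return out

baseline = {k: v[0] for k, v in factors.items()}
for name, (base, lo, hi) in factors.items():
    span = [exposure(dict(baseline, **{name: alt})) for alt in (lo, hi)]
    # Sensitivity: decades of exposure covered by this single factor
    decades = np.log10(max(span) / min(span))
    print(f"{name:20s} spans {decades:.2f} orders of magnitude")
```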

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Clinical guidelines are essential in implementing and maintaining nationwide stage-specific diagnostic and therapeutic standards. In 2011, the first German expert consensus guideline defined the evidence for diagnosis and treatment of early and locally advanced esophagogastric cancers. Here, we compare this guideline with other national guidelines as well as the current literature. METHODS: The German S3 guideline used an approved development process with de novo literature research, international guideline adaptation, or good clinical practice. Other recent evidence-based national guidelines and current references were compared with the German recommendations. RESULTS: In the German S3 and other Western guidelines, adenocarcinomas of the esophagogastric junction (AEG) are classified according to the formerly defined AEG I-III subgroups because of their high surgical impact. To stage local disease, computed tomography of the chest and abdomen and endosonography are reinforced; in contrast, laparoscopy is optional for staging. Mucosal cancers (T1a) should be resected endoscopically en bloc to allow complete histological evaluation of the lateral and basal margins. For locally advanced cancers of the stomach or esophagogastric junction (≥T3N+), the preferred treatment is preoperative and postoperative chemotherapy. Preoperative radiochemotherapy is an evidence-based alternative for large AEG type I-II tumors (≥T3N+). Additionally, some experts recommend treating T2 tumors with a similar approach, mainly because pretherapeutic staging is often considered unreliable. CONCLUSIONS: The German S3 guideline represents an up-to-date European position with regard to diagnosis, staging, and treatment recommendations for patients with locally advanced esophagogastric cancer. The effects of perioperative chemotherapy versus chemoradiotherapy remain to be investigated for adenocarcinoma of the cardia and the lower esophagus.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Biliary tract cancer is an uncommon cancer with a poor outcome. We assembled data from the National Cancer Research Institute (UK) ABC-02 study and 10 international studies to determine prognostic outcome characteristics for patients with advanced disease. METHODS: Multivariable analyses of the final dataset from the ABC-02 study were carried out. All variables were simultaneously included in a Cox proportional hazards model, and backward elimination was used to produce the final model (using a significance level of 10%), in which the selected variables were independently associated with outcome. This score was validated externally by receiver operating characteristic (ROC) curve analysis using the independent international dataset. RESULTS: A total of 410 patients were included from the ABC-02 study and 753 from the international dataset. An overall survival (OS) and progression-free survival (PFS) Cox model was derived from the ABC-02 study. White blood cells, haemoglobin, disease status, bilirubin, neutrophils, gender, and performance status were considered prognostic for survival (all with P < 0.10). Patients with metastatic disease [hazard ratio (HR) 1.56; 95% confidence interval (CI) 1.20-2.02] and those with Eastern Cooperative Oncology Group performance status (ECOG PS) 2 [HR 2.24; 95% CI 1.53-3.28] had worse survival. In a dataset restricted to patients who received cisplatin and gemcitabine with ECOG PS 0 and 1, only haemoglobin, disease status, bilirubin, and neutrophils were associated with PFS and OS. ROC analysis suggested the models generated from the ABC-02 study had limited prognostic value [6-month PFS: area under the curve (AUC) 62% (95% CI 57-68); 1-year OS: AUC 64% (95% CI 58-69)]. CONCLUSION: These data propose a set of prognostic criteria for outcome in advanced biliary tract cancer, derived from the ABC-02 study and validated in an international dataset. Although these findings establish a benchmark for the prognostic evaluation of patients with advanced biliary cancer and confirm the value of long-held clinical observations, the ability of the model to correctly predict prognosis is limited and needs to be improved through the identification of additional clinical and molecular markers.
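
A minimal sketch of the model-building procedure described above, backward elimination at the 10% level on a Cox proportional hazards model, using the lifelines package and its bundled Rossi dataset as a stand-in for the ABC-02 variables:

```python
# Backward elimination on a Cox model: repeatedly drop the least
# significant covariate until all remaining p-values are <= 0.10.
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()  # stand-in dataset shipped with lifelines
covariates = [c for c in df.columns if c not in ("week", "arrest")]

while covariates:
    cph = CoxPHFitter().fit(df[covariates + ["week", "arrest"]],
                            duration_col="week", event_col="arrest")
    worst = cph.summary["p"].idxmax()         # least significant covariate
    if cph.summary.loc[worst, "p"] <= 0.10:   # all remaining are kept
        break
    covariates.remove(worst)                  # drop it and refit

print("variables retained at the 10% level:", covariates)
```

External validation would then score the retained model on an independent dataset, e.g. via the AUC of predicted risk against observed 6-month or 1-year status, as the abstract describes.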

Relevance:

20.00%

Publisher:

Abstract:

Abiotic factors are considered strong drivers of species distribution and assemblages. Yet these spatial patterns are also influenced by biotic interactions. Accounting for competitors or facilitators may improve both the fit and the predictive power of species distribution models (SDMs). We investigated the influence of a dominant species, Empetrum nigrum ssp. hermaphroditum, on the distribution of 34 subordinate species in the tundra of northern Norway. We related SDM parameters of those subordinate species to their functional traits and their co-occurrence patterns with E. hermaphroditum across three spatial scales. By combining both approaches, we sought to understand whether these species may be limited by competitive interactions and/or benefit from habitat conditions created by the dominant species. The model fit and predictive power increased for most species when the frequency of occurrence of E. hermaphroditum was included in the SDMs as a predictor. The largest increase was found for species that 1) co-occur most of the time with E. hermaphroditum, both at large (i.e. 750 m) and small spatial scale (i.e. 2 m), or co-occur with E. hermaphroditum at large scale but not at small scale, and 2) have particularly low or high leaf dry matter content (LDMC). Species that do not co-occur with E. hermaphroditum at the smallest scale are generally palatable herbaceous species with low LDMC, thus showing a weak ability to tolerate resource depletion that is directly or indirectly induced by E. hermaphroditum. Species with high LDMC, showing a better aptitude to face resource depletion and grazing, are often found in the proximity of E. hermaphroditum. Our results are consistent with previous findings that both competition and facilitation structure plant distributions and assemblages in the Arctic tundra. The functional and co-occurrence approaches used were complementary and provided a deeper understanding of the observed patterns by refining the pool of potential direct and indirect ecological effects of E. hermaphroditum on the distribution of subordinate species. Our correlative study would benefit from being complemented by experimental approaches.
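
The core modelling move, adding the dominant species' frequency of occurrence as an SDM predictor and checking whether discrimination improves, can be sketched on simulated data; the covariate name empetrum_freq and all coefficients below are invented:

```python
# SDM sketch: a logistic model with and without a biotic covariate
# (frequency of a dominant species), compared by AUC on held-out data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 1000
abiotic = rng.normal(size=(n, 3))      # e.g. temperature, moisture, pH
empetrum_freq = rng.random(n)          # dominant-species frequency
logit = abiotic @ np.array([1.0, -0.5, 0.3]) - 1.5 * empetrum_freq
presence = rng.random(n) < 1 / (1 + np.exp(-logit))

for label, X in [("abiotic only", abiotic),
                 ("abiotic + dominant species",
                  np.column_stack([abiotic, empetrum_freq]))]:
    Xtr, Xte, ytr, yte = train_test_split(X, presence, random_state=0)
    model = LogisticRegression().fit(Xtr, ytr)
    auc = roc_auc_score(yte, model.predict_proba(Xte)[:, 1])
    print(f"{label}: AUC = {auc:.3f}")
```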

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: Even though a large proportion of physiotherapists worldwide work in the private sector, very little is known about the organizations within which they practice. Such knowledge is important to help understand contexts of practice and how they influence the quality of services and patient outcomes. The purpose of this study was to: 1) describe characteristics of organizations where physiotherapists practice in the private sector, and 2) explore the existence of a taxonomy of organizational models. METHODS: This was a cross-sectional quantitative survey of 236 randomly selected physiotherapists. Participants completed a purpose-designed questionnaire online or by telephone, covering organizational vision, resources, structures, and practices. Organizational characteristics were analyzed descriptively, while organizational models were identified by multiple correspondence analysis. RESULTS: Most organizations were for-profit (93.2%), located in urban areas (91.5%), and within buildings containing multiple businesses/organizations (76.7%). The majority included multiple providers (89.8%) from diverse professions, mainly physiotherapy assistants (68.7%), massage therapists (67.3%), and osteopaths (50.2%). Four organizational models were identified: 1) solo practice, 2) middle-scale multiprovider, 3) large-scale multiprovider, and 4) mixed. CONCLUSIONS: The results of this study provide a detailed description of the organizations where physiotherapists practice and highlight the importance of human resources in differentiating organizational models. Further research examining the influence of these organizational characteristics and models on outcomes such as physiotherapists' professional practices and patient outcomes is needed.
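
A minimal sketch of multiple correspondence analysis on a toy survey: categorical items are one-hot encoded and a correspondence analysis of the indicator matrix yields low-dimensional respondent coordinates, from which model types (four, in the study) can be read off or clustered. The items and answers below are hypothetical:

```python
# Minimal MCA sketch via correspondence analysis of an indicator matrix.
import numpy as np
import pandas as pd

survey = pd.DataFrame({
    "profit_status": ["for_profit", "for_profit", "non_profit", "for_profit"],
    "location":      ["urban", "urban", "rural", "urban"],
    "multiprovider": ["yes", "no", "yes", "yes"],
})

Z = pd.get_dummies(survey).to_numpy(float)          # indicator matrix
P = Z / Z.sum()
r, c = P.sum(axis=1), P.sum(axis=0)                 # row and column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

row_coords = (U * sv) / np.sqrt(r)[:, None]  # respondent coordinates
print(row_coords[:, :2])                     # first two MCA dimensions
```

In practice, the respondent coordinates would be clustered (or inspected against the category coordinates) to derive the organizational models.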

Relevance:

20.00%

Publisher:

Abstract:

Among the largest resources for biological sequence data are the expressed sequence tags (ESTs) available in public and proprietary databases. ESTs provide information on transcripts, but for technical reasons they often contain sequencing errors; such errors must therefore be taken into account when analyzing EST sequences computationally. Earlier attempts to model error-prone coding regions have shown good performance in detecting and predicting these regions while correcting sequencing errors using codon usage frequencies. In the research presented here, we improve the detection of translation start and stop sites by integrating a more complex mRNA model with codon-usage-bias-based error correction into one hidden Markov model (HMM), thus generalizing this error-correction approach to more complex HMMs. We show that our method maintains the performance in detecting coding sequences.
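
A minimal sketch of the HMM machinery involved: Viterbi decoding over a toy two-state ('coding'/'non-coding') model whose emission probabilities stand in for codon-usage bias. The paper's actual model additionally has start/stop-site states and error-correcting transitions; all probabilities below are invented for illustration:

```python
# Toy HMM with Viterbi decoding: label each base of a sequence as
# 'coding' or 'noncoding' based on a GC-biased emission model.
import numpy as np

states = ["noncoding", "coding"]
symbols = {"A": 0, "C": 1, "G": 2, "T": 3}

log_start = np.log([0.5, 0.5])
log_trans = np.log([[0.9, 0.1],     # noncoding -> noncoding / coding
                    [0.1, 0.9]])    # coding    -> noncoding / coding
log_emit = np.log([[0.25, 0.25, 0.25, 0.25],   # uniform in noncoding
                   [0.15, 0.35, 0.35, 0.15]])  # GC-biased in coding

def viterbi(seq):
    obs = [symbols[ch] for ch in seq]
    v = log_start + log_emit[:, obs[0]]
    back = []
    for o in obs[1:]:
        scores = v[:, None] + log_trans     # best predecessor per state
        back.append(scores.argmax(axis=0))
        v = scores.max(axis=0) + log_emit[:, o]
    path = [int(v.argmax())]
    for bp in reversed(back):               # trace back the best path
        path.append(int(bp[path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi("ATATATGCGCGCGCATAT"))
```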