948 results for semi-empirical methods
Abstract:
A semisupervised support vector machine is presented for the classification of remote sensing images. The method exploits the wealth of unlabeled samples for regularizing the training kernel representation locally by means of cluster kernels. The method learns a suitable kernel directly from the image and thus avoids assuming a priori signal relations by using a predefined kernel structure. Good results are obtained in image classification examples when few labeled samples are available. The method scales almost linearly with the number of unlabeled samples and provides out-of-sample predictions.
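To make the cluster-kernel idea concrete, here is a minimal sketch (Python with scikit-learn) of one common construction, a "bagged" cluster kernel that blends an RBF kernel with k-means co-clustering frequencies computed on labeled and unlabeled samples alike; the data, parameters and blending rule are illustrative assumptions, not necessarily the exact kernel of the paper.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import SVC

    def bagged_cluster_kernel(X, n_clusters=10, n_runs=20, gamma=0.5, alpha=0.5):
        # Fraction of k-means runs in which two samples share a cluster,
        # blended with an RBF kernel; unlabeled samples shape the clustering.
        n = X.shape[0]
        agree = np.zeros((n, n))
        for seed in range(n_runs):
            labels = KMeans(n_clusters=n_clusters, n_init=5,
                            random_state=seed).fit_predict(X)
            agree += labels[:, None] == labels[None, :]
        agree /= n_runs
        return alpha * rbf_kernel(X, gamma=gamma) + (1 - alpha) * agree

    rng = np.random.default_rng(0)
    X_all = rng.normal(size=(200, 5))    # 50 labeled + 150 unlabeled samples
    y = (X_all[:50, 0] > 0).astype(int)  # toy labels for the labeled rows
    K = bagged_cluster_kernel(X_all)     # kernel learned from all the data
    clf = SVC(kernel="precomputed").fit(K[:50, :50], y)
    pred = clf.predict(K[50:, :50])      # rows = new samples, cols = labeled ones

Because prediction only needs kernel values between new samples and the labeled training columns, the classifier extends naturally to samples unseen during training, in line with the out-of-sample property mentioned above.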
Abstract:
Fluvial deposits are a challenge for modelling flow in subsurface reservoirs. The connectivity and continuity of permeable bodies have a major impact on fluid flow in porous media. Contemporary object-based and multipoint statistics methods face the problem of robustly representing connected structures. An alternative approach to modelling petrophysical properties is based on a machine learning algorithm, Support Vector Regression (SVR). Semi-supervised SVR is able to establish spatial connectivity by taking into account prior knowledge of natural similarities. As a learning algorithm, SVR is robust to noise and captures dependencies from all available data. Semi-supervised SVR applied to a synthetic fluvial reservoir demonstrated robust results, which are well matched to the flow performance.
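As an illustration of the approach, a hedged Python sketch of semi-supervised SVR for spatial property modelling: well observations supply hard labels, while unlabeled grid locations enter as down-weighted pseudo-labels encoding prior knowledge (here a nearest-neighbour prior); the pseudo-labelling rule, weights and names are illustrative assumptions rather than the authors' exact scheme.

    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.svm import SVR

    rng = np.random.default_rng(1)
    X_wells = rng.uniform(0, 100, size=(30, 2))    # (x, y) of well picks
    perm = np.sin(X_wells[:, 0] / 15) + 0.1 * rng.normal(size=30)  # toy permeability

    X_grid = rng.uniform(0, 100, size=(500, 2))    # unlabeled grid nodes
    prior = KNeighborsRegressor(n_neighbors=3).fit(X_wells, perm)
    pseudo = prior.predict(X_grid)                 # prior-driven pseudo-labels

    X_aug = np.vstack([X_wells, X_grid])
    y_aug = np.concatenate([perm, pseudo])
    w = np.concatenate([np.ones(30), 0.2 * np.ones(500)])  # hard data dominate
    model = SVR(kernel="rbf", C=10.0, epsilon=0.05)
    model.fit(X_aug, y_aug, sample_weight=w)
    field = model.predict(X_grid)                  # continuous property map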
Abstract:
BACKGROUND: In low-mortality countries, life expectancy is increasing steadily. This increase can be disentangled into two separate components: the delayed incidence of death (i.e. the rectangularization of the survival curve) and the shift of maximal age at death to the right (i.e. the extension of longevity). METHODS: We studied the secular increase of life expectancy at age 50 in nine European countries between 1922 and 2006. The respective contributions of rectangularization and longevity to increasing life expectancy are quantified with a specific tool. RESULTS: For men, an acceleration of rectangularization was observed in the 1980s in all nine countries, whereas a deceleration occurred among women in six countries in the 1960s. These diverging trends are likely to reflect the gender-specific trends in smoking. As for longevity, the extension was steady from 1922 in both genders in almost all countries. The gain of years due to longevity extension exceeded the gain due to rectangularization. This predominance over rectangularization was still observed during the most recent decades. CONCLUSIONS: Disentangling life expectancy into components offers new insights into the underlying mechanisms and possible determinants. Rectangularization mainly reflects the secular changes of the known determinants of early mortality, including smoking. Explaining the increase of maximal age at death is a more complex challenge. It might be related to slow and lifelong changes in the socio-economic environment and lifestyles as well as population composition. The still increasing longevity does not suggest that we are approaching any upper limit of human longevity.
Abstract:
Gaseous N losses from soil are considerable, resulting mostly from ammonia volatilization linked to agricultural activities such as pasture fertilization. The use of simple and accessible methods for measuring such losses is fundamental to evaluating the N cycle in agricultural systems. The purpose of this study was to evaluate methods for quantifying NH3 volatilization from soil fertilized with surface-applied urea, with minimal influence on the volatilization process. The greenhouse experiment was arranged in a completely randomized design with 13 treatments and five replications, with the following treatments: (1) polyurethane foam (density 20 kg m-3) with phosphoric acid solution absorber (foam absorber), installed 1, 5, 10 and 20 cm above the soil surface; (2) paper filter with sulfuric acid solution absorber (paper absorber; 1, 5, 10 and 20 cm above the soil surface); (3) sulfuric acid solution absorber (1, 5 and 10 cm above the soil surface); (4) semi-open static collector; (5) 15N balance (control). The foam absorber placed 1 cm above the soil surface estimated the real daily rate of loss and the accumulated loss of NH3-N, and proved efficient in capturing NH3 volatilized from urea-treated soil. The estimates based on acid absorbers 1, 5 and 10 cm above the soil surface and on paper absorbers 1 and 5 cm above the soil surface were only realistic for accumulated NH3-N losses. Foam absorbers can therefore be recommended for quantifying accumulated and daily rates of NH3 volatilization losses, similarly to an open static chamber, making calibration equations or correction factors unnecessary.
Abstract:
Following recent technological advances, digital image archives have experienced unprecedented qualitative and quantitative growth. Despite the enormous possibilities they offer, these advances raise new questions about how to process the masses of acquired data. This question is at the heart of this Thesis: the problems of processing digital information at very high spatial and/or spectral resolution are addressed with statistical learning approaches, namely kernel methods. This Thesis studies image classification problems, that is, the categorization of pixels into a reduced number of classes reflecting the spectral and contextual properties of the objects they represent. The emphasis is placed on the efficiency of the algorithms, as well as on their simplicity, so as to increase their potential for adoption by users. Moreover, the challenge of this Thesis is to remain close to the concrete problems of satellite image users without losing sight of the interest of the proposed methods for the machine learning community from which they originate. In this sense, this work takes a transdisciplinary stance, maintaining a strong link between the two sciences in all the developments proposed. Four models are proposed: the first addresses the problem of high dimensionality and data redundancy with a model that optimizes classification performance while adapting to the particularities of the image. This is made possible by a ranking system for the variables (the bands) that is optimized jointly with the base model: in doing so, only the variables important for solving the problem are used by the classifier. The lack of labeled information, and uncertainty about its relevance to the problem, are at the root of the two following models, based respectively on active learning and semi-supervised methods: the former improves the quality of a training set through direct interaction between the user and the machine, while the latter uses unlabeled pixels to improve the description of the available data and the robustness of the model. Finally, the last model considers the more theoretical question of structure among the outputs: the integration of this source of information, never before considered in remote sensing, opens new research challenges. (Advanced kernel methods for remote sensing image classification, Devis Tuia, Institut de Géomatique et d'Analyse du Risque, September 2009.) The technical developments of recent years have brought the quantity and quality of digital information to an unprecedented level, as enormous archives of satellite images are available to users. However, even if these advances open more and more possibilities in the use of digital imagery, they also raise several problems of storage and processing. The latter is considered in this Thesis: the processing of very high spatial and spectral resolution images is treated with approaches based on data-driven algorithms relying on kernel methods. In particular, the problem of image classification, i.e. the categorization of the image's pixels into a reduced number of classes reflecting spectral and contextual properties, is studied through the different models presented.
The emphasis is placed on algorithmic efficiency and on the simplicity of the proposed approaches, to avoid overly complex models that practitioners would not use. The major challenge of the Thesis is to remain close to concrete remote sensing problems without losing the methodological interest from the machine learning viewpoint: in this sense, this work aims at building a bridge between the machine learning and remote sensing communities, and all the models proposed have been developed keeping in mind the need for such a synergy. Four models are proposed: first, an adaptive model learning the relevant image features is proposed to solve the problem of high dimensionality and collinearity of the image features. This model automatically provides an accurate classifier and a ranking of the relevance of the single features. The scarcity and unreliability of labeled information were the common root of the second and third models proposed: when confronted with such problems, the user can either construct the labeled set iteratively by direct interaction with the machine or use the unlabeled data to increase the robustness and quality of the data description. Both solutions have been explored, resulting in two methodological contributions, based respectively on active learning and semi-supervised learning. Finally, the more theoretical issue of structured outputs is considered in the last model, which, by integrating output similarity into the model, opens new challenges and opportunities for remote sensing image processing.
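As an illustration of the active learning contribution, a minimal Python sketch of margin-based uncertainty sampling with an SVM, in which the machine queries the samples closest to its decision boundary and the user labels them; the toy data and the simulated oracle are illustrative assumptions, not the thesis's exact algorithm.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(2)
    X = rng.normal(size=(1000, 4))
    y_true = (X[:, 0] + X[:, 1] > 0).astype(int)   # hidden ground truth ("oracle")

    # Seed set with both classes represented; the rest go to the unlabeled pool.
    labeled = list(np.where(y_true == 0)[0][:5]) + list(np.where(y_true == 1)[0][:5])
    pool = [i for i in range(1000) if i not in set(labeled)]

    for _ in range(20):                            # 20 user interactions
        clf = SVC(kernel="rbf", gamma="scale").fit(X[labeled], y_true[labeled])
        margin = np.abs(clf.decision_function(X[pool]))  # distance to boundary
        query = pool.pop(int(np.argmin(margin)))   # most ambiguous sample
        labeled.append(query)                      # the "user" provides its label

    clf = SVC(kernel="rbf", gamma="scale").fit(X[labeled], y_true[labeled])
    print(clf.score(X, y_true))                    # accuracy after 20 queries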
Abstract:
Preface. The starting point for this work, and eventually the subject of the whole thesis, was the question: how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, has made them the models of choice for many theoretical constructions and practical applications. At the same time, estimating the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem comes from the variance process, which is non-observable. There are several estimation methodologies that deal with the estimation of latent variables. One appeared particularly interesting: the Continuous Empirical Characteristic Function (ECF) estimator, based on the unconditional characteristic function, which, in contrast to the other methods, requires neither discretization nor simulation of the process. However, the procedure had been derived only for stochastic volatility models without jumps; thus, it became the subject of my research. This thesis consists of three parts. Each is written as an independent, self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function of the stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equation are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is: what jump process to use to model the returns of the S&P500. The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, a constant or some function of the state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either the exponential or the double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any ground for comparison, it is unreasonable to be sure that our parameter estimates and the true parameters of the models coincide.
The conclusion of the second chapter provides one more reason to perform that kind of test. Thus, the third part of this thesis concentrates on the estimation of the parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of finding the true parameters, and the third chapter proves that our estimator indeed has this ability. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question immediately arises: whether the computational effort can be reduced without affecting the efficiency of the estimator, or whether the efficiency of the estimator can be improved without dramatically increasing the computational burden. The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used in its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure. In practice, however, this relationship is not so straightforward, owing to increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated; thus, the computational effort can be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of estimators with bi- and three-dimensional unconditional characteristic functions on simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, due to the limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for the estimation of the parameters of stochastic volatility jump-diffusion models.
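For reference, a hedged sketch of what an estimator of this type minimizes: a weighted distance between the empirical and model joint unconditional characteristic functions. The notation below (weight function w, pairs of consecutive observations) is illustrative and not necessarily the thesis's exact formulation:

    \hat{\theta} = \arg\min_{\theta} \int_{\mathbb{R}^2}
        \bigl| \hat{\phi}_n(r) - \phi(r;\theta) \bigr|^2 \, w(r) \, dr,
    \qquad
    \hat{\phi}_n(r) = \frac{1}{n-1} \sum_{t=1}^{n-1}
        \exp\bigl( i ( r_1 x_t + r_2 x_{t+1} ) \bigr),

where \phi(r;\theta) is the model's closed-form joint unconditional characteristic function of the pair (x_t, x_{t+1}) and w(r) is an integrable weight function; a three-dimensional variant would use triples of consecutive observations.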
Abstract:
"Most quantitative empirical analyses are motivated by the desire to estimate the causal effect of an independent variable on a dependent variable. Although the randomized experiment is the most powerful design for this task, in most social science research done outside of psychology, experimental designs are infeasible. (Winship & Morgan, 1999, p. 659)." This quote from earlier work by Winship and Morgan, which was instrumental in setting the groundwork for their book, captures the essence of our review of Morgan and Winship's book: It is about causality in nonexperimental settings.
Abstract:
We show how nonlinear embedding algorithms, popular in shallow semi-supervised learning techniques such as kernel methods, can be applied to deep multilayer architectures, either as a regularizer at the output layer or on each layer of the architecture. This provides a simple alternative to existing approaches to deep learning while yielding error rates competitive with those methods and with existing shallow semi-supervised techniques.
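A hedged sketch of the idea in Python (PyTorch), assuming a margin-based embedding loss applied at the output layer of a small network: neighbouring unlabeled pairs are pulled together and non-neighbours pushed beyond a margin, alongside the usual supervised loss; the toy data, neighbour indicator and loss weighting are illustrative assumptions.

    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    ce = nn.CrossEntropyLoss()
    margin = 1.0

    x_lab = torch.randn(32, 20)                    # labeled batch
    y_lab = torch.randint(0, 10, (32,))
    x_i, x_j = torch.randn(64, 20), torch.randn(64, 20)  # unlabeled pairs
    is_nbr = torch.randint(0, 2, (64,)).float()    # 1 if the pair are neighbours

    for _ in range(100):
        opt.zero_grad()
        sup = ce(net(x_lab), y_lab)                      # supervised term
        d = (net(x_i) - net(x_j)).pow(2).sum(dim=1)      # squared output distance
        # pull neighbours together, push non-neighbours beyond the margin
        emb = (is_nbr * d
               + (1 - is_nbr) * torch.clamp(margin - d.sqrt(), min=0).pow(2)).mean()
        (sup + 0.1 * emb).backward()
        opt.step()

The same regularizer can be attached to any hidden layer instead of (or in addition to) the output layer, which is the second variant described in the abstract.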
Abstract:
In recent grammars and dictionaries, also ('therefore, so, well') continues to be presented primarily as an adverb with a conclusive-consecutive connective function that essentially corresponds to its use in formal written German. Its function as a modal particle has been documented, however, since the beginnings of what is known as Partikelforschung, though not all of its uses have been systematically investigated in contrasts of spoken and written German, whether in medium or in conception. In this article we analyse the uses of also in semi-informal oral interactions on the basis of empirical data (from a subsample of the VARCOM corpus). Specifically, we analyse the presence and frequency of also at the beginning of a sentence or sequence, the functions it serves as a logical-semantic connector or as a discourse and interaction marker, and the interrelations between these functions, in order to contrast the results with the description of also provided by current reference works.
Abstract:
Five years after the 2005 Pakistan earthquake that triggered multiple mass movements, landslides continue to pose a threat to the population of Azad Kashmir, especially during heavy monsoon rains. The thousands of landslides triggered by the magnitude 7.6 earthquake in 2005 were not simply a natural phenomenon but were largely induced by human activities, namely road building, grazing, and deforestation. The damage caused by the landslides in the study area (381 km²) is estimated at US$3.6 million, 3.6 times Azad Kashmir's annual public works budget for 2005 of US$1 million. In addition to human suffering, this cost constitutes a significant economic setback for the region that could have been reduced through improved land use and risk management. This article describes interdisciplinary research conducted 18 months after the earthquake to provide a more systemic approach to understanding the risks posed by landslides, including their physical, environmental, and human contexts. The goal of this research is twofold: first, to present empirical data on the social, geological, and environmental contexts in which widespread landslides occurred following the 2005 earthquake; and second, to describe straightforward methods that can be used for integrated landslide risk assessments in data-poor environments. The article analyzes the limitations of the methodologies and the challenges of conducting interdisciplinary research that integrates both social and physical data. This research concludes that reducing landslide risk is ultimately a management issue, rooted in land use decisions and governance.
Abstract:
Background: This trial was conducted to evaluate the safety and immunogenicity of two virosome-formulated malaria peptidomimetics derived from Plasmodium falciparum AMA-1 and CSP in malaria semi-immune adults and children. Methods: The design was a prospective, randomized, double-blind, controlled, age-de-escalating study with two immunizations. Ten adults and 40 children (aged 5-9 years) living in a malaria-endemic area were immunized with PEV3B or the virosomal influenza vaccine Inflexal® V on days 0 and 90. Results: No serious or severe adverse events (AEs) related to the vaccines were observed. The only local solicited AE reported was pain at the injection site, which affected more children in the Inflexal® V group than in the PEV3B group (p = 0.014). In the PEV3B group, IgG ELISA endpoint titers specific for the AMA-1 and CSP peptide antigens were significantly higher at most time points than in the Inflexal® V control group. Across all time points after the first immunization, the average ratio of endpoint titers to baseline values in PEV3B subjects ranged from 4 to 15 in adults and from 4 to 66 in children. As an exploratory outcome, we found that the incidence rate of clinical malaria episodes among child vaccinees was half that of the control children between study days 30 and 365 (0.0035 episodes per day at risk for PEV3B vs. 0.0069 for Inflexal® V; RR = 0.50 [95% CI: 0.29-0.88], p = 0.02). Conclusion: These findings provide a strong basis for the further development of multivalent virosomal malaria peptide vaccines.
Abstract:
Automatic environmental monitoring networks, supported by wireless communication technologies, nowadays provide large and ever-increasing volumes of data. The use of this information in natural hazard research is an important issue. Particularly useful for risk assessment and decision making are spatial maps of hazard-related parameters produced from point observations and available auxiliary information. The purpose of this article is to present and explore appropriate tools for processing large amounts of available data and producing predictions at fine spatial scales. These are the algorithms of machine learning, which are aimed at robust non-parametric modelling of non-linear dependencies from empirical data. The computational efficiency of these data-driven methods allows prediction maps to be produced in real time, which makes them superior to physical models for operational use in risk assessment and mitigation. This situation is encountered particularly in the spatial prediction of climatic variables (topo-climatic mapping). In the complex topographies of mountainous regions, meteorological processes are strongly influenced by the relief. The article shows how these relations, possibly regionalized and non-linear, can be modelled from data using information from digital elevation models. The particular illustration of the developed methodology concerns the mapping of temperatures (including situations of Föhn and temperature inversion) given measurements taken from the Swiss meteorological monitoring network. The range of methods used in the study includes data-driven feature selection, support vector algorithms and artificial neural networks.
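To illustrate, a minimal hedged Python sketch of topo-climatic mapping with a kernel method: temperature is learned as a non-linear function of DEM-derived features at the stations and then predicted over the whole grid; the feature set, the synthetic lapse-rate data and the choice of support vector regression are illustrative assumptions.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(3)
    # Station table: x, y, elevation, slope, curvature -> measured temperature
    X_sta = rng.uniform(size=(110, 5)) * [100, 100, 3000, 45, 2]
    t_sta = 15 - 0.0065 * X_sta[:, 2] + rng.normal(0, 0.5, 110)  # lapse rate + noise

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100, epsilon=0.2))
    model.fit(X_sta, t_sta)

    X_grid = rng.uniform(size=(5000, 5)) * [100, 100, 3000, 45, 2]
    t_map = model.predict(X_grid)    # temperature surface on the DEM grid

On a Föhn or inversion day the simple linear lapse-rate trend breaks down; a kernel model of this kind can capture such regionalized, non-linear relief effects where a global trend cannot.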
Abstract:
In my thesis I present the findings of a multiple-case study on the CSR approach of three multinational companies, applying Basu and Palazzo's (2008) CSR-character as a process model of sensemaking, Suchman's (1995) framework on legitimation strategies, and Habermas's (1996) concept of deliberative democracy. The theoretical framework is based on the assumption of a postnational constellation (Habermas, 2001), which sends multinational companies on a process of sensemaking (Weick, 1995) with regard to their responsibilities in a globalizing world. The major reason is that mainstream CSR concepts are based on the assumption of a liberal market economy embedded in a nation state, which does not fit the changing conditions for the legitimation of corporate behavior in a globalizing world. For the purpose of this study, I primarily looked at two research questions: (i) How can the CSR approach of a multinational corporation be systematized empirically? (ii) What is the impact of the changing conditions of the postnational constellation on the CSR approach of the studied multinational corporations? For the analysis, I adopted a holistic approach (Patton, 1980), combining elements of deductive and inductive theory-building methodology (Eisenhardt, 1989b; Eisenhardt & Graebner, 2007; Glaser & Strauss, 1967; Van de Ven, 1992) with rigorous qualitative data analysis. Primary data were collected through 90 semi-structured interviews, conducted in two rounds with executives and managers in three multinational companies and their respective stakeholders. Raw data originating from interview tapes, field notes, and contact sheets were processed, stored, and managed using the software program QSR NVIVO 7. In the analysis, I applied qualitative methods to strengthen the interpretative part as well as quantitative methods to identify dominating dimensions and patterns. I found three different coping behaviors that provide insights into the corporate mindset. The results suggest that multinational corporations increasingly turn towards relational approaches to CSR to achieve moral legitimacy in formalized dialogical exchanges with their stakeholders, since legitimacy can no longer be derived from a national framework alone. I also looked at the degree to which they have reacted to the postnational constellation by assuming former state duties, and at the underlying reasoning. The findings indicate that CSR approaches become increasingly comprehensive through integrating political strategies that reflect the growing (self-)perception of multinational companies as political actors. Based on the results, I developed a model that relates the different dimensions of corporate responsibility to the discussion on deliberative democracy, global governance and social innovation, to provide guidance for multinational companies in a postnational world. With my thesis, I contribute to management research by (i) delivering a comprehensive critique of the mainstream CSR literature and (ii) filling the gap of thorough qualitative research on CSR in a globalizing world using the CSR-character as an empirical device, and to organizational studies by (iii) further advancing the deliberative view of the firm proposed by Scherer and Palazzo (2008).
Abstract:
BACKGROUND: Intravenously administered antimicrobial agents have been the standard choice for the empirical management of fever in patients with cancer and granulocytopenia. If orally administered empirical therapy were as effective as intravenous therapy, it would offer advantages such as improved quality of life and lower cost. METHODS: In a prospective, open-label, multicenter trial, we randomly assigned febrile patients with cancer who had granulocytopenia that was expected to resolve within 10 days to receive empirical therapy with either oral ciprofloxacin (750 mg twice daily) plus amoxicillin-clavulanate (625 mg three times daily) or standard daily doses of intravenous ceftriaxone plus amikacin. All patients were hospitalized until their fever resolved. The primary objective of the study was to determine whether there was equivalence between the regimens, defined as an absolute difference in the rates of success of 10 percent or less. RESULTS: Equivalence was demonstrated at the second interim analysis, and the trial was terminated after the enrollment of 353 patients. In the analysis of the 312 patients who were treated according to the protocol and who could be evaluated, treatment was successful in 86 percent of the patients in the oral-therapy group (95 percent confidence interval, 80 to 91 percent) and 84 percent of those in the intravenous-therapy group (95 percent confidence interval, 78 to 90 percent; P=0.02). The results were similar in the intention-to-treat analysis (80 percent and 77 percent, respectively; P=0.03), as were the duration of fever, the time to a change in the regimen, the reasons for such a change, the duration of therapy, and survival. The types of adverse events differed slightly between the groups but were similar in frequency. CONCLUSIONS: In low-risk patients with cancer who have fever and granulocytopenia, oral therapy with ciprofloxacin plus amoxicillin-clavulanate is as effective as intravenous therapy.
Abstract:
This Phase II follow-up study of IHRB Project TR-473 focused on the performance evaluation of rubblized pavements in Iowa. The primary objective of this study was to evaluate the structural condition of existing rubblized concrete pavements across Iowa through Falling Weight Deflectometer (FWD) tests, Dynamic Cone Penetrometer (DCP) tests, visual pavement distress surveys, and related measurements. Through backcalculation of FWD deflection data using Iowa State University's advanced layer moduli backcalculation program, the rubblized layer moduli were determined for the various projects and compared across projects to correlate with long-term pavement performance. The AASHTO structural layer coefficient for the rubblized layer was also calculated from the rubblized layer moduli. To validate the mechanistic-empirical (M-E) hot mix asphalt (HMA) overlay thickness design procedure developed during the Phase I study, the actual HMA overlay thicknesses from the rubblization projects were compared with the predicted thicknesses obtained from the design software. The results of this study show that rubblization is a valid option for the rehabilitation of portland cement concrete pavements in Iowa, provided the foundation is strong enough to support construction operations during the rubblization process. The M-E structural design methodology developed during Phase I can estimate the HMA overlay thickness reasonably well to achieve long-lasting performance of HMA pavements. The rehabilitation strategy is recommended for continued use in Iowa under conditions conducive to rubblization.