57 results for "qualitative data analysis"


Relevance: 90.00%

Abstract:

Analyzing functional data often leads to finding common factors, for which functional principal component analysis proves to be a useful tool to summarize and characterize the random variation in a function space. The representation in terms of eigenfunctions is optimal in the sense of L² approximation. However, the eigenfunctions are not always directed towards an interesting and interpretable direction in the context of functional data and thus could obscure the underlying structure. To overcome this difficulty, an alternative to functional principal component analysis is proposed that produces directed components which may be more informative and easier to interpret. These structural components are similar to principal components, but are adapted to situations in which the domain of the function may be decomposed into disjoint intervals such that there is effectively independence between intervals and positive correlation within intervals. The approach is demonstrated with synthetic examples as well as real data. Properties for special cases are also studied.
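
For readers unfamiliar with the mechanics, the following is a minimal sketch of grid-based functional PCA, the baseline this paper builds on: curves sampled on a common grid are centered, the sample covariance is eigendecomposed, and the leading eigenvectors play the role of eigenfunctions. All data and parameter values here are synthetic stand-ins, not the paper's.

```python
import numpy as np

# Invented functional sample: n noisy curves observed on a common grid.
rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 100)
n = 50
curves = np.sin(2 * np.pi * grid) + rng.normal(scale=0.3, size=(n, grid.size))

# Center the sample and eigendecompose its covariance; on a grid, the leading
# eigenvectors approximate the FPCA eigenfunctions.
centered = curves - curves.mean(axis=0)
cov = centered.T @ centered / n
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]              # sort by decreasing variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

scores = centered @ eigvecs[:, :3]             # first three component scores
print("variance explained:", (eigvals[:3] / eigvals.sum()).round(3))
```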

Relevance: 90.00%

Abstract:

OBJECTIVE: Previous literature has suggested that laws and regulations may impact the use of palliative sedation. Our present study compares the attitudes of French-speaking physicians practicing in the Quebec and Swiss environments, where different laws are in place regarding physician-assisted suicide. METHOD: Data were drawn from two prior studies, one by Blondeau and colleagues and another by Beauverd and coworkers, employing the same two-by-two experimental design with length of prognosis and type of suffering as independent variables. Both the effect of these variables and the effect of their interaction on Swiss and Quebec physicians' attitudes toward sedation were compared. The written comments of respondents were submitted to a qualitative content analysis and summarized in a comparative perspective. RESULTS: The analysis of variance showed that only the type of suffering had an effect on physicians' attitudes toward sedation. The results of the Wilcoxon test indicated that the attitudes of physicians from Quebec and Switzerland tended to be different for two vignettes: long-term prognosis with existential suffering (p = 0.0577) and short-term prognosis with physical suffering (p = 0.0914). In both cases, the Swiss physicians were less prone to palliative sedation. SIGNIFICANCE OF RESULTS: The attitudes of physicians from Quebec and Switzerland toward palliative sedation, particularly regarding prognosis and type of suffering, seem similar. However, the results suggest that physicians from Quebec could be slightly more open to palliative sedation, even though most were not in favor of this practice as an answer to end-of-life existential suffering.
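
For readers who want to reproduce this kind of two-group comparison, a Wilcoxon test is a one-liner in standard tooling. The sketch below uses SciPy's rank-sum variant for two independent groups on invented ratings; the published analysis may have used a different variant or software, and none of the numbers here are study data.

```python
from scipy import stats

# Invented attitude ratings (higher = more favorable to sedation) for one
# vignette; two independent groups of physicians, as in the comparison above.
quebec = [7, 8, 6, 9, 7, 8, 5, 7, 6, 8]
swiss = [5, 6, 4, 7, 5, 6, 3, 5, 6, 4]

stat, p = stats.ranksums(quebec, swiss)   # two-sample Wilcoxon rank-sum test
print(f"statistic = {stat:.3f}, p = {p:.4f}")
```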

Relevance: 90.00%

Abstract:

Hypnosis is recognised in medicine as an effective complementary therapy. However, few qualitative data are available concerning the benefits it may bring. This qualitative exploratory study aimed to examine the contribution of hypnosis to the care of advanced cancer patients. The results demonstrate that hypnosis is an effective and efficient means of developing the resources of people suffering from serious illness. After an average of four hypnotherapy sessions, patients said they were able to locate previously unexploited resources within themselves and were able to become autonomous in the use of self-hypnosis. The major benefit reported was a reduction in anxiety. For patients experiencing anxiety about death, hypnosis allowed them, within a therapeutic environment perceived as safe, to explore different facets of their fears and to develop adaptive strategies. Aside from slight fatigue experienced during the sessions, no adverse side-effects were reported. In conclusion, this study exploring the effects of hypnosis allowed us to identify important benefits for patients suffering from advanced cancer. Replication on a larger scale is therefore recommended, in order to ascertain how far these results can be generalised and to better define the characteristics of the patients most likely to benefit from this therapy.

Relevance: 90.00%

Abstract:

PRINCIPLES: Advance directives are seen as an important tool for documenting the wishes of patients who are no longer competent to make decisions regarding their medical care. By its nature, approaching the subject of advance directives can be difficult for both the medical care provider and the patient. This paper focuses on general practitioners' perspectives regarding the timing of this discussion, as well as the advantages and disadvantages of the different moments. METHODS: In 2013, 23 semi-structured face-to-face interviews were performed with Swiss general practitioners. Interviews were analysed using qualitative content analysis. RESULTS: The 23 general practitioners in our sample identified different moments they felt were appropriate: either (a) when the patient is still healthy, (b) when illness becomes predominant, or (c) when a patient has been transferred to a long-term care facility. Furthermore, general practitioners reported uncertainty and discomfort about initiating the discussion. CONCLUSION: The distinct approaches, perspectives and rationales show that there is no well-defined or "right" moment. However, participants often associated advance directives with death. This link caused discomfort and uncertainty, which led to hesitation and delay on the part of general practitioners. We therefore recommend further training on how to professionally initiate a conversation about advance directives. Furthermore, based on our results and experience, we recommend an early approach with healthy patients, paired with regular updates later on, as this seems to be the most effective way to inform patients about their end-of-life care options.

Relevance: 90.00%

Abstract:

The aim of this IRB-approved study was to prospectively analyze quality of life (QOL) and psychological changes in 30 end-stage renal disease (ESRD) patients before and after kidney transplantation (KT). Semi-structured interviews were conducted after inclusion on the waiting list (A). Follow-up interviews were performed 6 months later with patients still awaiting KT (B6, n=15), and with transplant recipients 6, 12 and 24 months after KT (C6, n=15; C12, n=15; C24, n=14). Qualitative thematic analysis was performed. A: All patients reported loss of freedom, 87% tried to maintain normality, and 57% modified medical directives. All mentioned emotional fragility, negative thoughts (43%), and suicidal thoughts (20%) related to loss of QOL from dialysis (D), and professional tension (26%). B6: 40% reported no change compared to baseline, while 60% mentioned increased illness intrusiveness, 46% side effects of D, 40% communication problems, and 33% concerns about how the waiting list was handled. Fear of emotional breakdown (40%), couple problems (47%), and worsened professional difficulties (20%) were reported. C6: All patients reported recovery of QOL and concerns about acute rejection; 73% were anxious about laboratory results. 93% felt dependent on immunosuppressants (IS), 47% reported difficulties coping with their regimen, and 47% were concerned about side effects; 67% had resumed work, but medical constraints led 40% to professional stigmatization. C12: All enjoyed good QOL. Adherence to IS was mandatory (100%). All were aware of the limited long-term graft survival, and 47% were anxious about a possible return to D. 60% underlined positive life values; 47% had resumed a full-time job; 40% were on social security. C24: Good QOL was underlined (86%). Patients stated they would prefer re-transplantation to resuming D (71%). Post-transplant health problems were mentioned (64%); increases in creatinine levels induced fear (36%). 79% complained about side effects. 64% reported changes in life values. This study reveals positive QOL and psychological transformations after KT, associated with positive changes related to graft survival and freedom from D. Psychological follow-up should be offered to patients who face relapsing ESRD or post-transplant co-morbidities.

Relevance: 90.00%

Abstract:

General Introduction: This thesis can be divided into two main parts: the first, corresponding to the first three chapters, studies Rules of Origin (RoOs) in Preferential Trade Agreements (PTAs); the second part, the fourth chapter, is concerned with Anti-Dumping (AD) measures. Despite wide-ranging preferential access granted to developing countries by industrial ones under North-South trade agreements (whether reciprocal, like the Europe Agreements (EAs) or NAFTA, or non-reciprocal, such as the GSP, AGOA, or EBA), it has been claimed that the benefits from improved market access keep falling short of the full potential benefits. RoOs are largely regarded as a primary cause of the under-utilization of the improved market access of PTAs. RoOs are the rules that determine the eligibility of goods for preferential treatment. Their economic justification is to prevent trade deflection, i.e. to prevent non-preferred exporters from using the tariff preferences. However, they are complex, cost-raising and cumbersome, and can be manipulated by organised special-interest groups. As a result, RoOs can restrain trade beyond what is needed to prevent trade deflection and hence restrict market access in a statistically significant and quantitatively large proportion. Part I: In order to further our understanding of the effects of RoOs in PTAs, the first chapter, written with Prof. Olivier Cadot, Celine Carrère and Prof. Jaime de Melo, describes and evaluates the RoOs governing EU and US PTAs. It draws on utilization-rate data for Mexican exports to the US in 2001 and on similar data for ACP exports to the EU in 2002. The paper makes two contributions. First, we construct an R-index of restrictiveness of RoOs along the lines first proposed by Estevadeordal (2000) for NAFTA, modifying it and extending it for the EU's single list (SL). This synthetic R-index is then used to compare RoOs under NAFTA and PANEURO. The two main findings of the chapter are as follows. First, it shows, in the case of PANEURO, that the R-index is useful for summarizing how countries are differently affected by the same set of RoOs because of their different export baskets to the EU. Second, it is shown that the R-index is a relatively reliable statistic in the sense that, subject to caveats, after controlling for the extent of tariff preference at the tariff-line level, it accounts for differences in utilization rates at that level. Finally, together with utilization rates, the index can be used to estimate the total compliance costs of RoOs. The second chapter proposes a reform of preferential RoOs with the aim of making them more transparent and less discriminatory. Such a reform would make preferential blocs more "cross-compatible" and would therefore facilitate cumulation. It would also help move regionalism toward more openness and hence make it more compatible with the multilateral trading system. It focuses on NAFTA, one of the most restrictive FTAs (see Estevadeordal and Suominen 2006), and proposes a way forward that is close in spirit to what the EU Commission is considering for the PANEURO system. In a nutshell, the idea is to replace the current array of RoOs by a single instrument: Maximum Foreign Content (MFC). An MFC is a conceptually clear and transparent instrument, like a tariff. Therefore, changing all instruments into an MFC would bring improved transparency, much like the "tariffication" of non-tariff barriers (NTBs).
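
To make the proposed instrument concrete, here is a toy sketch (all sector names, shares, and thresholds invented) of what replacing a heterogeneous array of per-line rules with a single MFC ceiling means operationally: a tariff line qualifies for preferences when its foreign-input share does not exceed the ceiling.

```python
# Invented foreign-input shares by tariff line (fraction of the good's value).
foreign_share = {"chemicals": 0.35, "apparel": 0.55, "wood": 0.20, "metals": 0.45}

# Stylized status quo: a heterogeneous array of per-line content thresholds,
# standing in for the varied instruments actual RoOs use.
per_line_rule = {"chemicals": 0.40, "apparel": 0.30, "wood": 0.60, "metals": 0.50}
status_quo = {s: foreign_share[s] <= per_line_rule[s] for s in foreign_share}

# Reform: one uniform, transparent MFC ceiling across all lines.
MFC = 0.40
uniform = {s: foreign_share[s] <= MFC for s in foreign_share}

print("status quo :", status_quo)   # eligibility, line by line
print("uniform MFC:", uniform)
```
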
The methodology for this exercise is as follows. In step 1, I estimate the relationship between utilization rates, tariff preferences and RoOs. In step 2, I retrieve the estimates and invert the relationship to get a simulated MFC that gives, line by line, the same utilization rate as the old array of RoOs. In step 3, I calculate the trade-weighted average of the simulated MFC across all lines to get an overall equivalent of the current system, and explore the possibility of setting this unique instrument at a uniform rate across lines. This would have two advantages. First, like a uniform tariff, a uniform MFC would make it difficult for lobbies to manipulate the instrument at the margin. This argument is standard in the political-economy literature and has been used time and again in support of reductions in the variance of tariffs (together with standard welfare considerations). Second, uniformity across lines is the only way to eliminate the indirect source of discrimination alluded to earlier: only if two countries face uniform RoOs and tariff preferences will they face uniform incentives irrespective of their initial export structure. The result of this exercise is striking: the average simulated MFC is 25% of the good's value, a very low (i.e. restrictive) level, confirming Estevadeordal and Suominen's critical assessment of NAFTA's RoOs. Adopting a uniform MFC would imply a relaxation from the benchmark level for sectors like chemicals or textiles & apparel, and a stiffening for wood products, paper and base metals. Overall, however, the changes are not drastic, suggesting perhaps only moderate resistance to change from special interests. The third chapter of the thesis considers whether the Europe Agreements of the EU, with the current sets of RoOs, could be the model for future EU-centered PTAs. First, I studied and coded, at the six-digit level of the Harmonised System (HS), both the old RoOs used before 1997 and the "single list" RoOs used since 1997. Second, using a constant-elasticity-of-transformation function in which exporters in the Central and Eastern European countries (CEECs) smoothly allocate sales between the EU and the rest of the world by comparing producer prices on each market, I estimated the trade effects of the EU's RoOs. The estimates suggest that much of the market access conferred by the EAs, outside sensitive sectors, was undone by the cost-raising effects of RoOs. The chapter also contains an analysis of the evolution of the CEECs' trade with the EU from post-communism to accession. Part II: The last chapter of the thesis is concerned with anti-dumping, another trade-policy instrument having the effect of reducing market access. In 1995, the Uruguay Round introduced into the Anti-Dumping Agreement (ADA) a mandatory "sunset review" clause (Article 11.3 ADA) under which anti-dumping measures should be reviewed no later than five years from their imposition and terminated unless there was a serious risk of resumption of injurious dumping. The last chapter, written with Prof. Olivier Cadot and Prof. Jaime de Melo, uses a new database on anti-dumping measures worldwide to assess whether the sunset-review agreement had any effect. The question we address is whether the WTO Agreement succeeded in imposing the discipline of a five-year cycle on AD measures and, ultimately, in curbing their length. Two methods are used: count-data analysis and survival analysis.
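
Before turning to the results, here is a hedged sketch of the two toolkits just named, a Poisson count regression and a Cox proportional-hazards regression, on invented miniature data (using statsmodels and lifelines); the chapter's actual specifications, controls, and data are far richer.

```python
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter

# Invented yearly panel for the count-data side: revocations of AD measures
# regressed on initiations lagged five years, before/after the 1995 agreement.
panel = pd.DataFrame({
    "revocations": [2, 4, 3, 5, 6, 9, 7, 8],
    "init_lag5":   [3, 4, 6, 5, 7, 9, 8, 10],
    "post_ada":    [0, 0, 0, 0, 1, 1, 1, 1],
})
count_fit = smf.poisson("revocations ~ init_lag5 * post_ada", data=panel).fit()
print(count_fit.params)      # interaction picks up any post-agreement change

# Invented measure-level data for the survival side: lifetime of each AD
# measure, whether it ended (revocation), and whether the ADA covered it.
measures = pd.DataFrame({
    "duration": [4.8, 5.0, 4.2, 6.5, 3.9, 5.5, 8.0, 7.2, 9.1, 6.8],
    "revoked":  [1, 1, 1, 1, 1, 1, 0, 1, 0, 1],
    "covered":  [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
})
cph = CoxPHFitter().fit(measures, duration_col="duration", event_col="revoked")
cph.print_summary()          # positive coefficient on "covered" = higher hazard
```
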
First, using Poisson and negative binomial regressions, the count of AD-measure revocations is regressed on (inter alia) the count of initiations lagged five years. The analysis yields a coefficient on measures' initiations lagged five years that is larger and more precisely estimated after the agreement than before, suggesting some effect. However, the coefficient estimate is nowhere near the value that would give a one-for-one relationship between initiations and revocations after five years. We also find that (i) if the agreement affected EU AD practices, the effect went the wrong way, the five-year cycle being quantitatively weaker after the agreement than before; and (ii) the agreement had no visible effect on the United States except for a one-time peak in 2000, suggesting a mopping-up of old cases. Second, the survival analysis of AD measures around the world suggests a shortening of their expected lifetime after the agreement, and this shortening effect (a downward shift in the survival function post-agreement) was larger and more significant for measures targeted at WTO members than for those targeted at non-members (for which WTO disciplines do not bind), suggesting that compliance was de jure. A difference-in-differences Cox regression confirms this diagnosis: controlling for the countries imposing the measures, for the investigated countries and for the products' sector, we find a larger increase in the hazard rate of AD measures covered by the Agreement than for other measures.

Relevance: 90.00%

Abstract:

Recently, kernel-based machine learning methods have gained great popularity in many data analysis and data mining fields: pattern recognition, biocomputing, speech and vision, engineering, remote sensing, etc. The paper describes the use of kernel methods for processing large datasets from environmental monitoring networks. Several typical problems of the environmental sciences and their solutions provided by kernel-based methods are considered: classification of categorical data (soil-type classification), mapping of continuous environmental and pollution information (soil pollution by radionuclides), and mapping with auxiliary information (climatic data from the Aral Sea region). Promising developments, such as automatic detection of emergency hot spots and optimization of monitoring networks, are discussed as well.
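
As a concrete illustration of the kernel-methods workflow described above, the following sketch trains an RBF-kernel support vector classifier on invented geo-referenced samples standing in for a soil-type classification task; the features, labels, and hyperparameters are all placeholders, not the paper's data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Invented samples: (x, y) coordinates plus one measured covariate, labeled
# with one of three soil types derived from a made-up rule.
rng = np.random.default_rng(2)
X = rng.uniform(0, 10, size=(300, 3))
y = (X[:, 0] + X[:, 1] > 10).astype(int) + (X[:, 2] > 7)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)  # RBF-kernel SVM
print("held-out accuracy:", clf.score(X_te, y_te))
```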

Relevance: 90.00%

Abstract:

Until recently, the hard X-ray, phase-sensitive imaging technique called grating interferometry was thought to provide information only in real space. However, by utilizing an alternative approach to data analysis we demonstrated that the angular-resolved ultra-small-angle X-ray scattering distribution can be retrieved from experimental data. Thus, reciprocal-space information is accessible by grating interferometry in addition to real-space information. Naturally, the quality of the retrieved data strongly depends on the performance of the employed analysis procedure, which in this context involves deconvolution of periodic and noisy data. The aim of this article is to compare several deconvolution algorithms for retrieving the ultra-small-angle X-ray scattering distribution in grating interferometry. We quantitatively compare the performance of three deconvolution procedures (Wiener, iterative Wiener and Lucy-Richardson) on realistically modeled, noisy and periodic input data. The simulations showed that the Lucy-Richardson algorithm is the most reliable and efficient given the characteristics of the signals in this context. The availability of a reliable data analysis procedure is essential for future developments in grating interferometry.
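
The Lucy-Richardson (Richardson-Lucy) iteration itself is compact enough to sketch. Below is a plain 1-D implementation on an invented periodic, noisy test signal; the article's actual implementation, point-spread functions, and noise model are not reproduced here.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(data, psf, n_iter=50, eps=1e-12):
    """Plain Richardson-Lucy deconvolution for a 1-D nonnegative signal."""
    est = np.full_like(data, data.mean())
    psf_mirror = psf[::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(est, psf, mode="same")
        # Multiplicative update: reblur the ratio of data to current estimate.
        est *= fftconvolve(data / (blurred + eps), psf_mirror, mode="same")
    return est

# Invented periodic test signal, blurred by a Gaussian kernel plus noise.
x = np.linspace(0, 4 * np.pi, 512)
truth = 1 + np.sin(x) ** 2
psf = np.exp(-np.linspace(-3, 3, 31) ** 2)
psf /= psf.sum()
rng = np.random.default_rng(3)
observed = fftconvolve(truth, psf, mode="same") + rng.normal(0, 0.01, x.size)

recovered = richardson_lucy(observed, psf)
print("max abs error:", np.abs(recovered - truth).max())
```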

Relevance: 90.00%

Abstract:

The coverage and volume of geo-referenced datasets are extensive and incessantly growing. The systematic capture of geo-referenced information generates large volumes of spatio-temporal data to be analyzed. Clustering and visualization play a key role in exploratory data analysis and in the extraction of knowledge embedded in these data. However, the special characteristics of such data pose new challenges for visualization and clustering: complex structures, large numbers of samples, variables involved in a temporal context, high dimensionality, and large variability in cluster shapes. The central aim of my thesis is to propose new algorithms and methodologies for clustering and visualization, in order to assist knowledge extraction from spatio-temporal geo-referenced data and thus improve decision-making processes. I present two original algorithms, one for clustering, the Fuzzy Growing Hierarchical Self-Organizing Networks (FGHSON), and one for exploratory visual data analysis, the Tree-structured Self-Organizing Map Component Planes. In addition, I present methodologies that, combined with FGHSON and the Tree-structured SOM Component Planes, allow space and time to be integrated seamlessly and simultaneously in order to extract knowledge embedded in a temporal context. The originality of the FGHSON lies in its capability to reflect the underlying structure of a dataset in a hierarchical, fuzzy way. A hierarchical fuzzy representation of clusters is crucial when data include complex structures with large variability in cluster shapes, variances, densities, and numbers of clusters. The most important characteristics of the FGHSON are: (1) it does not require an a priori setup of the number of clusters; (2) the algorithm executes several self-organizing processes in parallel, so when dealing with large datasets the processes can be distributed, reducing the computational cost; and (3) only three parameters are necessary to set up the algorithm. In the case of the Tree-structured SOM Component Planes, the novelty lies in the ability to create a structure that allows visual exploratory analysis of large, high-dimensional datasets. This algorithm creates a hierarchical structure of Self-Organizing Map Component Planes, arranging similar variables' projections in the same branches of the tree. Hence, similarities in variables' behavior can be easily detected (e.g. local correlations, maximal and minimal values, and outliers). Both FGHSON and the Tree-structured SOM Component Planes were applied to several agroecological problems, proving to be very efficient in the exploratory analysis and clustering of spatio-temporal datasets. In this thesis I also tested three soft competitive learning algorithms: two well-known unsupervised soft competitive algorithms, namely the Self-Organizing Maps (SOMs) and the Growing Hierarchical Self-Organizing Maps (GHSOMs), and our original contribution, the FGHSON. Although these algorithms have been used in several areas, to my knowledge no previous work has applied and compared their performance on spatio-temporal geospatial data as is done in this thesis. I propose original methodologies to explore spatio-temporal geo-referenced datasets through time. Our approach uses time windows to capture temporal similarities and variations by using the FGHSON clustering algorithm.
The developed methodologies are used in two case studies. In the first, the objective was to find similar agroecozones through time; in the second, it was to find similar environmental patterns shifted in time. Several results presented in this thesis have led to new contributions to agroecological knowledge, for instance in sugar cane and blackberry production. Finally, in the framework of this thesis we developed several software tools: (1) a Matlab toolbox that implements the FGHSON algorithm, and (2) a program called BIS (Bio-inspired Identification of Similar agroecozones), an interactive graphical-user-interface tool that integrates the FGHSON algorithm with Google Earth in order to show zones with similar agroecological characteristics.
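
FGHSON itself is not reproduced here, but the self-organizing principle it extends is easy to sketch. Below is a minimal plain SOM in numpy (no hierarchy, fuzziness, or growth, which are FGHSON's contributions) trained on invented feature vectors standing in for per-zone spatio-temporal descriptors; all parameters are illustrative.

```python
import numpy as np

def train_som(data, rows=6, cols=6, n_iter=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal online SOM: one sample per step, Gaussian neighborhood."""
    rng = np.random.default_rng(seed)
    weights = rng.uniform(data.min(), data.max(), (rows, cols, data.shape[1]))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    for t in range(n_iter):
        frac = t / n_iter
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
        x = data[rng.integers(len(data))]
        # Best-matching unit: the codebook vector closest to the sample.
        bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)),
                               (rows, cols))
        dist2 = ((grid - np.array(bmu)) ** 2).sum(-1)
        h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]   # neighborhood kernel
        weights += lr * h * (x - weights)
    return weights

# Invented feature vectors drawn around three separated centers.
rng = np.random.default_rng(4)
data = np.vstack([rng.normal(m, 0.3, (50, 4)) for m in (0.0, 1.5, 3.0)])
som = train_som(data)
print("codebook shape:", som.shape)
```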

Relevance: 90.00%

Abstract:

Research on regulation has crossed paths with the literature on policy instruments, showing that regulatory policy instruments contain cognitive and normative beliefs about policy. Thus, their usage stacks the deck in favor of one type of actor or one type of regulatory solution. In this article, we challenge the assumption that there is a predetermined relationship between ideas, regulatory policy instruments, and outcomes. We argue that different combinations of conditions lead to different outcomes, depending on how actors use the instrument. Empirically, we analyze 31 EU and UK case studies of regulatory impact assessment (RIA) - a regulatory policy instrument that has been pivotal in the so-called better regulation movement. We distinguish four main usages of RIA, that is, political, instrumental, communicative, and perfunctory. We find that in our sample instrumental usage is not so rare and that the contrast between communicative and political usages is less stark than is commonly thought. In terms of policy recommendations, our analysis suggests that there may be different paths to desirable outcomes. Policymakers should therefore explore different combinations of conditions leading to the usages they deem desirable rather than arguing for a fixed menu of variables.

Relevance: 90.00%

Abstract:

The use of synthetic combinatorial peptide libraries in positional scanning format (PS-SCL) has recently emerged as an alternative approach for the identification of peptides recognized by T lymphocytes. The choice of both the PS-SCL used for screening experiments and the method used for data analysis is crucial for implementing this approach. With this aim, we tested the recognition of different PS-SCL by a tyrosinase 368-376-specific CTL clone and analyzed the data with a recently developed biometric data analysis based on a model of independent and additive contributions of individual amino acids to peptide antigen recognition. Mixtures defined with amino acids present at the corresponding positions in the native sequence were among the most active for all of the libraries. Somewhat surprisingly, a higher number of native amino acids were identifiable by using amidated COOH-terminal rather than free COOH-terminal PS-SCL. Also, our data clearly indicate that when using PS-SCL longer than optimal, frame shifts occur frequently and should be taken into account. Biometric analysis of the data obtained with the amidated COOH-terminal nonapeptide library allowed the identification of the native ligand as the sequence with the highest score in a public human protein database. However, the adequacy of the PS-SCL data for the identification of the peptide ligand varied depending on the PS-SCL used. Altogether these results provide insight into the potential of PS-SCL for the identification of CTL-defined tumor-derived antigenic sequences and may significantly improve our ability to interpret the results of these analyses.
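
The additive model underlying the biometric analysis can be sketched directly: each amino acid contributes an independent per-position score, a peptide's score is the sum, and candidate ligands are found by scanning protein sequences for the highest-scoring window. All scores and the protein fragment below are invented placeholders, not the study's screening data.

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"

# Invented per-position scores standing in for PS-SCL screening activities:
# one value per amino acid per position of a 9-mer.
rng = np.random.default_rng(5)
pssm = {pos: dict(zip(AA, rng.random(len(AA)))) for pos in range(9)}

def score(peptide, pssm):
    """Additive model: peptide score = sum of independent per-position scores."""
    return sum(pssm[i][aa] for i, aa in enumerate(peptide))

def scan(protein, pssm, k=9):
    """Score every k-mer of a protein and return the best-scoring window."""
    windows = [(score(protein[i:i + k], pssm), protein[i:i + k])
               for i in range(len(protein) - k + 1)]
    return max(windows)

print(scan("MLLAVLYCLLWSFQTSAGHFPRACVSSKNL", pssm))  # fragment is illustrative
```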

Relevance: 90.00%

Abstract:

Quantitative information from magnetic resonance imaging (MRI) may substantiate clinical findings and provide additional insight into the mechanism of clinical interventions in therapeutic stroke trials. The PERFORM study is exploring the efficacy of terutroban versus aspirin for secondary prevention in patients with a history of ischemic stroke. We report on the design of an exploratory longitudinal MRI follow-up study that was performed in a subgroup of the PERFORM trial. An international multi-centre longitudinal follow-up MRI study was designed for different MR systems, employing safety and efficacy readouts: new T2 lesions, new DWI lesions, whole-brain volume change, hippocampal volume change, changes in tissue microstructure as depicted by mean diffusivity and fractional anisotropy, vessel patency on MR angiography, and the presence and development of new microbleeds. A total of 1,056 patients (men and women ≥ 55 years) were included. The data analysis included 3D reformation, image registration of different contrasts, tissue segmentation, and automated lesion detection. This large international multi-centre study demonstrates how new MRI readouts can be used to provide key information on the evolution of lesions in cerebral tissue and the macrovasculature after atherothrombotic stroke in a large sample of patients.
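
One building block of such a pipeline, registration of two MR contrasts, can be sketched with SimpleITK. The example below registers two synthetic 3-D Gaussian blobs with a translation transform and mutual information; the trial's actual registration pipeline is not described here, and every parameter and function choice below is an illustrative assumption.

```python
import SimpleITK as sitk

# Two synthetic "contrasts": Gaussian blobs whose centres differ by a small
# translation (stand-ins for e.g. baseline and follow-up volumes).
fixed = sitk.GaussianSource(sitk.sitkFloat32, size=[64, 64, 64],
                            sigma=[8.0, 8.0, 8.0], mean=[32.0, 32.0, 32.0])
moving = sitk.GaussianSource(sitk.sitkFloat32, size=[64, 64, 64],
                             sigma=[8.0, 8.0, 8.0], mean=[35.0, 30.0, 33.0])

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0, minStep=1e-4,
                                             numberOfIterations=200)
reg.SetInitialTransform(sitk.TranslationTransform(3), inPlace=False)
reg.SetInterpolator(sitk.sitkLinear)

tx = reg.Execute(fixed, moving)
aligned = sitk.Resample(moving, fixed, tx, sitk.sitkLinear, 0.0)
print(tx.GetParameters())    # should recover roughly the (3, -2, 1) offset
```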

Relevance: 90.00%

Abstract:

The focus of my PhD research was the concept of modularity. In the last 15 years, modularity has become a classic term in different fields of biology. On the conceptual level, a module is a set of interacting elements that remain mostly independent from the elements outside of the module. I used modular analysis techniques to study gene expression evolution in vertebrates. In particular, I identified "natural" modules of gene expression in mouse and human, and I showed that expression of organ-specific and system-specific genes tends to be conserved between vertebrates as distant as mammals and fishes. Also with a modular approach, I studied patterns of developmental constraints on transcriptome evolution. I showed that neither of the two commonly accepted models of the evolution of embryonic development ("evo-devo") is exclusively valid. In particular, I found that the conservation of the sequences of regulatory regions is highest during mid-development of zebrafish, which supports the "hourglass model". In contrast, events of gene duplication and new gene introduction are rarest in early development, which supports the "early conservation model". In addition to the biological insights on transcriptome evolution, I have also discussed in detail the advantages of modular approaches in large-scale data analysis. Moreover, I re-analyzed several studies (published in high-ranking journals) and showed that their conclusions do not hold up under detailed analysis. This demonstrates that complex analysis of high-throughput data requires co-operation between biologists, bioinformaticians, and statisticians.
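
The generic module-detection step can be illustrated with standard tools: build a co-expression graph by thresholding pairwise correlations, then extract communities by modularity maximization. The sketch below uses networkx on synthetic expression profiles with two built-in modules; it is a stand-in for the idea, not the thesis's actual method.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Synthetic expression matrix with two built-in modules: each gene follows
# one of two orthogonal latent profiles plus noise (40 genes x 30 conditions).
rng = np.random.default_rng(6)
t = np.arange(30)
latents = [np.sin(2 * np.pi * t / 30), np.cos(2 * np.pi * t / 30)]
expr = np.vstack([lat + rng.normal(0, 0.3, (20, t.size)) for lat in latents])
corr = np.corrcoef(expr)

# Co-expression graph: connect gene pairs with strongly correlated profiles.
G = nx.Graph((i, j) for i in range(len(corr)) for j in range(i + 1, len(corr))
             if corr[i, j] > 0.6)

# Modules as communities found by greedy modularity maximization.
modules = greedy_modularity_communities(G)
print([sorted(m) for m in modules])   # expected: genes 0-19 and genes 20-39
```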

Relevance: 90.00%

Abstract:

This research was conducted in the context of the project IRIS 8A Health and Society (2002-2008) and was financially supported by the University of Lausanne. It aimed at developing a model based on elderly people's experience, and allowed us to develop a "Portrait evaluation" of fear of falling (FOF) using their examples and words. It is a very simple evaluation that can be used not only by professionals but also by elderly people themselves. The "Portrait evaluation" and the user's guide are freely accessible, but we would very much appreciate knowing whether other people or scientists have used it, and we welcome their comments (contact: Chantal.Piot-Ziegler@unil.ch). The purpose of this study is to create a model grounded in elderly people's experience, allowing the development of an original instrument to evaluate FOF. In a previous study, 58 semi-structured interviews were conducted with community-dwelling elderly people. The qualitative thematic analysis showed that fear of falling was defined through the functional, social and psychological long-term consequences of falls (Piot-Ziegler et al., 2007). In order to reveal patterns in the expression of fear of falling, an original qualitative thematic pattern analysis (QUAlitative Pattern Analysis, QUAPA) was developed and applied to these interviews. The results of this analysis show an internal coherence across the three dimensions (functional, social and psychological). Four different patterns were found, corresponding to four degrees of fear of falling. They are formalized in a fear-of-falling intensity model. This model leads to a portrait evaluation for fallers and non-fallers. The evaluation must be confronted with large samples of elderly people living in different environments. It presents an original alternative to the concept of self-efficacy for evaluating fear of falling in older people. The model of FOF presented in this article is grounded in elderly people's experience. It gives an experiential description of the three dimensions constitutive of FOF and of their evolution as fear increases, and defines an evaluation tool using situations and wordings based on elderly people's discourse.

Relevance: 90.00%

Abstract:

Geophysical techniques can help to bridge the inherent gap, with regard to spatial resolution and range of coverage, that plagues classical hydrological methods. This has led to the emergence of the new and rapidly growing field of hydrogeophysics. Given the differing sensitivities of various geophysical techniques to hydrologically relevant parameters, and their inherent trade-off between resolution and range, the fundamental usefulness of multi-method hydrogeophysical surveys for reducing uncertainties in data analysis and interpretation is widely accepted. A major challenge arising from such endeavors is the quantitative integration of the resulting vast and diverse database in order to obtain a unified model of the probed subsurface region that is internally consistent with all available data. To address this problem, we have developed a strategy for hydrogeophysical data integration based on Monte-Carlo-type conditional stochastic simulation, which we consider particularly suitable for local-scale studies characterized by high-resolution, high-quality datasets. Monte-Carlo-based optimization techniques are flexible and versatile, can accommodate a wide variety of data and constraints of differing resolution and hardness, and thus have the potential to provide, in a geostatistical sense, highly detailed and realistic models of the pertinent target-parameter distributions. Compared to more conventional approaches of this kind, our approach provides significant advancements in the way that the larger-scale deterministic information resolved by the hydrogeophysical data can be accounted for, which represents an inherently problematic and as yet unresolved aspect of Monte-Carlo-type conditional simulation techniques. We present the results of applying our algorithm to the integration of porosity-log and tomographic crosshole georadar data to generate stochastic realizations of the local-scale porosity structure. Our procedure is first tested on pertinent synthetic data and then applied to corresponding field data collected at the Boise Hydrogeophysical Research Site near Boise, Idaho, USA.
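
A stripped-down sketch of the Monte-Carlo-type conditional simulation idea: hold the hard (conditioning) porosity data fixed, and anneal the remaining cells by swapping values, which preserves the histogram exactly, toward an assumed spatial statistic. Everything here (grid, target, schedule) is an invented toy, far simpler than the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)
nrow, ncol = 20, 40

# Invented starting field: porosity values drawn from an assumed histogram.
field = rng.normal(0.25, 0.05, (nrow, ncol))

# Hard conditioning data (e.g., porosity logs in two boreholes) stay fixed.
hard = {(i, j): v for j, v in [(5, 0.30), (30, 0.20)] for i in range(nrow)}
for (i, j), v in hard.items():
    field[i, j] = v
free = [(i, j) for i in range(nrow) for j in range(ncol) if (i, j) not in hard]

def gamma_lag1(f):
    # Semivariogram at unit horizontal lag: the only spatial statistic
    # constrained in this toy (a real run would match several lags/directions).
    return 0.5 * np.mean((f[:, 1:] - f[:, :-1]) ** 2)

target = 5e-4                      # assumed short-range variogram value
cost = (gamma_lag1(field) - target) ** 2

# Annealing loop: swapping two free cells preserves the histogram and never
# touches the conditioning data; worse swaps are accepted with Metropolis
# probability at a slowly decreasing temperature.
T = 1e-8
for step in range(20000):
    (i1, j1), (i2, j2) = (free[k] for k in rng.integers(len(free), size=2))
    field[i1, j1], field[i2, j2] = field[i2, j2], field[i1, j1]
    new_cost = (gamma_lag1(field) - target) ** 2
    if new_cost > cost and rng.random() > np.exp((cost - new_cost) / T):
        field[i1, j1], field[i2, j2] = field[i2, j2], field[i1, j1]   # undo
    else:
        cost = new_cost
    T *= 0.9997

print("final variogram mismatch:", cost)
```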