893 results for Electrical impedance tomography, Calderon problem, factorization method
Abstract:
ABSTRACT (FRENCH)
This thesis, centered on the visual system in healthy subjects and in schizophrenic patients, is organized around three scientific articles, published or in press. The first article presents a new method for controlling the physical features of stimuli (luminance and spatial frequency). The second article shows, through analyses of EEG data, a magnocellular-pathway deficit in the visual processing of illusions in schizophrenic patients. This is demonstrated by the absence of modulation of the P1 component in schizophrenic patients, in contrast to healthy subjects, elicited by Kanizsa-type illusory stimuli at different eccentricities. Finally, the third article, also using electrical neuroimaging methods (EEG), shows that illusory contour processing takes place in the lateral occipital complex (LOC), using "misaligned gratings" illusions, and further reveals that the activity previously reported in primary visual areas is due to "top-down" inferences. To make these three articles accessible, the Introduction of this manuscript presents the essential concepts, together with time-frequency analysis methods. The Introduction is divided into four parts: the first presents the visual system, from retino-cortical cells through the regions composing the visual system to the two information-processing pathways. The second part presents schizophrenia through its diagnosis, its low-level deficits in the processing of visual stimuli, and its cognitive deficits. The third part presents illusory contour processing and the three models used in the last article. Finally, the methods for analysing the EEG data, including time-frequency methods, are described. The results of the three articles are presented in the chapter of that name, which also includes the results obtained with the time-frequency methods. Finally, the Discussion is organized along three axes: the time-frequency methods, together with a proposal for analysing these data with a reference-independent statistical method; the first article, whose discussion shows the quality of the stimulus processing; and the two neurophysiological articles, for which new experiments are proposed to refine the current results on the deficits of schizophrenic patients. This could make it possible to establish a reliable biological marker of schizophrenia.
ABSTRACT (ENGLISH)
This thesis focuses on the visual system in healthy subjects and schizophrenic patients. To address this research, advanced methods of analysis of electroencephalographic (EEG) data were used and developed. This manuscript is comprised of three scientific articles. The first article showed a novel method to control the physical features of visual stimuli (luminance and spatial frequencies). The second article showed, using electrical neuroimaging of EEG, a deficit in spatial processing associated with the dorsal pathway in chronic schizophrenic patients. This deficit was elicited by an absent modulation of the P1 component in terms of response strength and topography as well as source estimations. This deficit was orthogonal to the preserved ability to process Kanizsa-type illusory contours.
Finally, the third article resolved ongoing debates concerning the neural mechanism mediating illusory contour sensitivity by using electrical neuroimaging to show that the first differentiation of illusory contour presence vs. absence is localized within the lateral occipital complex. This effect was subsequent to modulations due to the orientation of misaligned grating stimuli. Collectively, these results support a model where effects in V1/V2 are mediated by "top-down" modulation from the LOC.
To understand these three articles, the Introduction of this thesis presents the major concepts used in these articles. Additionally, a section is devoted to time-frequency analysis methods not presented in the articles themselves. The Introduction is divided into four parts. The first part presents three aspects of the visual system: cellular, regional, and its functional interactions. The second part presents an overview of schizophrenia and its sensory-cognitive deficits. The third part presents an overview of illusory contour processing and the three models examined in the third article. Finally, advanced analysis methods for EEG are presented, including time-frequency methodology.
The Introduction is followed by a synopsis of the main results in the articles as well as those obtained from the time-frequency analyses.
Finally, the Discussion chapter is divided along three axes. The first axis discusses the time-frequency analysis and proposes a novel statistical approach that is independent of the reference. The second axis contextualizes the first article and discusses the quality of the stimulus control and directions for further improvement. Finally, both neurophysiologic articles are contextualized by proposing future experiments and hypotheses that may serve to improve our understanding of schizophrenia on the one hand and of visual functions more generally on the other.
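As background to the time-frequency methodology mentioned above, the following is a minimal sketch of a conventional Morlet-wavelet time-frequency decomposition of a single EEG channel. It is purely illustrative: the function name, the seven-cycle setting, and the synthetic signal are assumptions, not the thesis's actual pipeline.

```python
import numpy as np

def morlet_tf(signal, fs, freqs, n_cycles=7):
    """Morlet-wavelet time-frequency power of one channel (illustrative).
    Convolves the signal with complex Morlet wavelets, one per frequency,
    and returns power per (frequency, time) point."""
    n = len(signal)
    power = np.empty((len(freqs), n))
    t = np.arange(n) / fs - n / (2 * fs)          # wavelet support, centered at 0
    for i, f in enumerate(freqs):
        sigma = n_cycles / (2 * np.pi * f)        # Gaussian width for this frequency
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma**2))
        wavelet /= np.abs(wavelet).sum()          # crude amplitude normalization
        analytic = np.convolve(signal, wavelet, mode="same")
        power[i] = np.abs(analytic) ** 2          # time-frequency power
    return power

# Synthetic example: 2 s of "EEG" at 250 Hz with a 10 Hz burst in the second half
fs = 250
ts = np.arange(0, 2, 1 / fs)
eeg = np.random.randn(len(ts)) + np.sin(2 * np.pi * 10 * ts) * (ts > 1)
tf_power = morlet_tf(eeg, fs, freqs=np.arange(4, 40, 2))
```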
Abstract:
For a wide range of environmental, hydrological, and engineering applications there is a fast-growing need for high-resolution imaging. In this context, waveform tomographic imaging of crosshole georadar data is a powerful method able to provide images of pertinent electrical properties in near-surface environments with unprecedented spatial resolution. In contrast, conventional ray-based tomographic methods, which consider only a very limited part of the recorded signal (first-arrival traveltimes and maximum first-cycle amplitudes), suffer from inherent limitations in resolution and may prove inadequate in complex environments. For a typical crosshole georadar survey, the potential improvement in resolution when using waveform-based approaches instead of ray-based approaches is roughly one order of magnitude. Moreover, the spatial resolution of waveform-based inversions is comparable to that of common logging methods. While waveform tomographic imaging has become well established in exploration seismology over the past two decades, it remains comparatively underdeveloped in the georadar domain despite corresponding needs. Recently, different groups have presented finite-difference time-domain waveform inversion schemes for crosshole georadar data, which are adaptations and extensions of Tarantola's seminal nonlinear generalized least-squares approach developed for the seismic case. First applications of these new crosshole georadar waveform inversion schemes to synthetic and field data have shown promising results. However, little is known about the limits and performance of such schemes in complex environments. Accordingly, the general motivation of my thesis is to evaluate the robustness and limitations of waveform inversion algorithms for crosshole georadar data, in order to apply such schemes to a wide range of real-world problems.
One crucial issue in making any waveform scheme applicable and effective for real-world crosshole georadar problems is the accurate estimation of the source wavelet, which is unknown in reality. Waveform inversion schemes for crosshole georadar data require forward simulations of the wavefield in order to iteratively solve the inverse problem; accurate knowledge of the source wavelet is therefore critically important for the successful application of such schemes. Relatively small differences in the estimated source wavelet shape can lead to large differences in the resulting tomograms. In the first part of my thesis, I explore the viability and robustness of a relatively simple iterative deconvolution technique that incorporates the estimation of the source wavelet into the waveform inversion procedure rather than adding additional model parameters to the inversion problem. Extensive tests indicate that this source wavelet estimation technique is simple yet effective, and is able to provide remarkably accurate and robust estimates of the source wavelet in the presence of strong heterogeneity in both the dielectric permittivity and electrical conductivity, as well as significant ambient noise in the recorded data.
Furthermore, our tests also indicate that the approach is insensitive to the phase characteristics of the starting wavelet, which is not the case when the wavelet estimation is directly incorporated into the inverse problem.
Another critical issue with crosshole georadar waveform inversion schemes that clearly needs to be investigated is the consequence of the common assumption of frequency-independent electromagnetic constitutive parameters. This is crucial since, in reality, these parameters are known to be frequency-dependent and complex, and thus recorded georadar data may show significant dispersive behaviour. In particular, in the presence of water, there is a wide body of evidence showing that the dielectric permittivity can be significantly frequency-dependent over the GPR frequency range, due to a variety of relaxation processes. The second part of my thesis is therefore dedicated to evaluating the reconstruction limits of a non-dispersive crosshole georadar waveform inversion scheme in the presence of varying degrees of dielectric dispersion. I show that the inversion algorithm, combined with the iterative deconvolution-based source wavelet estimation procedure, which is partially able to account for the frequency-dependent effects through an "effective" wavelet, performs remarkably well in weakly to moderately dispersive environments and is able to provide adequate tomographic reconstructions.
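To make the deconvolution-based source-wavelet estimation concrete, here is a minimal frequency-domain sketch of one update pass, written under assumed conventions (the wavelet shares the traces' time axis; stabilization via a water-level eps). It illustrates the idea rather than the published algorithm.

```python
import numpy as np

def estimate_wavelet(d_obs, d_syn, w_current, eps=1e-3):
    """One deconvolution-based source-wavelet update (illustrative sketch).
    d_obs, d_syn: (n_traces, n_samples) observed and simulated gathers,
    where d_syn was modeled with the current wavelet w_current (assumed to
    have n_samples samples). A least-squares correction filter, stabilized
    by a water level and averaged over traces, is applied to the wavelet."""
    D = np.fft.rfft(d_obs, axis=1)
    S = np.fft.rfft(d_syn, axis=1)
    # F(f) = sum_k D_k S_k* / (sum_k |S_k|^2 + water level)
    num = (D * np.conj(S)).sum(axis=0)
    den = (np.abs(S) ** 2).sum(axis=0)
    F = num / (den + eps * den.max())
    W = np.fft.rfft(w_current) * F            # correct the current wavelet
    return np.fft.irfft(W, n=len(w_current))
```

In an iterative scheme of the kind described above, this update would alternate with forward simulations until the wavelet estimate stabilizes.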
Abstract:
In this engineering thesis, an equivalent circuit suitable for demonstrating the symmetrical-component method of calculating short circuits in distribution networks was designed for Helsingin ammattikorkeakoulu (Helsinki Polytechnic). Important considerations for the equivalent circuit included the voltage level, the short-circuit withstand capability of the demonstration transformers, future use as a laboratory exercise, and overall clarity of demonstration. The thesis first reviews the theory of symmetrical components and of short circuits occurring in distribution networks. The voltage and current ratings of the required circuit components were then dimensioned, taking possible additional uses into account, and the work was carried out within these constraints. A step-down transformer reducing the supply to a 40 V line-to-line voltage level was procured to feed a short-circuit-proof transformer, with which the most common fault types of a distribution network were simulated. For the latter transformer, an inductance corresponding to its internal impedance was dimensioned and acquired. With these elements, a setup was built with which the equivalent circuits corresponding to all occurring short-circuit types can be simulated. Room for further development, and other possibilities for building laboratory exercises, was left for the authors of future engineering theses.
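For reference, the symmetrical-component (Fortescue) transform that the setup demonstrates is standard theory, independent of this particular build; a minimal sketch computing the sequence components of a single-phase condition:

```python
import numpy as np

# Fortescue transform: zero-, positive-, and negative-sequence components
# of a three-phase phasor set (standard theory).
a = np.exp(2j * np.pi / 3)                      # 120-degree rotation operator
A_inv = np.array([[1, 1,    1   ],
                  [1, a,    a**2],
                  [1, a**2, a   ]]) / 3

def symmetrical_components(v_abc):
    """Return (V0, V1, V2) for phase phasors (Va, Vb, Vc)."""
    return A_inv @ np.asarray(v_abc)

# Textbook check: with only phase a energized (Va=1, Vb=Vc=0),
# all three sequence components equal 1/3.
v012 = symmetrical_components([1.0, 0.0, 0.0])
print(np.round(v012, 3))                        # [1/3, 1/3, 1/3]
```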
Abstract:
The development of CT applications might become a public health problem if no effort is made regarding the justification and optimisation of examinations. This paper presents some hints to ensure that the risk-benefit balance remains in favour of the patient, especially when dealing with examinations of young patients. In this context, particular attention has to be paid to the justification of the examination. When performing the acquisition, one needs to optimise the extent of the investigated volume together with the number of acquisition sequences used. Finally, the use of automatic exposure systems, now available on all units, and of Diagnostic Reference Levels (DRL) should help radiologists control the exposure of their patients.
Abstract:
The standard one-machine scheduling problem consists in scheduling a set of jobs on one machine, which can handle only one job at a time, so as to minimize the maximum lateness. Each job is available for processing at its release date, requires a known processing time, and, after processing finishes, is delivered after a certain time. There can also exist precedence constraints between pairs of jobs, requiring that the first job be completed before the second job can start. An extension of this problem assigns a time interval between the processing of the jobs associated with the precedence constraints, known as finish-start time-lags. In the presence of these constraints, the problem is NP-hard even if preemption is allowed. In this work, we consider a special case of the preemptive one-machine scheduling problem with time-lags, where the time-lags have a chain form, and propose a polynomial algorithm to solve it. The algorithm consists of a polynomial number of calls to the preemptive version of the Longest Tail Heuristic. One application of the method is to obtain lower bounds for NP-hard one-machine and job-shop scheduling problems. We present some computational results of this application, followed by some conclusions.
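As an illustration of the building block named above, here is a sketch of the preemptive Longest Tail Heuristic on a single machine with release dates and delivery times (tails): at every instant, run the available job with the longest tail, preempting on arrivals. The chain-form time-lag machinery of the paper is not reproduced, and all names are assumptions.

```python
import heapq

def preemptive_longest_tail(jobs):
    """Preemptive Longest Tail Heuristic for one machine with release
    dates r, processing times p, and delivery times (tails) q.
    jobs: list of (r, p, q). Returns max_j (C_j + q_j)."""
    jobs = sorted(jobs)                         # by release date
    i, n, t, obj = 0, len(jobs), 0, 0
    avail = []                                  # max-heap on tail: (-q, remaining p)
    while i < n or avail:
        if not avail:
            t = max(t, jobs[i][0])              # idle until the next release
        while i < n and jobs[i][0] <= t:        # collect newly released jobs
            r, p, q = jobs[i]
            heapq.heappush(avail, (-q, p))
            i += 1
        neg_q, p = heapq.heappop(avail)         # job with the longest tail
        run = p if i == n else min(p, jobs[i][0] - t)
        t += run
        if run < p:
            heapq.heappush(avail, (neg_q, p - run))  # preempted: requeue remainder
        else:
            obj = max(obj, t - neg_q)           # completion time + tail
    return obj

# Example with (release, processing, tail) triples:
print(preemptive_longest_tail([(0, 4, 5), (1, 2, 8), (3, 3, 1)]))  # -> 11
```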
Abstract:
We present a new method for constructing exact distribution-free tests (and confidence intervals) for variables that can generate more than two possible outcomes. This method separates the search for an exact test from the goal of creating a non-randomized test. Randomization is used to extend any exact test relating to means of variables with finitely many outcomes to variables with outcomes belonging to a given bounded set. Tests in terms of variance and covariance are reduced to tests relating to means. Randomness is then eliminated in a separate step. This method is used to create confidence intervals for the difference between two means (or variances) and tests of stochastic inequality and correlation.
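To illustrate the randomization idea for means of bounded variables, here is a minimal Python sketch: each observation in [0, 1] is randomly rounded to a Bernoulli draw with the same expectation, after which an exact binomial test applies. The function name and setup are assumptions for illustration, and the separate derandomization step described above is not shown.

```python
import numpy as np
from scipy.stats import binomtest   # scipy >= 1.7

def randomized_mean_test(x, mu0, alternative="greater", rng=None):
    """Sketch of the randomization step: for observations x_i in [0, 1],
    draw B_i ~ Bernoulli(x_i), so E[B_i] = E[X_i], and apply the exact
    binomial test to the resulting coin flips."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    b = int((rng.random(x.size) < x).sum())     # randomized rounding to {0, 1}
    return binomtest(b, n=x.size, p=mu0, alternative=alternative).pvalue

p_value = randomized_mean_test([0.9, 0.7, 0.8, 0.95, 0.6], mu0=0.5)
```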
Abstract:
Models incorporating more realistic descriptions of customer behavior, such as customers choosing from an offer set, have recently become popular in assortment optimization and revenue management. The dynamic program for these models is intractable and is approximated by a deterministic linear program, called the CDLP, which has an exponential number of columns. However, when the segment consideration sets overlap, the CDLP is difficult to solve: column generation has been proposed, but finding an entering column has been shown to be NP-hard. In this paper we propose a new approach, called SDCP, for solving CDLP based on segments and their consideration sets. SDCP is a relaxation of CDLP and hence forms a looser upper bound on the dynamic program, but it coincides with CDLP in the case of non-overlapping segments. If the number of elements in a segment's consideration set is not very large, SDCP can be applied to any discrete-choice model of consumer behavior. We tighten the SDCP bound (i) by simulation, called the randomized concave programming (RCP) method, and (ii) by adding cuts to a recent compact formulation of the problem for a latent multinomial-choice model of demand (SBLP+). The latter approach turns out to be very effective, essentially obtaining the CDLP value and excellent revenue performance in simulations, even for overlapping segments. By formulating the problem as a separation problem, we give insight into why CDLP is easy for the MNL with non-overlapping consideration sets and why generalizations of MNL pose difficulties. We perform numerical simulations to determine the revenue performance of all the methods on reference data sets from the literature.
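For background, here is a sketch of the multinomial logit (MNL) choice probabilities that such discrete-choice formulations build on (a textbook formula, not the paper's CDLP/SDCP machinery); the weights v_j are assumed preference parameters, and the 1 in the denominator represents the no-purchase option.

```python
import numpy as np

def mnl_choice_probs(v, offer_set):
    """Standard MNL choice probabilities: given preference weights v_j and
    an offered set S, P(j | S) = v_j / (1 + sum_{k in S} v_k), with the 1
    accounting for the no-purchase alternative."""
    v = np.asarray(v, dtype=float)
    offered = np.zeros_like(v, dtype=bool)
    offered[list(offer_set)] = True
    denom = 1.0 + v[offered].sum()
    probs = np.where(offered, v / denom, 0.0)   # zero for products not offered
    return probs, 1.0 / denom                   # purchase probs, no-purchase prob

probs, p0 = mnl_choice_probs([1.0, 0.5, 2.0], offer_set={0, 2})
# probs = [0.25, 0.0, 0.5], p0 = 0.25; the probabilities sum to one
```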
Abstract:
We present a novel numerical approach for the comprehensive, flexible, and accurate simulation of poro-elastic wave propagation in 2D polar coordinates. An important application of this method and its extensions will be the modeling of complex seismic wave phenomena in fluid-filled boreholes, which represents a major, and as of yet largely unresolved, computational problem in exploration geophysics. In view of this, we consider a numerical mesh, which can be arbitrarily heterogeneous, consisting of two or more concentric rings representing the fluid in the center and the surrounding porous medium. The spatial discretization is based on a Chebyshev expansion in the radial direction and a Fourier expansion in the azimuthal direction, with a Runge-Kutta integration scheme for the time evolution. A domain decomposition method based on the method of characteristics is used to match the fluid-solid boundary conditions. This multi-domain approach allows for significant reductions of the number of grid points in the azimuthal direction for the inner grid domain, and thus for corresponding increases of the time step and enhancements of computational efficiency. The viability and accuracy of the proposed method have been rigorously tested and verified through comparisons with analytical solutions as well as with the results obtained with a corresponding, previously published, and independently benchmarked solution for 2D Cartesian coordinates. Finally, the proposed numerical solution also satisfies the reciprocity theorem, which indicates that the inherent singularity associated with the origin of the polar coordinate system is adequately handled.
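For readers unfamiliar with the radial discretization, the following is the classical Chebyshev differentiation matrix on Gauss-Lobatto nodes (the standard construction popularized by Trefethen's "Spectral Methods in MATLAB", offered here as background rather than the authors' implementation):

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and nodes x on N+1 Gauss-Lobatto
    points; applying D to samples of a function at x approximates its
    derivative with spectral accuracy."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)              # Gauss-Lobatto nodes
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))       # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                           # diagonal: negative row sums
    return D, x

# Sanity check: differentiate sin(x) on [-1, 1]
D, x = cheb(16)
print(np.max(np.abs(D @ np.sin(x) - np.cos(x))))          # ~1e-12
```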
Abstract:
We address the problem of scheduling a multi-station multiclass queueing network (MQNET) with server changeover times to minimize steady-state mean job holding costs. We present new lower bounds on the best achievable cost that emerge as the values of mathematical programming problems (linear, semidefinite, and convex) over relaxed formulations of the system's achievable performance region. The constraints on achievable performance defining these formulations are obtained by formulating the system's equilibrium relations. Our contributions include: (1) a flow conservation interpretation and closed formulae for the constraints previously derived by the potential function method; (2) new work decomposition laws for MQNETs; (3) new constraints (linear, convex, and semidefinite) on the performance region of first and second moments of queue lengths for MQNETs; (4) a fast bound for an MQNET with N customer classes, computed in N steps; (5) two heuristic scheduling policies: a priority-index policy, and a policy extracted from the solution of a linear programming relaxation.
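As a concrete point of reference for priority-index policies, here is the classic c-mu rule in Python. The paper's indices are derived from its mathematical programming relaxations, so this textbook rule is only an illustrative stand-in.

```python
def cmu_priority_order(classes):
    """Classic c-mu priority-index rule: serve job classes in decreasing
    order of (holding cost c_i) * (service rate mu_i).
    classes: dict mapping class name -> (c, mu)."""
    return sorted(classes, key=lambda k: -classes[k][0] * classes[k][1])

order = cmu_priority_order({"A": (2.0, 1.0), "B": (1.0, 3.0), "C": (4.0, 0.5)})
print(order)   # ['B', 'A', 'C']: indices 3.0, 2.0, 2.0 (A before C by tie order)
```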
Abstract:
This paper presents a simple Optimised Search Heuristic for the Job Shop Scheduling problem that combines a GRASP heuristic with a branch-and-bound algorithm. The proposed method is compared with similar approaches and leads to better results in terms of solution quality and computing times.
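For orientation, a generic GRASP skeleton is sketched below; the paper's Optimised Search Heuristic additionally couples this loop with branch-and-bound (for example, as an exact improvement step), which is not reproduced here. All names are illustrative.

```python
import random

def grasp(construct_greedy_randomized, local_search, cost, n_iter=100, seed=0):
    """Generic GRASP skeleton: each iteration builds a greedy-randomized
    solution (typically via a restricted candidate list) and improves it
    by local search, keeping the best solution found."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(n_iter):
        s = construct_greedy_randomized(rng)   # randomized greedy construction
        s = local_search(s)                    # descend to a local optimum
        c = cost(s)
        if c < best_cost:
            best, best_cost = s, c
    return best, best_cost
```

The three callbacks encapsulate the problem-specific parts; for job shop scheduling they would build and improve machine sequences, with makespan as the cost.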
Abstract:
The development and testing of an iterative reconstruction algorithm for emission tomography based on Bayesian statistical concepts are described. The algorithm uses the entropy of the generated image as a prior distribution, can be accelerated by the choice of an exponent, and converges uniformly to feasible images by the choice of one adjustable parameter. A feasible image has been defined as one that is consistent with the initial data (i.e., an image that, if truly a source of radiation in a patient, could have generated the initial data by the Poisson process that governs radioactive disintegration). The fundamental ideas of Bayesian reconstruction are discussed, along with the use of an entropy prior with an adjustable contrast parameter, the use of likelihood with data increment parameters as conditional probability, and the development of the new fast maximum a posteriori with entropy (FMAPE) algorithm by the successive substitution method. It is shown that in the maximum likelihood estimator (MLE) and FMAPE algorithms, the only correct choice of initial image for the iterative procedure in the absence of a priori knowledge about the image configuration is a uniform field.
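For context, the classic ML-EM update that underlies the MLE algorithm mentioned above is sketched below; FMAPE adds the entropy prior and acceleration exponent on top of a multiplicative update of this kind. Note that, as the abstract prescribes, the iteration starts from a uniform image. The system matrix A and counts y here are placeholders, not the paper's data.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Classic ML-EM iteration for emission tomography:
    lam <- lam * A^T(y / (A lam)) / A^T 1.
    A: (n_detectors, n_pixels) system matrix; y: measured counts."""
    lam = np.ones(A.shape[1])              # uniform initial image (see text)
    sens = A.sum(axis=0) + eps             # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ lam + eps               # forward projection
        lam *= (A.T @ (y / proj)) / sens   # multiplicative EM update
    return lam
```

Each update preserves non-negativity and increases the Poisson log-likelihood, which is why the scheme converges toward feasible images in the abstract's sense.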
Abstract:
We present a novel numerical approach for the comprehensive, flexible, and accurate simulation of poro-elastic wave propagation in cylindrical coordinates. An important application of this method is the modeling of complex seismic wave phenomena in fluid-filled boreholes, which represents a major, and as of yet largely unresolved, computational problem in exploration geophysics. In view of this, we consider a numerical mesh consisting of three concentric domains representing the borehole fluid in the center, the borehole casing, and the surrounding porous formation. The spatial discretization is based on a Chebyshev expansion in the radial direction and Fourier expansions in the other directions, with a Runge-Kutta integration scheme for the time evolution. A domain decomposition method based on the method of characteristics is used to match the boundary conditions at the fluid/porous-solid and porous-solid/porous-solid interfaces. The viability and accuracy of the proposed method have been tested and verified in 2D polar coordinates through comparisons with analytical solutions as well as with the results obtained with a corresponding, previously published, and independently benchmarked solution for 2D Cartesian coordinates. The proposed numerical solution also satisfies the reciprocity theorem, which indicates that the inherent singularity associated with the origin of the polar coordinate system is handled adequately.
Abstract:
Within a drift-diffusion model, we investigate the role of the self-consistent electric field in determining the impedance field of a macroscopic Ohmic (linear) resistor made of a compensated semi-insulating semiconductor at arbitrary values of the applied voltage. The presence of long-range Coulomb correlations is found to be responsible for a reshaping of the spatial profile of the impedance field. This reshaping gives a null contribution to the macroscopic impedance but essentially modifies the transition from thermal to shot noise of a macroscopic linear resistor. Theoretical calculations explain a set of noise experiments carried out on semi-insulating CdZnTe detectors.
Abstract:
AIMS: We studied the respective added value of quantitative myocardial blood flow (MBF) and myocardial flow reserve (MFR), as assessed with (82)Rb positron emission tomography (PET)/CT, in predicting major adverse cardiovascular events (MACEs) in patients with suspected myocardial ischaemia. METHODS AND RESULTS: Myocardial perfusion images were analysed semi-quantitatively (SDS, summed difference score) and quantitatively (MBF, MFR) in 351 patients. Follow-up was completed in 335 patients, and annualized MACE (cardiac death, myocardial infarction, revascularization, or hospitalization for congestive heart failure or de novo stable angina) rates were analysed with the Kaplan-Meier method in 318 patients after excluding 17 patients with early revascularizations (<60 days). Independent predictors of MACEs were identified by multivariate analysis. During a median follow-up of 624 days (inter-quartile range 540-697), 35 MACEs occurred. The annualized MACE rate was higher in patients with ischaemia (SDS >2) (n = 105) than in those without [14% (95% CI = 9.1-22%) vs. 4.5% (2.7-7.4%), P < 0.0001]. The lowest MFR tertile group (MFR <1.8) had the highest MACE rate [16% (11-25%) vs. 2.9% (1.2-7.0%) and 4.3% (2.1-9.0%), P < 0.0001]. Similarly, the lowest stress MBF tertile group (MBF <1.8 mL/min/g) had the highest MACE rate [14% (9.2-22%) vs. 7.3% (4.2-13%) and 1.8% (0.6-5.5%), P = 0.0005]. Quantitation with stress MBF or MFR had significant independent prognostic power in addition to the semi-quantitative findings, with the largest added value conferred by combining stress MBF with SDS. This held true even for patients without ischaemia. CONCLUSION: Perfusion findings on (82)Rb PET/CT are strong predictors of MACE outcome. MBF quantification adds value by allowing further risk stratification in patients with both normal and abnormal perfusion images.
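As a methodological aside, survival curves of the kind analysed above can be produced with the lifelines package; the sketch below uses synthetic data and assumed variable names, not the study's.

```python
import numpy as np
from lifelines import KaplanMeierFitter

# Synthetic stand-in for the study data: follow-up durations, event
# indicators, and a lowest-MFR-tertile group flag.
rng = np.random.default_rng(0)
days = rng.integers(60, 700, size=100)       # follow-up durations (days)
event = rng.random(100) < 0.1                # True where a MACE was observed
low_mfr = rng.random(100) < 0.33             # lowest-tertile indicator

kmf = KaplanMeierFitter()
for label, mask in [("MFR < 1.8", low_mfr), ("MFR >= 1.8", ~low_mfr)]:
    kmf.fit(days[mask], event_observed=event[mask], label=label)
    print(label, float(kmf.survival_function_.iloc[-1, 0]))  # end-of-follow-up survival
```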
Abstract:
This clinical study was based on experimental results obtained in nude mice grafted with human colon carcinoma, showing that injected 131I-labeled F(ab')2 and Fab fragments from high-affinity anti-carcinoembryonic antigen (CEA) monoclonal antibodies (MAb) gave markedly higher ratios of tumor to normal tissue localization than intact MAb. 31 patients with known colorectal carcinoma, including 10 primary tumors, 13 local tumor recurrences, and 21 metastatic involvements, were injected with 123I-labeled F(ab')2 (n = 14) or Fab (n = 17) fragments from anti-CEA MAb. The patients were examined by emission-computerized tomography (ECT) at 6, 24, and sometimes 48 h after injection using a rotating dual-head scintillation camera. All 23 primary tumors and local recurrences except one were clearly visualized on at least two sections of different tomographic planes. Interestingly, nine of these patients had almost normal circulating CEA levels, and three of the visualized tumors weighed only 3-5 g. Among 19 known metastatic tumor involvements, 14 were correctly localized by ECT. Two additional liver and several bone metastases were discovered by immunoscintigraphy. Altogether, 86% of the tumor sites were detected, 82% with F(ab')2 and 89% with Fab fragments. The contrast of the tumor images obtained with Fab fragments suggests that this improved method of immunoscintigraphy has the potential to detect early tumor recurrences and thus to increase the survival of patients. The results of this retrospective study, however, should be confirmed in a prospective study before this method can be recommended for the routine diagnosis of cancer.