960 results for Improved sequential algebraic algorithm
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the scale of a field site represents a major, and as-of-yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The main objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity, which is logged at collocated wells and measured by surface resistivity surveys throughout the studied site. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. A stochastic integration of low-resolution, large-scale electrical resistivity tomography (ERT) data with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities is then applied. The overall viability of this downscaling approach is tested and validated by comparing flow and transport simulations through the original and the downscaled hydraulic conductivity fields. Our results indicate that the proposed procedure yields remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
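The core idea of this approach — simulating hydraulic conductivity conditional on collocated electrical conductivity via a non-parametric kernel density — can be illustrated with a minimal sketch. The synthetic well data, the assumed log-linear relation between the two conductivities, and the inverse-CDF sampling grid below are all placeholders; the full sequential simulation, which also conditions on previously simulated points, is not reproduced here.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)

# Collocated well logs (synthetic, illustrative): log10 electrical
# conductivity vs. log10 hydraulic conductivity with an assumed relation.
log_sigma = rng.normal(-2.0, 0.3, 500)
log_k = 1.5 * log_sigma + rng.normal(0.0, 0.2, 500)

# Non-parametric joint density of (log sigma, log K)
joint = gaussian_kde(np.vstack([log_sigma, log_k]))

def sample_k_given_sigma(ls, n_grid=200):
    """Draw log K from p(log K | log sigma) by inverse-CDF sampling on a grid."""
    grid = np.linspace(log_k.min() - 0.5, log_k.max() + 0.5, n_grid)
    dens = joint(np.vstack([np.full(n_grid, ls), grid]))
    cdf = np.cumsum(dens)
    cdf /= cdf[-1]
    return np.interp(rng.random(), cdf, grid)

# ERT provides log sigma away from the wells; simulate K at such a location
print(sample_k_given_sigma(-2.1))
```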
Abstract:
Although handling of the transcutaneous PO2 (tcPO2) and PCO2 (tcPCO2) sensors has been simplified during the last few years, the high electrode temperature and the short application time remain major drawbacks. In order to determine whether the application of a topical metabolic inhibitor allows reliable measurement at a sensor temperature of 42 degrees C for a period of up to 12 h, we performed a prospective, open, nonrandomized study in a sequential sample of 20 critically ill neonates. A total of 120 comparisons (six repeated measurements per patient) between arterial and transcutaneous values were obtained. Transcutaneous values were measured with a control sensor at 44 degrees C (conventional contact medium, average application time 3 h) and a test sensor at 42 degrees C (Eugenol solution, average application time 8 h). Comparison of tcPO2 and PaO2 at 42 degrees C (Eugenol solution) showed a mean difference of +0.16 kPa (range +1.60 to -2.00 kPa), limits of agreement +1.88 and -1.56 kPa. Comparison of tcPO2 and PaO2 at 44 degrees C (control sensor) revealed a mean difference of +0.02 kPa (range +2.60 to -1.90 kPa), limits of agreement +2.12 and -2.08 kPa. Comparison of tcPCO2 and PaCO2 at 42 degrees C (Eugenol solution) showed a mean difference of +0.91 kPa (range +2.30 to +0.10 kPa), limits of agreement +2.24 and -0.42 kPa. Comparison of tcPCO2 and PaCO2 at 44 degrees C (control sensor) revealed a mean difference of +0.63 kPa (range +1.50 to -0.30 kPa), limits of agreement +1.73 and -0.47 kPa. CONCLUSION: Our results show that the use of a Eugenol solution allows reliable measurement of tcPO2 at a heating temperature of 42 degrees C; the application time can be prolonged up to a maximum of 12 h without aggravating skin lesions. The performance of the tcPCO2 monitor was slightly worse at 42 degrees C than at 44 degrees C, suggesting that the metabolic offset should be corrected for the Eugenol solution.
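The mean differences and limits of agreement reported above follow the standard Bland-Altman computation (bias plus or minus 1.96 standard deviations of the paired differences). A minimal sketch, with hypothetical paired readings in place of the study data:

```python
import numpy as np

def limits_of_agreement(reference, test):
    """Mean difference (bias) and 95% limits of agreement (bias +/- 1.96 SD)."""
    diffs = np.asarray(test) - np.asarray(reference)
    bias = diffs.mean()
    half_width = 1.96 * diffs.std(ddof=1)
    return bias, bias + half_width, bias - half_width

# Hypothetical paired PO2 readings in kPa, for illustration only
pa_o2 = np.array([8.1, 9.4, 7.6, 10.2, 8.8, 9.9])
tc_po2 = np.array([8.3, 9.2, 7.9, 10.1, 9.0, 9.7])

bias, upper, lower = limits_of_agreement(pa_o2, tc_po2)
print(f"bias {bias:+.2f} kPa, limits of agreement {upper:+.2f} / {lower:+.2f} kPa")
```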
Abstract:
Quantitatively assessing the importance, or criticality, of each link in a network is of practical value to operators, as it can help them to increase the network's resilience, provide more efficient services, or improve some other aspect of the service. Betweenness is a graph-theoretical measure of centrality that can be applied to communication networks to evaluate link importance. However, as we illustrate in this paper, the basic definition of betweenness centrality produces inaccurate estimates, as it does not take into account aspects relevant to networking such as the heterogeneity in link capacity or the differences between node pairs in their contribution to the total traffic. A new algorithm for discovering link centrality in transport networks is proposed in this paper. It requires only static or semi-static network and topology attributes, and yet produces estimates of good accuracy, as verified through extensive simulations. Its potential value is demonstrated by an example application in which the simple shortest-path routing algorithm is improved in such a way that it outperforms other, more advanced algorithms in terms of blocking ratio.
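The paper's own centrality algorithm is not detailed in this abstract; the sketch below only illustrates the kind of shortcoming it addresses, by contrasting plain edge betweenness with a variant in which link capacity reshapes the shortest paths. The toy topology and capacity values are assumptions.

```python
import networkx as nx

# Toy transport topology; capacities are illustrative assumptions
G = nx.Graph()
G.add_edge("A", "B", capacity=10.0)
G.add_edge("B", "C", capacity=40.0)
G.add_edge("A", "C", capacity=2.5)
G.add_edge("C", "D", capacity=40.0)

# Plain betweenness treats all links alike; weighting path length by
# 1/capacity makes high-capacity links "shorter", hence more traversed.
for u, v, data in G.edges(data=True):
    data["cost"] = 1.0 / data["capacity"]

plain = nx.edge_betweenness_centrality(G)
capacity_aware = nx.edge_betweenness_centrality(G, weight="cost")
for edge in G.edges():
    print(edge, round(plain[edge], 3), round(capacity_aware[edge], 3))
```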
Abstract:
Detecting local differences between groups of connectomes is a great challenge in neuroimaging, because of the large number of tests that have to be performed and the resulting burden of multiplicity correction. Any available information should be exploited to increase the power of detecting true between-group effects. We present an adaptive strategy that exploits the data structure and the prior information concerning positive dependence between nodes and connections, without relying on strong assumptions. As a first step, we decompose the brain network, i.e., the connectome, into subnetworks and apply a screening at the subnetwork level. The subnetworks are defined either according to prior knowledge or by applying a data-driven algorithm. Given the results of the screening step, a filtering is performed to seek real differences at the node/connection level. The proposed strategy can be used to strongly control either the family-wise error rate or the false discovery rate. We show by means of different simulations the benefit of the proposed strategy, and we present a real application comparing the connectomes of preschool children and adolescents.
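A minimal sketch of the screen-then-filter idea, under simplifying assumptions (t-tests as the connection-level test, Bonferroni at both levels, synthetic data with a planted effect); the paper's actual procedure controls the family-wise error rate or false discovery rate more carefully than this illustration does.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic connectome measures for two groups (30 subjects x 12 connections),
# partitioned into three subnetworks; an effect is planted in "S1".
group_a = rng.normal(size=(30, 12))
group_b = rng.normal(size=(30, 12))
group_b[:, :4] += 0.8
subnetworks = {"S1": [0, 1, 2, 3], "S2": [4, 5, 6, 7], "S3": [8, 9, 10, 11]}

alpha = 0.05

# Screening: one test per subnetwork (here on the mean connection value),
# Bonferroni-corrected over the number of subnetworks.
kept = []
for name, cols in subnetworks.items():
    _, p = stats.ttest_ind(group_a[:, cols].mean(axis=1),
                           group_b[:, cols].mean(axis=1))
    if p < alpha / len(subnetworks):
        kept.append(name)

# Filtering: connection-level tests only inside screened subnetworks,
# corrected over the much smaller number of surviving tests.
n_tests = sum(len(subnetworks[name]) for name in kept)
for name in kept:
    for col in subnetworks[name]:
        _, p = stats.ttest_ind(group_a[:, col], group_b[:, col])
        if p < alpha / n_tests:
            print(f"{name}, connection {col}: p = {p:.4g}")
```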
Abstract:
PURPOSE: Chemotherapy (CT) combined with radiation therapy (RT) is the standard treatment for limited-disease small-cell lung cancer (LDSCLC). Many questions, including RT dose, fractionation, and the sequence of RT/CT administration, remain controversial. In this paper, we retrospectively assessed the outcome of patients with LDSCLC treated with radiation of at least 50 Gy. METHODS AND MATERIALS: From December 1997 to January 2006, 69 consecutive patients with LDSCLC were treated at our institutions. Treatment consisted of at least 4 cycles of CT and 3D conformal thoracic RT. The median age was 61 years (range, 37-78 years). Sequential or concomitant CT/RT was given in 47 (68%) and 22 (32%) of the patients, respectively. The median RT dose was 60 Gy. Prophylactic cranial irradiation (PCI) was administered in 47 (68%) patients. RESULTS: With a median follow-up of 36 months (range, 6-107 months), 16 patients were alive without disease. The median overall survival time was 24 months, with a 3-year survival rate of 29%. The 3-year disease-free survival (DFS) and loco-regional control (LRC) rates were 23% and 60%, respectively. A better DFS was significantly associated with performance status (PS) 0 (p = 0.004), complete response to treatment (p = 0.03), and PCI (p = 0.03). A trend towards improved overall survival (OS) was observed for patients who underwent PCI (p = 0.07). Patients treated with sequential CT/RT had a better outcome than those treated with concomitant treatment (3-year DFS rate 27% vs. 13%; p = 0.04); however, PCI was delivered more frequently in the sequential group. No significant dose-response relationship was found in terms of LRC. Multivariate analysis showed that complete response to treatment was the only significant factor for OS. CONCLUSION: Complete response to treatment was the most important factor for OS. A better DFS was significantly associated with PCI. We did not find a significant difference in outcome between patients receiving more than 60 Gy and patients receiving 60 Gy or less.
Abstract:
The analysis of multiexponential decays is challenging because of their complex nature. When analyzing these signals, not only the parameters but also the orders of the models have to be estimated. We present an improved spectroscopic technique especially suited to this purpose. The proposed algorithm combines an iterative linear filter with an iterative deconvolution method. A thorough analysis of the noise effect is presented. The performance is tested with synthetic and experimental data.
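The iterative filter/deconvolution combination itself is not detailed in the abstract; as a point of reference, a basic nonlinear least-squares fit of a two-component decay — the baseline such methods aim to improve on — looks like this (the model order, true parameters, and noise level are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, k1, a2, k2):
    """Two-component multiexponential decay model."""
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

# Synthetic noisy decay with known parameters (illustrative values)
t = np.linspace(0.0, 5.0, 200)
rng = np.random.default_rng(1)
y = biexp(t, 3.0, 2.5, 1.0, 0.4) + rng.normal(0.0, 0.02, t.size)

popt, _ = curve_fit(biexp, t, y, p0=[1.0, 1.0, 1.0, 0.1])
print("estimated (a1, k1, a2, k2):", np.round(popt, 2))
```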
Abstract:
A 54-year-old patient was studied who had an isolated small polar thalamic infarct and acute global amnesia with slight frontal-type dysfunction, but no other neurological deficits. Memory improved partially within 8 months. At all stages the impairment was more severe for verbal than for non-verbal memory. Autobiographic recollections and newly acquired information tended to be disorganised with respect to temporal order. Procedural memory was unaffected. Both emotional involvement and pleasure in reading were lost. On MRI, the infarct was limited to the left anterior thalamic nuclei and the adjacent mamillothalamic tract. The regional cerebral metabolic rate of glucose (measured with PET) was decreased on the left in the thalamus, amygdala, and posterior cingulate cortex 2 weeks after the infarct, and in the thalamus and posterior cingulate cortex 9 months later. These findings stress the specific role of the left anterior thalamic region in memory and confirm that long-lasting amnesia from a thalamic lesion can occur without significant structural damage to the dorsomedial nucleus. Furthermore, they suggest that the anterior thalamic nuclei, and possibly their connections with the posterior cingulate cortex, play a role in emotional involvement linked to ipsilateral hemispheric functions.
Abstract:
The multiscale finite-volume (MSFV) method is designed to reduce the computational cost of elliptic and parabolic problems with highly heterogeneous anisotropic coefficients. The reduction is achieved by splitting the original global problem into a set of local problems (with approximate local boundary conditions) coupled by a coarse global problem. It has been shown recently that the numerical errors in MSFV results can be reduced systematically with an iterative procedure that provides a conservative velocity field after any iteration step. The iterative MSFV (i-MSFV) method can be obtained with an improved (smoothed) multiscale solution to enhance the localization conditions, with a Krylov subspace method [e.g., the generalized-minimal-residual (GMRES) algorithm] preconditioned by the MSFV system, or with a combination of both. In a multiphase-flow system, a balance between accuracy and computational efficiency should be achieved by finding a minimum number of i-MSFV iterations (on pressure), which is necessary to achieve the desired accuracy in the saturation solution. In this work, we extend the i-MSFV method to sequential implicit simulation of time-dependent problems. To control the error of the coupled saturation/pressure system, we analyze the transport error caused by an approximate velocity field. We then propose an error-control strategy on the basis of the residual of the pressure equation. At the beginning of simulation, the pressure solution is iterated until a specified accuracy is achieved. To minimize the number of iterations in a multiphase-flow problem, the solution at the previous timestep is used to improve the localization assumption at the current timestep. Additional iterations are used only when the residual becomes larger than a specified threshold value. Numerical results show that only a few iterations on average are necessary to improve the MSFV results significantly, even for very challenging problems. Therefore, the proposed adaptive strategy yields efficient and accurate simulation of multiphase flow in heterogeneous porous media.
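The residual-based error control described above can be sketched schematically: reuse the previous timestep's pressure as the initial guess, and iterate only when the residual exceeds a threshold. The toy 1D system and the Jacobi preconditioner below stand in for the fine-scale problem and the MSFV operator, which are not reproduced here.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres

# Toy 1D "pressure" system standing in for the fine-scale problem;
# the real method would use the MSFV operator as the preconditioner.
n = 100
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsc()
b = np.ones(n)
M = LinearOperator((n, n), matvec=lambda x: x / A.diagonal())  # Jacobi stand-in

tol = 1e-6
p = np.zeros(n)                  # previous-timestep solution = initial guess
for step in range(5):            # mock timesteps
    b = b * 1.01                 # pretend the RHS evolves with saturation
    residual = np.linalg.norm(b - A @ p)
    if residual > tol:           # iterate only above the residual threshold
        p, info = gmres(A, b, x0=p, M=M)
    print(f"step {step}: initial residual {residual:.2e}")
```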
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the regional scale represents a major, and as-of-yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The basic objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity, which is available throughout the model space. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. This method is then applied to the stochastic integration of low-resolution, regional-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities. Finally, the overall viability of this downscaling approach is tested and verified by performing and comparing flow and transport simulations through the original and the downscaled hydraulic conductivity fields. Our results indicate that the proposed procedure does indeed yield remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
Abstract:
Background: Research in epistasis or gene-gene interaction detection for human complex traits has grown over the last few years. It has been marked by promising methodological developments, improved translation efforts of statistical epistasis to biological epistasis, and attempts to integrate different omics information sources into the epistasis screening to enhance power. The quest for gene-gene interactions poses severe multiple-testing problems. In this context, the maxT algorithm is one technique to control the false-positive rate. However, the memory needed by this algorithm rises linearly with the number of hypothesis tests. Gene-gene interaction studies require memory proportional to the squared number of SNPs, so a genome-wide epistasis search would require terabytes of memory. Hence, cache problems are likely to occur, increasing the computation time. In this work we present a new version of maxT, requiring an amount of memory independent of the number of genetic effects to be investigated. This algorithm was implemented in C++ in our epistasis screening software MBMDR-3.0.3. We evaluate the new implementation in terms of memory efficiency and speed using simulated data. The software is illustrated on real-life data for Crohn’s disease. Results: In the case of a binary (affected/unaffected) trait, the parallel workflow of MBMDR-3.0.3 analyzes all gene-gene interactions in a dataset of 100,000 SNPs typed on 1,000 individuals within 4 days and 9 hours, using 999 permutations of the trait to assess statistical significance, on a cluster composed of 10 blades, each containing four Quad-Core AMD Opteron(tm) 2352 processors at 2.1 GHz. In the case of a continuous trait, a similar run takes 9 days. Our program found 14 SNP-SNP interactions with a multiple-testing corrected p-value of less than 0.05 on real-life Crohn’s disease (CD) data. Conclusions: Our software is the first implementation of the MB-MDR methodology able to solve large-scale SNP-SNP interaction problems within a few days, without using much memory, while adequately controlling the type I error rates. A new implementation to reach genome-wide epistasis screening is under construction. In the context of Crohn’s disease, MBMDR-3.0.3 identified epistasis involving regions that are well known in the field and that can be explained from a biological point of view. This demonstrates the power of our software to find relevant phenotype-genotype higher-order associations.
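The memory trick at the heart of such a maxT implementation — keeping a single maximum statistic per permutation rather than the full permutation-by-test matrix — can be sketched as follows. The per-marker statistic below is a stand-in (MB-MDR uses its own test), and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

def test_statistics(genotypes, trait):
    """Stand-in per-marker statistic (squared correlation); MB-MDR uses its own."""
    g = (genotypes - genotypes.mean(axis=0)) / genotypes.std(axis=0)
    y = (trait - trait.mean()) / trait.std()
    return (g.T @ y / len(y)) ** 2

genotypes = rng.integers(0, 3, size=(200, 5000)).astype(float)
trait = rng.normal(size=200)
observed = test_statistics(genotypes, trait)

# maxT with memory independent of the number of tests: store ONE number
# per permutation (the maximum statistic) instead of all null statistics.
n_perm = 999
max_null = np.empty(n_perm)
for b in range(n_perm):
    max_null[b] = test_statistics(genotypes, rng.permutation(trait)).max()

# Single-step maxT adjusted p-values
p_adj = (1 + (max_null[None, :] >= observed[:, None]).sum(axis=1)) / (n_perm + 1)
print("smallest adjusted p-value:", p_adj.min())
```

The permutation loop's storage cost is one float per permutation, which is what makes the approach scale to a quadratic number of SNP-SNP tests.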
Genetic diversity among improved banana diploids using canonical variables and the Ward-MLM method
Abstract:
The objective of this work was to estimate the genetic diversity of improved banana diploids using data from quantitative traits and simple sequence repeat (SSR) markers simultaneously. The experiment was carried out with 33 diploids, in an augmented block design with 30 regular treatments and three common ones. Eighteen agronomic characteristics and 20 SSR primers were used. The agronomic characteristics and the SSR data were analyzed simultaneously by the Ward-MLM, cluster, and IML procedures. The Ward clustering method considered the combined matrix obtained by the Gower algorithm. The Ward-MLM procedure identified three ideal groups (G1, G2, and G3) based on the pseudo-F and pseudo-t² statistics. The dendrogram showed relative similarity between the G1 genotypes, justified by genealogy. In G2, 'Calcutta 4' appears in 62% of the genealogies. Similar behavior was observed in G3, in which the 028003-01 diploid is the male parent of the 086079-10 and 042079-06 genotypes. The method with canonical variables had greater discriminatory power than Ward-MLM. Although reduced, the genetic variability available is sufficient to be used in the development of new hybrids.
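A minimal sketch of the distance side of this procedure — a Gower-style combined matrix over mixed quantitative and marker data, followed by Ward clustering into three groups. The data dimensions mirror the study (33 genotypes, 18 traits, 20 markers), but the values are random placeholders, and the model-based MLM step of Ward-MLM is not reproduced.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def gower_distance(X_num, X_cat):
    """Gower-style distance: range-scaled numeric differences plus simple
    matching on categorical columns, averaged over all variables."""
    ranges = X_num.max(axis=0) - X_num.min(axis=0)
    d_num = np.abs(X_num[:, None, :] - X_num[None, :, :]) / ranges
    d_cat = (X_cat[:, None, :] != X_cat[None, :, :]).astype(float)
    return np.concatenate([d_num, d_cat], axis=2).mean(axis=2)

rng = np.random.default_rng(5)
agronomic = rng.normal(size=(33, 18))    # 18 quantitative traits (placeholder)
ssr = rng.integers(0, 4, size=(33, 20))  # 20 SSR marker alleles (placeholder)

D = gower_distance(agronomic, ssr)
Z = linkage(squareform(D, checks=False), method="ward")
groups = fcluster(Z, t=3, criterion="maxclust")
print(np.bincount(groups)[1:])           # sizes of the three groups
```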
Abstract:
This thesis studies gray-level distance transforms, particularly the Distance Transform on Curved Space (DTOCS). The transform is produced by calculating distances on a gray-level surface. The DTOCS is improved by defining more accurate local distances and by developing a faster transformation algorithm. The Optimal DTOCS enhances the locally Euclidean Weighted DTOCS (WDTOCS) with local distance coefficients, which minimize the maximum error from the Euclidean distance in the image plane and produce more accurate global distance values. Convergence properties of the traditional mask operation, or sequential local transformation, and of the ordered propagation approach are analyzed and compared to the new efficient priority pixel queue algorithm. The Route DTOCS algorithm developed in this work can be used to find and visualize shortest routes between two points, or two point sets, along a varying-height surface. In a digital image, there can be several paths sharing the same minimal length, and the Route DTOCS visualizes them all. A single optimal path can be extracted from the route set using a simple backtracking algorithm. A new extension of the priority pixel queue algorithm produces the nearest neighbor transform, or Voronoi or Dirichlet tessellation, simultaneously with the distance map. The transformation divides the image into regions so that each pixel belongs to the region surrounding the reference point which is nearest according to the distance definition used. Applications and application ideas for the DTOCS and its extensions are presented, including obstacle avoidance, image compression, and surface roughness evaluation.
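A priority pixel queue of the kind described above is essentially Dijkstra's algorithm on the image grid. The sketch below uses a simplified local distance (1 plus the gray-level difference) rather than the optimized DTOCS coefficients, and 4-connectivity for brevity.

```python
import heapq
import numpy as np

def graylevel_distance_transform(image, seeds):
    """Priority-queue (Dijkstra-style) distance transform on a gray-level surface.

    Local step cost = 1 + |gray difference|: a simplification of the
    DTOCS local distances (the optimal coefficients are not used here).
    """
    dist = np.full(image.shape, np.inf)
    heap = [(0.0, s) for s in seeds]
    for _, s in heap:
        dist[s] = 0.0
    heapq.heapify(heap)
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if d > dist[y, x]:
            continue                      # stale queue entry
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]:
                nd = d + 1.0 + abs(float(image[ny, nx]) - float(image[y, x]))
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    heapq.heappush(heap, (nd, (ny, nx)))
    return dist

img = np.random.default_rng(2).integers(0, 256, size=(8, 8))
print(graylevel_distance_transform(img, [(0, 0)]).round(0))
```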
Abstract:
Background: To evaluate the safety of immediate sequential bilateral cataract extraction (ISBCE) with respect to indications, visual outcomes, complications, benefits, and disadvantages. Methods: This is a retrospective review of all ISBCEs performed at Kantonsspital Winterthur, Switzerland, between April 2000 and September 2013. The case notes of 500 eyes of 250 patients were reviewed. Of these 500 eyes, 472 (94.4%) had a straightforward phacoemulsification with posterior chamber intraocular lens implantation; 21 (4.2%) had a planned extracapsular cataract extraction; 4 (0.8%) had an intracapsular cataract extraction; and 3 (0.6%) had a combined phacoemulsification with trabeculectomy. Results: Over 66% of eyes achieved improved visual acuity (at least 3 Snellen lines) following ISBCE. Median preoperative best corrected visual acuity (BCVA) was 0.5 LogMAR, with an interquartile range (IQR) of [0.4, 1] LogMAR. At the one-week control, the median BCVA was 0.3 LogMAR, IQR [0.1, 0.5] LogMAR. At one month, the median BCVA was 0.15 LogMAR, IQR [0.05, 0.3] LogMAR (p < 0.01). No sight-threatening intraoperative or postoperative complications were observed. Conclusions: ISBCE is an effective and safe option with a high degree of patient satisfaction. The relative benefits of ISBCE should be balanced against the theoretically increased risks.
Abstract:
Simulation has traditionally been used for analyzing the behavior of complex real-world problems. Even though only some features of the problems are considered, simulation time tends to become quite high even for common simulation problems. Parallel and distributed simulation is a viable technique for accelerating the simulations. The success of parallel simulation depends heavily on the combination of the simulation application, the algorithm, and the simulation environment. In this thesis a conservative, parallel simulation algorithm is applied to the simulation of a cellular network application in a distributed workstation environment. This thesis presents a distributed simulation environment, Diworse, which is based on the use of networked workstations. The distributed environment is considered especially hard for conservative simulation algorithms due to the high cost of communication. In this thesis, however, the distributed environment is shown to be a viable alternative if the amount of communication is kept reasonable. Novel ideas of multiple message simulation and channel reduction enable efficient use of this environment for the simulation of a cellular network application. The distribution of the simulation is based on a modification of the well-known Chandy-Misra deadlock avoidance algorithm with null messages. The basic Chandy-Misra algorithm is modified by using the null message cancellation and multiple message simulation techniques. The modifications reduce the number of null messages and the time required for their execution, thus reducing the overall simulation time. The null message cancellation technique reduces the processing time of null messages, as an arriving null message cancels other unprocessed null messages. The multiple message simulation forms groups of messages, as it simulates several messages before it releases the newly created messages. If the message population in the simulation is sufficient, no additional delay is caused by this operation. A new technique for considering the simulation application is also presented. The performance is improved by establishing a neighborhood for the simulation elements. The neighborhood concept is based on a channel reduction technique, where the properties of the application exclusively determine which connections are necessary when a certain accuracy for the simulation results is required. Distributed simulation is also analyzed in order to determine the effect of the different elements in the implemented simulation environment. This analysis is performed using critical path analysis, which allows determination of a lower bound for the simulation time. In this thesis, critical times are computed for sequential and parallel traces. The analysis based on sequential traces reveals the parallel properties of the application, whereas the analysis based on parallel traces reveals the properties of the environment and the distribution.
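A toy sketch of the null-message mechanism this thesis builds on: each logical process may only consume events up to the minimum time promised by its input channels, null messages carry "clock plus lookahead" promises to break deadlocks, and null-message cancellation keeps only the newest promise. All names and the channel model are illustrative assumptions, not the Diworse API.

```python
import heapq
from dataclasses import dataclass, field

@dataclass
class Channel:
    promise: float = 0.0                        # latest guaranteed time
    events: list = field(default_factory=list)  # heap of (time, payload)

    def recv_event(self, t, payload):
        heapq.heappush(self.events, (t, payload))
        self.promise = max(self.promise, t)

    def recv_null(self, t):
        self.promise = max(self.promise, t)     # cancellation: keep newest only

class LogicalProcess:
    def __init__(self, name, inputs, lookahead):
        self.name, self.inputs, self.lookahead = name, inputs, lookahead
        self.clock = 0.0

    def step(self):
        # Safe horizon: minimum time promised across all input channels
        horizon = min(ch.promise for ch in self.inputs)
        for ch in self.inputs:
            while ch.events and ch.events[0][0] <= horizon:
                self.clock, payload = heapq.heappop(ch.events)
                print(f"{self.name}: {payload!r} at t={self.clock}")
        return self.clock + self.lookahead      # timestamp of outgoing null message

ch = Channel()
ch.recv_event(1.0, "handover request")
ch.recv_null(5.0)                               # neighbor promises quiet until t=5
lp = LogicalProcess("cell_A", [ch], lookahead=0.5)
print("null message sent with t =", lp.step())
```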