995 results for Resolution algorithm


Relevance:

20.00%

Abstract:

Descriptors based on Molecular Interaction Fields (MIF) are highly suitable for drug discovery, but their size (thousands of variables) often limits their application in practice. Here we describe a simple and fast computational method that extracts from a MIF a handful of highly informative points (hot spots) which summarize the most relevant information. The method was specifically developed for drug discovery and does not require human supervision, making it suitable for application to very large series of compounds. The quality of the results has been tested by running the method on the ligand structure of a large number of ligand-receptor complexes and then comparing the position of the selected hot spots with actual atoms of the receptor. As an additional test, the hot spots obtained with the novel method were used to obtain GRIND-like molecular descriptors, which were compared with the original GRIND. In both cases the results show that the novel method is highly suitable for describing ligand-receptor interactions and compares favorably with other state-of-the-art methods.
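
The abstract does not spell out the selection procedure, so the following is only a minimal illustrative sketch, assuming the MIF is available as a 3D numpy array of interaction energies (more negative = more favourable): a greedy pick of the most favourable grid points kept a minimum distance apart. The function name, grid spacing, and thresholds are hypothetical placeholders, not the published algorithm.

import numpy as np

def select_hot_spots(mif, spacing=0.5, n_spots=10, min_sep=2.0):
    """Greedy pick of up to n_spots grid points with the most favourable
    (most negative) energies, kept at least min_sep Angstrom apart."""
    idx = np.argsort(mif, axis=None)                      # most negative first
    coords = np.array(np.unravel_index(idx, mif.shape)).T * spacing
    energies = mif.flatten()[idx]
    chosen_xyz, chosen_e = [], []
    for xyz, e in zip(coords, energies):
        if e >= 0:                                        # keep favourable points only
            break
        if all(np.linalg.norm(xyz - c) >= min_sep for c in chosen_xyz):
            chosen_xyz.append(xyz)
            chosen_e.append(e)
            if len(chosen_xyz) == n_spots:
                break
    return np.array(chosen_xyz), np.array(chosen_e)

# toy example: a random grid standing in for a probe interaction field
field = np.random.randn(40, 40, 40) - 0.5
spots, spot_energies = select_hot_spots(field)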

Relevance:

20.00%

Abstract:

This paper presents a new registration algorithm, called Temporal Diffeomorphic Free Form Deformation (TDFFD), and its application to motion and strain quantification from a sequence of 3D ultrasound (US) images. The originality of our approach resides in enforcing time consistency by representing the 4D velocity field as the sum of continuous spatiotemporal B-Spline kernels. The spatiotemporal displacement field is then recovered through forward Eulerian integration of the non-stationary velocity field. The strain tensor is computed locally using the spatial derivatives of the reconstructed displacement field. The energy functional considered in this paper weighs two terms: the image similarity and a regularization term. The image similarity metric is the sum of squared differences between the intensities of each frame and a reference one. Any frame in the sequence can be chosen as reference. The regularization term is based on the incompressibility of myocardial tissue. TDFFD was compared to pairwise 3D FFD and 3D+t FFD, both on displacement and velocity fields, on a set of synthetic 3D US images with different noise levels. TDFFD showed increased robustness to noise compared to these two state-of-the-art algorithms. TDFFD also proved to be more resistant to a reduced temporal resolution when decimating this synthetic sequence. Finally, this synthetic dataset was used to determine optimal settings of the TDFFD algorithm. Subsequently, TDFFD was applied to a database of cardiac 3D US images of the left ventricle acquired from 9 healthy volunteers and 13 patients treated by Cardiac Resynchronization Therapy (CRT). On healthy cases, uniform strain patterns were observed over all myocardial segments, as physiologically expected. On all CRT patients, the improvement in synchrony of regional longitudinal strain correlated with CRT clinical outcome as quantified by the reduction of end-systolic left ventricular volume at follow-up (6 and 12 months), showing the potential of the proposed algorithm for the assessment of CRT.
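
As a purely illustrative companion to the integration step described above, the sketch below advects a point through a non-stationary velocity field with forward Euler steps, which is the kind of forward Eulerian integration the abstract refers to. The analytic velocity function is a stand-in assumption; the paper's velocity field is a sum of spatiotemporal B-Spline kernels, which is not reproduced here.

import numpy as np

def velocity(x, t):
    """Placeholder non-stationary 3D velocity field (arbitrary units)."""
    return np.array([0.1 * np.sin(t), 0.05 * x[0], -0.02 * x[2]])

def integrate_trajectory(x0, t0, t1, n_steps=100):
    """Forward Euler integration: x_{k+1} = x_k + dt * v(x_k, t_k)."""
    x = np.array(x0, dtype=float)
    dt = (t1 - t0) / n_steps
    t = t0
    for _ in range(n_steps):
        x = x + dt * velocity(x, t)
        t += dt
    return x

x0 = np.array([1.0, 0.0, 2.0])
x_end = integrate_trajectory(x0, 0.0, 1.0)
displacement = x_end - x0          # displacement of the material point over [t0, t1]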

Relevance:

20.00%

Abstract:

For the last decade, high-resolution MS (HR-MS) has been associated with qualitative analyses while triple quadrupole MS has been associated with routine quantitative analyses. However, a shift of this paradigm is taking place: quantitative and qualitative analyses will increasingly be performed by HR-MS, and it will become the common 'language' for most mass spectrometrists. Most analyses will be performed by full-scan acquisitions recording 'all' ions entering the HR-MS, with subsequent construction of narrow-width extracted-ion chromatograms. Ions will be available for absolute quantification, profiling and data mining. In parallel to quantification, metabotyping will be the next step in clinical LC-MS analyses because it should help in personalized medicine. This article aims to help analytical chemists who perform targeted quantitative acquisitions with triple quadrupole MS make the transition to quantitative and qualitative analyses using HR-MS. Guidelines for the acceptance criteria of mass accuracy and for the determination of mass extraction windows in quantitative analyses are proposed.
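
To make the idea of narrow-width extracted-ion chromatograms concrete, here is a hedged sketch of how one might build an XIC from full-scan HR-MS data using a ppm-based mass extraction window. The scan data structure (a list of (retention time, m/z array, intensity array) tuples) and the 5 ppm window are illustrative assumptions, not a specific instrument or library format.

import numpy as np

def extract_ion_chromatogram(scans, target_mz, window_ppm=5.0):
    """Sum, for every full scan, the intensities of all peaks lying within
    +/- window_ppm of target_mz; return (retention_times, intensities)."""
    tol = target_mz * window_ppm * 1e-6                  # absolute m/z tolerance
    times, xic = [], []
    for rt, mz, intensity in scans:
        mask = np.abs(mz - target_mz) <= tol
        times.append(rt)
        xic.append(intensity[mask].sum())
    return np.array(times), np.array(xic)

# synthetic scans: random background plus one ion eluting around 5 min
rng = np.random.default_rng(0)
scans = []
for rt in np.linspace(0.0, 10.0, 200):
    mz = np.append(rng.uniform(100.0, 1000.0, 500), 524.2650)
    inten = np.append(rng.uniform(0.0, 1.0e3, 500),
                      1.0e5 * np.exp(-((rt - 5.0) / 0.2) ** 2))
    scans.append((rt, mz, inten))
rts, xic = extract_ion_chromatogram(scans, target_mz=524.2650, window_ppm=5.0)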

Relevance:

20.00%

Abstract:

The development of susceptibility maps for debris flows is of primary importance due to population pressure in hazardous zones. However, hazard assessment by process-based modelling at a regional scale is difficult due to the complex nature of the phenomenon, the variability of local controlling factors, and the uncertainty in modelling parameters. A regional assessment must consider a simplified approach that is not highly parameter-dependent and that can provide zonation with minimum data requirements. A distributed empirical model has thus been developed for regional susceptibility assessments using essentially a digital elevation model (DEM). The model is called Flow-R for Flow path assessment of gravitational hazards at a Regional scale (available free of charge at www.flow-r.org) and has been successfully applied to different case studies in various countries with variable data quality. It provides a substantial basis for a preliminary susceptibility assessment at a regional scale. The model was also found relevant for assessing other natural hazards such as rockfall, snow avalanches and floods. The model allows for automatic source area delineation, given user criteria, and for the assessment of the propagation extent based on various spreading algorithms and simple frictional laws. We developed a new spreading algorithm, an improved version of Holmgren's direction algorithm, that is less sensitive to small variations of the DEM and that avoids over-channelization, and so produces more realistic extents. The choice of datasets and algorithms is left open to the user, which makes the model adaptable to various applications and levels of dataset availability. Amongst the possible datasets, the DEM is the only one that is strictly needed for both the source area delineation and the propagation assessment; its quality is of major importance for the accuracy of the results. We consider a 10 m DEM resolution a good compromise between processing time and quality of results. However, valuable results have still been obtained on the basis of lower-quality DEMs with 25 m resolution.
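
For background, the sketch below implements the classical Holmgren (1994) multiple-flow-direction weighting that the abstract says was modified in Flow-R: outflow from a DEM cell is split among its lower neighbours in proportion to tan(slope)^x. It is the textbook rule, not the improved, less channelizing variant developed by the authors; the cell size and exponent values are arbitrary examples.

import numpy as np

def holmgren_weights(dem, row, col, cell_size=10.0, x=4.0):
    """Return a 3x3 array of flow proportions from cell (row, col)
    to its eight neighbours (centre weight stays zero)."""
    z0 = dem[row, col]
    weights = np.zeros((3, 3))
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            r, c = row + di, col + dj
            if not (0 <= r < dem.shape[0] and 0 <= c < dem.shape[1]):
                continue
            dist = cell_size * np.hypot(di, dj)
            tan_slope = (z0 - dem[r, c]) / dist       # tan(beta), > 0 if downslope
            if tan_slope > 0:
                weights[di + 1, dj + 1] = tan_slope ** x
    total = weights.sum()
    return weights / total if total > 0 else weights  # a pit has no outflow

# toy 3x3 DEM sloping towards the lower-right corner
dem = np.array([[10.0, 9.0, 8.0],
                [ 9.0, 8.0, 6.0],
                [ 8.0, 7.0, 5.0]])
print(holmgren_weights(dem, 1, 1))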

Relevance:

20.00%

Abstract:

A systolic array to implement lattice-reduction-aided linear detection is proposed for a MIMO receiver. The lattice reduction algorithm and the ensuing linear detection are operated in the same array, which can be hardware-efficient. The all-swap lattice reduction algorithm (ASLR) is considered for the systolic design. ASLR is a variant of the LLL algorithm which processes all lattice basis vectors within one iteration. Lattice-reduction-aided linear detection based on the ASLR and LLL algorithms shows very similar bit-error-rate performance, while ASLR is more time-efficient in the systolic array, especially for systems with a large number of antennas.
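
As context for lattice-reduction-aided detection, the following is a small reference sketch of the standard LLL reduction (delta = 0.75) applied to the columns of a channel matrix; the all-swap variant (ASLR) and its systolic-array mapping are not reproduced here, since their exact scheduling is specific to the paper. The toy 2x2 matrix is an arbitrary example.

import numpy as np

def gram_schmidt(B):
    """Return the orthogonalized columns B* and the mu coefficients."""
    n = B.shape[1]
    Bs = np.zeros_like(B, dtype=float)
    mu = np.zeros((n, n))
    for i in range(n):
        Bs[:, i] = B[:, i]
        for j in range(i):
            mu[i, j] = B[:, i] @ Bs[:, j] / (Bs[:, j] @ Bs[:, j])
            Bs[:, i] -= mu[i, j] * Bs[:, j]
    return Bs, mu

def lll_reduce(B, delta=0.75):
    """Standard LLL reduction of the basis given by the columns of B."""
    B = B.astype(float)
    n = B.shape[1]
    k = 1
    while k < n:
        Bs, mu = gram_schmidt(B)
        for j in range(k - 1, -1, -1):               # size reduction of b_k
            q = int(round(float(mu[k, j])))
            if q != 0:
                B[:, k] -= q * B[:, j]
                Bs, mu = gram_schmidt(B)             # refresh (clear, not efficient)
        if Bs[:, k] @ Bs[:, k] >= (delta - mu[k, k - 1] ** 2) * (Bs[:, k - 1] @ Bs[:, k - 1]):
            k += 1                                   # Lovasz condition holds
        else:
            B[:, [k - 1, k]] = B[:, [k, k - 1]]      # swap basis vectors, step back
            k = max(k - 1, 1)
    return B

H = np.array([[4.0, 1.0],
              [3.0, 5.0]])                           # toy 2x2 channel matrix
print(lll_reduce(H))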

Relevance:

20.00%

Abstract:

The transpressional boundary between the Australian and Pacific plates in the central South Island of New Zealand comprises the Alpine Fault and a broad region of distributed strain concentrated in the Southern Alps but encompassing regions further to the east, including the northwest Canterbury Plains. Low to moderate levels of seismicity (e.g., two M ≥ 5 events since 1974 and two M ≥ 4.0 events in 2009) and Holocene sediments offset or disrupted along rare exposed active fault segments are evidence for ongoing tectonism in the northwest plains, the surface topography of which is remarkably flat and even. Because the geology underlying the late Quaternary alluvial fan deposits that carpet most of the plains is not established, the detailed tectonic evolution of this region and the potential for larger earthquakes are only poorly understood. To address these issues, we have processed and interpreted high-resolution (2.5 m subsurface sampling interval) seismic data acquired along lines strategically located relative to extensive rock exposures to the north, west, and southwest and rare exposures to the east. Geological information provided by these rock exposures offers important constraints on the interpretation of the seismic data. The processed seismic reflection sections image a variably thick layer of generally undisturbed younger (i.e., < 24 ka) Quaternary alluvial sediments unconformably overlying an older (> 59 ka) Quaternary sedimentary sequence that shows evidence of moderate faulting and folding during and subsequent to deposition. These Quaternary units are in unconformable contact with Late Cretaceous-Tertiary interbedded sedimentary and volcanic rocks that are highly faulted, folded, and tilted. The lowest imaged unit is largely reflection-free Permian-Triassic basement rock. Quaternary-age deformation has affected all the rocks underlying the younger alluvial sediments, and there is evidence for ongoing deformation. Eight primary and numerous secondary faults, as well as a major anticlinal fold, are revealed on the seismic sections. Folded sedimentary and volcanic units are observed in the hanging walls and footwalls of most faults. Five of the primary faults represent plausible extensions of mapped faults, three of which are active. The major anticlinal fold is the probable continuation of a known active structure. A magnitude 7.1 earthquake occurred on 4 September 2010 near the southeastern edge of our study area. This predominantly right-lateral strike-slip event and numerous aftershocks (ten with magnitudes ≥ 5 within one week of the main event) highlight the primary message of our paper: that the generally flat and topographically featureless Canterbury Plains are underlain by a network of active faults that have the potential to generate significant earthquakes.

Relevance:

20.00%

Abstract:

A 41-year-old male presented with severe frostbite that was monitored clinically and with a new laser Doppler imaging (LDI) camera that records arbitrary microcirculatory perfusion units (1-256 arbitrary perfusion units (APUs)). LDI monitoring detected perfusion differences in the hand and foot that were not seen visually. On days 4-5 after injury, LDI showed that while the fingers did not experience any significant perfusion change (average of 31±25 APUs on day 5), the patient's left big toe did (from 17±29 APUs on day 4 to 103±55 APUs on day 5). These changes in regional perfusion were not detectable by visual examination. On day 53 post-injury, all fingers with reduced perfusion by LDI were amputated, while the toe could be salvaged. This case clearly demonstrates that insufficient microcirculatory perfusion can be identified using LDI in ways that visual examination alone does not permit, allowing prognosis of clinical outcomes. Such information may also be used to develop improved treatment approaches.

Relevance:

20.00%

Abstract:

For a wide range of environmental, hydrological, and engineering applications there is a fast growing need for high-resolution imaging. In this context, waveform tomographic imaging of crosshole georadar data is a powerful method able to provide images of pertinent electrical properties in near-surface environments with unprecedented spatial resolution. In contrast, conventional ray-based tomographic methods, which consider only a very limited part of the recorded signal (first-arrival traveltimes and maximum first-cycle amplitudes), suffer from inherent limitations in resolution and may prove to be inadequate in complex environments. For a typical crosshole georadar survey the potential improvement in resolution when using waveform-based approaches instead of ray-based approaches is in the range of one order of magnitude. Moreover, the spatial resolution of waveform-based inversions is comparable to that of common logging methods. While in exploration seismology waveform tomographic imaging has become well established over the past two decades, it is still comparably underdeveloped in the georadar domain despite corresponding needs. Recently, different groups have presented finite-difference time-domain waveform inversion schemes for crosshole georadar data, which are adaptations and extensions of Tarantola's seminal nonlinear generalized least-squares approach developed for the seismic case. First applications of these new crosshole georadar waveform inversion schemes to synthetic and field data have shown promising results. However, little is known about the limits and performance of such schemes in complex environments. To this end, the general motivation of my thesis is the evaluation of the robustness and limitations of waveform inversion algorithms for crosshole georadar data in order to apply such schemes to a wide range of real-world problems.

One crucial issue in making any waveform scheme applicable and effective for real-world crosshole georadar problems is the accurate estimation of the source wavelet, which is unknown in reality. Waveform inversion schemes for crosshole georadar data require forward simulations of the wavefield in order to iteratively solve the inverse problem. Therefore, accurate knowledge of the source wavelet is critically important for successful application of such schemes. Relatively small differences in the estimated source wavelet shape can lead to large differences in the resulting tomograms. In the first part of my thesis, I explore the viability and robustness of a relatively simple iterative deconvolution technique that incorporates the estimation of the source wavelet into the waveform inversion procedure rather than adding additional model parameters to the inversion problem. Extensive tests indicate that this source wavelet estimation technique is simple yet effective, and is able to provide remarkably accurate and robust estimates of the source wavelet in the presence of strong heterogeneity in both the dielectric permittivity and electrical conductivity, as well as significant ambient noise in the recorded data. Furthermore, our tests also indicate that the approach is insensitive to the phase characteristics of the starting wavelet, which is not the case when directly incorporating the wavelet estimation into the inverse problem.

Another critical issue with crosshole georadar waveform inversion schemes that clearly needs to be investigated is the consequence of the common assumption of frequency-independent electromagnetic constitutive parameters. This is crucial since, in reality, these parameters are known to be frequency-dependent and complex, and thus recorded georadar data may show significant dispersive behaviour. In particular, in the presence of water, there is a wide body of evidence showing that the dielectric permittivity can be significantly frequency-dependent over the GPR frequency range, due to a variety of relaxation processes. The second part of my thesis is therefore dedicated to the evaluation of the reconstruction limits of a non-dispersive crosshole georadar waveform inversion scheme in the presence of varying degrees of dielectric dispersion. I show that the inversion algorithm, combined with the iterative deconvolution-based source wavelet estimation procedure that is partially able to account for the frequency-dependent effects through an "effective" wavelet, performs remarkably well in weakly to moderately dispersive environments and has the ability to provide adequate tomographic reconstructions.
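
As a rough illustration of the source-wavelet estimation problem discussed above, the sketch below performs one frequency-domain water-level deconvolution step: given an observed trace and the trace simulated with a delta-like source, it returns a wavelet estimate. The thesis uses an iterative scheme averaged over many source-receiver pairs; the exact formulation, the water-level parameter, and the synthetic traces here are illustrative assumptions only.

import numpy as np

def estimate_wavelet(observed, simulated, water_level=1e-3):
    """Water-level deconvolution: W = D * conj(G) / (|G|^2 + eps)."""
    D = np.fft.rfft(observed)
    G = np.fft.rfft(simulated)
    eps = water_level * np.max(np.abs(G)) ** 2
    W = D * np.conj(G) / (np.abs(G) ** 2 + eps)
    return np.fft.irfft(W, n=len(observed))

# synthetic test: a delta-like simulated response and a Gaussian source pulse
n = 512
t = np.arange(n) * 1e-10                              # 0.1 ns sampling (assumed)
simulated = np.zeros(n); simulated[50] = 1.0          # delayed delta response
true_wavelet = np.exp(-((t - 5e-9) / 1e-9) ** 2)      # toy source wavelet
observed = np.convolve(simulated, true_wavelet)[:n]
wavelet_estimate = estimate_wavelet(observed, simulated)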

Relevance:

20.00%

Abstract:

This study intended to compare bone density and architecture in three groups of women: young women with anorexia nervosa (AN), an age-matched control group of young women, and healthy late postmenopausal women. Three-dimensional high-resolution peripheral quantitative computed tomography (HR-pQCT) at the ultradistal radius, a technology providing measures of cortical and trabecular bone density and microarchitecture, was performed in the three cohorts. Thirty-six women with AN aged 18-30 years (mean duration of AN: 5.8 years), 83 healthy late postmenopausal women aged 70-81 years, as well as 30 age-matched healthy young women were assessed. The overall cortical and trabecular bone density (D100), the absolute thickness of the cortical bone (CTh), and the absolute number of trabeculae per area (TbN) were significantly lower in AN patients compared with healthy young women. The absolute number of trabeculae per area (TbN) in AN and postmenopausal women was similar, but significantly lower than in healthy young women. The comparison between AN patients and postmenopausal women is of interest because the latter reach peak bone mass around the middle of the fertile age span whereas the former usually lose bone before reaching optimal bone density and structure. This study shows that bone mineral density and bone compacta thickness in AN are lower than those in controls but still higher than those in postmenopause. Bone compacta density in AN is similar to that in controls. However, bone inner structure in AN is degraded to a similar extent as in postmenopause. This last finding is particularly troubling.

Relevance:

20.00%

Abstract:

From a managerial point of view, the more efficient, simple, and parameter-free (ESP) an algorithm is, the more likely it will be used in practice for solving real-life problems. Following this principle, an ESP algorithm for solving the Permutation Flowshop Sequencing Problem (PFSP) is proposed in this article. Using an Iterated Local Search (ILS) framework, the so-called ILS-ESP algorithm is able to compete in performance with other well-known ILS-based approaches, which are considered among the most efficient algorithms for the PFSP. However, while other similar approaches still employ several parameters that can affect their performance if not properly chosen, our algorithm does not require any particular fine-tuning process since it uses basic "common sense" rules for the local search, perturbation, and acceptance criterion stages of the ILS metaheuristic. Our approach defines a new operator for the ILS perturbation process, a new acceptance criterion based on extremely simple and transparent rules, and a biased randomization process to generate different alternative initial solutions of similar quality, which is attained by applying a biased randomization to a classical PFSP heuristic. This diversification of the initial solution aims at avoiding poorly designed starting points and, thus, allows the methodology to take advantage of current trends in parallel and distributed computing. A set of extensive tests, based on literature benchmarks, has been carried out in order to validate our algorithm and compare it against other approaches. These tests show that our parameter-free algorithm is able to compete with state-of-the-art metaheuristics for the PFSP. The experiments also show that, when using parallel computing, it is possible to improve on the top ILS-based metaheuristic simply by incorporating into it our biased randomization process with a high-quality pseudo-random number generator.
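
The biased-randomization idea mentioned above can be illustrated with a short sketch: instead of always taking the next job from a greedily sorted candidate list, jobs are drawn with geometrically decaying probability, yielding many distinct but still greedy-like starting sequences. The use of a total-processing-time ordering and the beta value are assumptions for illustration; the article's actual constructive heuristic and parameters may differ.

import math
import random

def biased_randomized_order(processing_times, beta=0.3, seed=None):
    """processing_times: dict job -> list of per-machine times. Returns a job
    permutation built by geometric-biased selection (probability beta*(1-beta)^k
    for position k) from the list sorted by decreasing total processing time."""
    rng = random.Random(seed)
    candidates = sorted(processing_times,
                        key=lambda j: -sum(processing_times[j]))
    sequence = []
    while candidates:
        u = 1.0 - rng.random()                                 # u in (0, 1]
        pos = int(math.log(u) / math.log(1.0 - beta))          # geometric position
        sequence.append(candidates.pop(min(pos, len(candidates) - 1)))
    return sequence

jobs = {0: [5, 9, 8], 1: [9, 3, 10], 2: [9, 4, 5], 3: [4, 8, 8]}
print(biased_randomized_order(jobs, beta=0.3, seed=42))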

Relevance:

20.00%

Abstract:

OBJECT: To study a scan protocol for coronary magnetic resonance angiography based on multiple breath-holds featuring 1D motion compensation, and to compare the resulting image quality to a navigator-gated free-breathing acquisition. Image reconstruction was performed using L1-regularized iterative SENSE. MATERIALS AND METHODS: The effects of respiratory motion on the Cartesian sampling scheme were minimized by performing data acquisition in multiple breath-holds. During the scan, repetitive readouts through the k-space center were used to detect and correct the respiratory displacement of the heart by exploiting the self-navigation principle in image reconstruction. In vivo experiments were performed in nine healthy volunteers and the resulting image quality was compared to a navigator-gated reference in terms of vessel length and sharpness. RESULTS: Acquisition in breath-holds is an effective method to reduce the scan time by more than 30% compared to the navigator-gated reference. Although an equivalent mean image quality with respect to the reference was achieved with the proposed method, the 1D motion compensation did not work equally well in all cases. CONCLUSION: In general, the image quality scaled with the robustness of the motion compensation. Nevertheless, the featured setup provides a positive basis for future extension with more advanced motion compensation methods.
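
A hedged sketch of the self-navigation principle mentioned in the abstract: each repeated readout through the k-space centre is Fourier-transformed into a 1D projection, and the respiratory displacement relative to a reference projection is estimated by cross-correlation. The synthetic data, function names, and the integer-shift estimator below are illustrative assumptions, not the study's reconstruction pipeline.

import numpy as np

def projection_from_center_readout(kspace_line):
    """1D projection = magnitude of the inverse FFT of a central k-space readout."""
    return np.abs(np.fft.ifftshift(np.fft.ifft(np.fft.fftshift(kspace_line))))

def estimate_shift(reference_proj, current_proj):
    """Integer 1D displacement (in pixels) maximizing the cross-correlation."""
    corr = np.correlate(current_proj - current_proj.mean(),
                        reference_proj - reference_proj.mean(), mode="full")
    return corr.argmax() - (len(reference_proj) - 1)

# synthetic test: the reference projection displaced by 3 pixels
n = 256
ref_proj = np.exp(-((np.arange(n) - 120) / 15.0) ** 2)
cur_proj = np.roll(ref_proj, 3)
kline = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(cur_proj)))          # "acquired" readout
print(estimate_shift(ref_proj, projection_from_center_readout(kline)))   # -> 3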

Relevance:

20.00%

Abstract:

The standard one-machine scheduling problem consists in scheduling a set of jobs on one machine that can handle only one job at a time, minimizing the maximum lateness. Each job is available for processing at its release date, requires a known processing time, and, after processing finishes, is delivered after a certain time. There can also exist precedence constraints between pairs of jobs, requiring that the first job be completed before the second job can start. An extension of this problem consists in assigning a time interval between the processing of the jobs associated with the precedence constraints, known as finish-start time-lags. In the presence of these constraints, the problem is NP-hard even if preemption is allowed. In this work, we consider a special case of the one-machine preemptive scheduling problem with time-lags, where the time-lags have a chain form, and propose a polynomial algorithm to solve it. The algorithm consists of a polynomial number of calls to the preemptive version of the Longest Tail Heuristic. One application of the method is to obtain lower bounds for NP-hard one-machine and job-shop scheduling problems. We present some computational results of this application, followed by some conclusions.
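
For reference, the sketch below implements the basic preemptive Longest Tail (Schrage) rule that the proposed algorithm calls repeatedly: at any moment the machine runs the released, unfinished job with the largest delivery time (tail), preempting when a newly released job has a larger tail; max(completion + tail) of this schedule is a classical lower bound for the non-preemptive problem. Chain-form time-lags are not handled here; the job data are arbitrary examples.

import heapq

def preemptive_longest_tail(jobs):
    """jobs: list of (release, processing, tail) tuples. Returns
    max_j (completion_j + tail_j) of the preemptive longest-tail schedule,
    a lower bound for the non-preemptive one-machine problem."""
    order = sorted(range(len(jobs)), key=lambda j: jobs[j][0])   # by release date
    ready = []                                 # max-heap on tail: (-tail, job)
    remaining = [p for (_, p, _) in jobs]
    t, i, bound = 0, 0, 0
    while i < len(order) or ready:
        if not ready:                          # machine idle: jump to next release
            t = max(t, jobs[order[i]][0])
        while i < len(order) and jobs[order[i]][0] <= t:
            j = order[i]
            heapq.heappush(ready, (-jobs[j][2], j))
            i += 1
        neg_tail, j = heapq.heappop(ready)     # job with the longest tail
        next_release = jobs[order[i]][0] if i < len(order) else float("inf")
        run = min(remaining[j], next_release - t)
        t += run
        remaining[j] -= run
        if remaining[j] == 0:
            bound = max(bound, t + jobs[j][2]) # completion time + delivery time
        else:
            heapq.heappush(ready, (neg_tail, j))   # preempted by a new release
    return bound

# jobs given as (release date, processing time, delivery time)
print(preemptive_longest_tail([(0, 4, 5), (1, 2, 9), (3, 3, 2)]))   # -> 12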

Relevance:

20.00%

Abstract:

Little is known about how human amnesia affects the activation of cortical networks during memory processing. In this study, we recorded high-density evoked potentials in 12 healthy control subjects and 11 amnesic patients with various types of brain damage affecting the medial temporal lobes, diencephalic structures, or both. Subjects performed a continuous recognition task composed of meaningful designs. Using whole-scalp spatiotemporal mapping techniques, we found that, during the first 200 ms following picture presentation, the map configurations of amnesics and controls were indistinguishable. Beyond this period, processing significantly differed. Between 200 and 350 ms, amnesic patients expressed different topographical maps than controls in response to new and repeated pictures. From 350 to 550 ms, healthy subjects showed modulation of the same maps in response to new and repeated items. In amnesics, by contrast, presentation of repeated items induced different maps, indicating distinct cortical processing of new and old information. The study indicates that cortical mechanisms underlying memory formation and re-activation in amnesia fundamentally differ from normal memory processing.