993 results for RAY-TRACING ALGORITHM
Abstract:
Proteolytic activity is an important virulence factor of Candida albicans (C. albicans). It is attributed to the family of secreted aspartic proteinases (Saps) from C. albicans, which comprises at least 10 members. Saps show controlled expression and regulation for the individual stages of the infection process. Distinct isoenzymes can be responsible for adherence and tissue damage in local infections, while others cause systemic diseases. Previously, only the structures of Sap2 and Sap3 were known. We have now succeeded in solving the X-ray crystal structures of the apoenzyme of Sap1 and of Sap5 in complex with pepstatin A, at 2.05 and 2.5 Å resolution, respectively. With the structure of Sap1, we have completed the set of structures of isoenzyme subgroup Sap1-3; of subgroup Sap4-6, the structure of Sap5 is the first to be described. This facilitates comparison of structural details as well as inhibitor binding modes among the different subgroup members. Structural analysis reveals a highly conserved overall secondary structure of Sap1-3 and Sap5. However, Sap5 clearly differs from Sap1-3 in its overall electrostatic charge as well as in the conformation of the entrance to its active-site cleft. Design of inhibitors specific for Sap5 should therefore concentrate on the S4 and S3 pockets, which differ significantly from those of Sap1-3 in size and electrostatic charge. Both Sap1 and Sap5 seem to play a major part in superficial Candida infections. Determination of the isoenzymes' structures can contribute to the development of new Sap-specific inhibitors for the treatment of superficial infections within a structure-based drug design program.
Abstract:
This paper proposes a heuristic for the scheduling of capacity requests and the periodic assignment of radio resources in geostationary (GEO) satellite networks with star topology, using the Demand Assigned Multiple Access (DAMA) protocol at the link layer, and Multi-Frequency Time Division Multiple Access (MF-TDMA) and Adaptive Coding and Modulation (ACM) at the physical layer.
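Since the abstract does not detail the heuristic itself, the following is a minimal Python sketch of the kind of periodic DAMA assignment it describes, under a toy MF-TDMA frame model; the function name, the greedy largest-demand-first rule, and all parameters are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of a DAMA-style periodic assignment (illustrative only;
# the paper's heuristic is not specified in the abstract). Assumes an
# MF-TDMA frame of `carriers` frequency channels with `slots` timeslots
# each, and capacity requests expressed in slots.
def assign_frame(requests, carriers, slots):
    """requests: list of (terminal_id, demanded_slots).
    Returns {terminal_id: [(carrier, slot), ...]}."""
    free = [(c, s) for c in range(carriers) for s in range(slots)]
    plan = {}
    # Serve the largest demands first so big requests are not starved;
    # any demand exceeding the remaining capacity is truncated.
    for terminal, demand in sorted(requests, key=lambda r: -r[1]):
        plan[terminal], free = free[:demand], free[demand:]
    return plan

# Example: three terminals sharing 2 carriers x 4 slots per frame.
print(assign_frame([("t1", 3), ("t2", 2), ("t3", 4)], carriers=2, slots=4))
```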
Abstract:
The subject of this project is "Energy Dispersive X-Ray Fluorescence" (EDXRF). This technique can be used for a tremendous variety of elemental analysis applications. It provides one of the simplest, most accurate, and most economical analytical methods for the determination of the chemical composition of many types of materials. The purposes of this project are:
- To give some basic information about Energy Dispersive X-Ray Fluorescence.
- To perform qualitative and quantitative analysis of different samples (water dissolutions, powders, oils, ...) in order to define the sensitivity and detection limits of the equipment.
- To produce a comprehensive and easy-to-use manual for the ARL QUANT'X Energy Dispersive X-Ray Fluorescence apparatus.
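For the detection-limit goal mentioned above, a standard EDXRF figure of merit is the limit of detection from counting statistics. A minimal sketch follows; the formula is the common 3-sigma criterion, and the numbers in the example are made up for illustration, not measured on the ARL QUANT'X.

```python
import math

def detection_limit(c_std, n_net, n_bkg):
    """Approximate EDXRF limit of detection for one element:
    LOD = 3 * C_std * sqrt(N_bkg) / N_net,
    where C_std is the concentration of the calibration standard,
    N_net the net peak counts, and N_bkg the background counts."""
    return 3.0 * c_std * math.sqrt(n_bkg) / n_net

# Example: a 100 ppm standard giving 50,000 net counts over a
# 2,500-count background -> LOD of about 0.3 ppm.
print(detection_limit(c_std=100.0, n_net=50_000, n_bkg=2_500))
```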
Abstract:
An algorithm that optimizes and creates pairings for airline crews, subsequently implemented in Java.
Abstract:
We will implement the "GA-SEFS" algorithm by Tsymbal and experimentally analyse its performance depending on the classifier algorithm used in the fitness function (NB, MNge, SMO). We will also study the effect of adding to the fitness function a measure that controls the complexity of the base classifiers.
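As a rough illustration of the kind of fitness function this describes (not Tsymbal's actual implementation), here is a sketch using scikit-learn's GaussianNB as a stand-in for the NB base classifier, with a subset-size penalty standing in for the complexity-control measure; the penalty weight alpha is an assumption.

```python
# Illustrative GA-SEFS-style fitness: cross-validated accuracy of a base
# classifier on a feature subset, minus a penalty on subset size.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB  # NB; SMO could be sklearn.svm.SVC

def fitness(mask, X, y, alpha=0.01):
    """mask: boolean vector selecting the features of one GA individual."""
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return 0.0
    acc = cross_val_score(GaussianNB(), X[:, cols], y, cv=5).mean()
    return acc - alpha * cols.size / mask.size  # complexity-control term

# Usage sketch: score a random feature mask on some dataset (X, y).
# rng = np.random.default_rng(0); mask = rng.random(X.shape[1]) > 0.5
# print(fitness(mask, X, y))
```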
Abstract:
PURPOSE: Most RB1 mutations are unique and distributed throughout the RB1 gene. Their detection can be time-consuming, and the yield is especially low in conservatively treated patients with sporadic unilateral retinoblastoma (Rb). In order to identify patients with a true risk of developing Rb, and to reduce the number of unnecessary examinations under anesthesia in all other cases, we developed a universal, sensitive, efficient, and cost-effective strategy based on intragenic haplotype analysis. METHODS: This algorithm allows the calculation of the a posteriori risk of developing Rb, taking into account (a) RB1 loss of heterozygosity in tumors, (b) the preferential paternal origin of new germline mutations, (c) the a priori risk derived from empirical data by Vogel, and (d) a disease penetrance of 90% in most cases. We report the occurrence of Rb in first-degree relatives of patients with sporadic Rb who visited the Jules Gonin Eye Hospital, Lausanne, Switzerland, from January 1994 to December 2006, compared to the number of new Rb cases expected using our algorithm. RESULTS: A total of 134 families with sporadic Rb were enrolled; testing was performed in 570 individuals, and 99 patients younger than 4 years old were identified. We observed one new case of Rb. Using our algorithm, the cumulative total a posteriori risk of recurrence was 1.77. CONCLUSIONS: This is the first time that linkage analysis has been validated to monitor the risk of recurrence in sporadic Rb. It should be a useful tool in genetic counseling, especially when direct screening of RB1 for mutations yields a negative result or is unavailable.
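The abstract does not give the algorithm's formulas; the toy sketch below only illustrates the basic Bayesian idea of combining a haplotype-based carrier probability with ~90% penetrance. The binary haplotype test, the zero residual risk for non-carriers, and the example prior are all simplifying assumptions, not the authors' published algorithm.

```python
def posterior_rb_risk(prior_carrier, inherited_risk_haplotype, penetrance=0.90):
    """Toy Bayesian update: if the at-risk haplotype was not transmitted,
    the carrier risk collapses to (almost) zero; if it was, the a priori
    carrier risk stands and is multiplied by disease penetrance."""
    carrier = prior_carrier if inherited_risk_haplotype else 0.0
    return carrier * penetrance

# Example: a sibling with an assumed a priori 2% carrier risk who inherited
# the proband's risk haplotype -> ~1.8% risk of developing Rb.
print(posterior_rb_risk(0.02, True))
```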
Abstract:
The care of patients with ulcerative colitis (UC) remains challenging even though morbidity and mortality rates have been considerably reduced during the last 30 years. The traditional management with intravenous corticosteroids was modified by the introduction of ciclosporin and infliximab. In this review, we focus on the treatment of patients with moderate to severe UC. Four typical clinical scenarios are defined and discussed in detail. The treatment recommendations are based on the current literature, published guidelines and reviews, and were discussed at a consensus meeting of Swiss experts in the field. Comprehensive treatment algorithms, aimed at daily clinical practice, were developed.
Abstract:
In this paper, we propose a methodology to determine the most efficient and least costly way of performing crew pairing optimization. The methodology is based on algorithmic optimization, developed in the Java programming language on the Eclipse open-source IDE, to solve crew scheduling problems.
Abstract:
In silico screening has become a valuable tool in drug design, but some drug targets represent real challenges for docking algorithms. This is especially true for metalloproteins, whose interactions with ligands are difficult to parametrize. Our docking algorithm, EADock, is based on the CHARMM force field, which ensures a physically sound scoring function and good transferability to a wide range of systems, but it also exhibits difficulties in the case of some metalloproteins. Here, we consider the therapeutically important case of heme proteins featuring an iron core at the active site. Using a standard docking protocol, in which the iron-ligand interaction is underestimated, we obtained a success rate of 28% for a test set of 50 heme-containing complexes with an iron-ligand contact. By introducing Morse-like metal binding potentials (MMBP), which are fitted to reproduce density functional theory calculations, we are able to increase the success rate to 62%. The remaining failures are mainly due to specific ligand-water interactions in the X-ray structures. Testing the MMBP on a second data set of non-iron binders (14 cases) demonstrates that they do not introduce a spurious bias towards metal binding, suggesting that they may also be used reliably for cross-docking studies.
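As an illustration of the functional form, a Morse-like potential can be sketched as follows; the parameter values are placeholders, since the actual MMBP parameters were fitted to DFT calculations and are not given in the abstract.

```python
import math

def morse_mmbp(r, d_e, a, r_0):
    """Morse-like metal binding potential for a metal-ligand distance r:
    d_e is the well depth, a the width, and r_0 the equilibrium distance.
    Returns the energy relative to the dissociated state."""
    return d_e * (1.0 - math.exp(-a * (r - r_0)))**2 - d_e

# Example: scan the Fe-ligand distance around an assumed 2.1 Angstrom
# equilibrium with placeholder parameters (kcal/mol scale).
for r in (1.8, 2.1, 2.6):
    print(r, round(morse_mmbp(r, d_e=30.0, a=2.0, r_0=2.1), 2))
```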
Abstract:
Descriptors based on Molecular Interaction Fields (MIF) are highly suitable for drug discovery, but their size (thousands of variables) often limits their application in practice. Here we describe a simple and fast computational method that extracts from a MIF a handful of highly informative points (hot spots) that summarize the most relevant information. The method was developed specifically for drug discovery, is fast, and does not require human supervision, making it suitable for application to very large series of compounds. The quality of the results was tested by running the method on the ligand structures of a large number of ligand-receptor complexes and then comparing the positions of the selected hot spots with the actual atoms of the receptor. As an additional test, the hot spots obtained with the novel method were used to compute GRIND-like molecular descriptors, which were compared with the original GRIND. In both cases, the results show that the novel method is highly suitable for describing ligand-receptor interactions and compares favorably with other state-of-the-art methods.
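A rough sketch of what hot-spot extraction from a MIF grid can look like, assuming local-energy-minimum selection with a threshold; the paper's actual, unsupervised selection criteria are not specified in the abstract, so every choice below is an assumption.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def extract_hot_spots(mif, threshold=-1.0):
    """Toy hot-spot extraction: keep grid nodes that are local minima
    of the interaction energy and more favourable than `threshold`
    (e.g. kcal/mol), ranked from most to least favourable."""
    local_min = (mif == minimum_filter(mif, size=3))
    idx = np.argwhere(local_min & (mif < threshold))
    return idx[np.argsort(mif[tuple(idx.T)])]

# Example on a random 20^3 grid standing in for a GRID-style MIF.
rng = np.random.default_rng(0)
print(extract_hot_spots(rng.normal(0, 1, (20, 20, 20)), threshold=-2.5)[:5])
```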
Abstract:
A basic prerequisite for in vivo X-ray imaging of the lung is the exact determination of the radiation dose. Achieving resolutions of the order of micrometres can become particularly challenging owing to the increased dose, which in the worst case can be lethal for the imaged animal model. A framework for linking image quality to radiation dose, in order to optimize experimental parameters with respect to dose reduction, is presented. The approach may find application in current and future in vivo studies, both to facilitate proper experiment planning and radiation risk assessment and to fully exploit imaging capabilities.
Abstract:
A systolic array to implement lattice-reduction-aided linear detection is proposed for a MIMO receiver. The lattice reduction algorithm and the ensuing linear detection are operated in the same array, which is hardware-efficient. The all-swap lattice reduction algorithm (ASLR) is considered for the systolic design. ASLR is a variant of the LLL algorithm that processes all lattice basis vectors within one iteration. Lattice-reduction-aided linear detection based on the ASLR and LLL algorithms shows very similar bit-error-rate performance, while ASLR is more time-efficient in the systolic array, especially for systems with a large number of antennas.
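For intuition, here is a simplified, software-only sketch of an all-swap LLL-style reduction on a real-valued basis, alternating even and odd adjacent pairs so that all swaps within a phase are independent, as in odd-even transposition sort. This illustrates the idea only; it is not the cited hardware ASLR, and the parameter delta and termination rule are standard LLL conventions assumed here.

```python
import numpy as np

def all_swap_reduction(B, delta=0.75, max_iter=100):
    """Columns of B are the lattice basis vectors. Each iteration
    size-reduces the basis, then swaps every adjacent column pair
    violating the Lovasz condition, alternating even/odd pairs."""
    B = B.astype(float).copy()
    n = B.shape[1]
    idle_phases = 0
    for it in range(max_iter):
        # Size reduction using the R factor of a QR decomposition.
        _, R = np.linalg.qr(B)
        for j in range(1, n):
            for i in range(j - 1, -1, -1):
                mu = int(round(R[i, j] / R[i, i]))
                if mu:
                    B[:, j] -= mu * B[:, i]
                    R[: i + 1, j] -= mu * R[: i + 1, i]
        # All-swap phase: test every second adjacent pair at once.
        _, R = np.linalg.qr(B)
        swapped = False
        for k in range(it % 2, n - 1, 2):
            if delta * R[k, k] ** 2 > R[k, k + 1] ** 2 + R[k + 1, k + 1] ** 2:
                B[:, [k, k + 1]] = B[:, [k + 1, k]]
                swapped = True
        idle_phases = 0 if swapped else idle_phases + 1
        if idle_phases == 2:  # neither even nor odd phase swapped
            break
    return B

# Example: reduce a skewed 2-D basis (columns are basis vectors).
print(all_swap_reduction(np.array([[1.0, 7.0], [1.0, 8.0]])))
```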
Abstract:
For a wide range of environmental, hydrological, and engineering applications, there is a fast-growing need for high-resolution imaging. In this context, waveform tomographic imaging of crosshole georadar data is a powerful method able to provide images of pertinent electrical properties in near-surface environments with unprecedented spatial resolution. In contrast, conventional ray-based tomographic methods, which consider only a very limited part of the recorded signal (first-arrival traveltimes and maximum first-cycle amplitudes), suffer from inherent limitations in resolution and may prove inadequate in complex environments. For a typical crosshole georadar survey, the potential improvement in resolution when using waveform-based approaches instead of ray-based approaches is in the range of one order of magnitude. Moreover, the spatial resolution of waveform-based inversions is comparable to that of common logging methods. While waveform tomographic imaging has become well established in exploration seismology over the past two decades, it remains comparatively underdeveloped in the georadar domain despite corresponding needs. Recently, different groups have presented finite-difference time-domain waveform inversion schemes for crosshole georadar data, which are adaptations and extensions of Tarantola's seminal nonlinear generalized least-squares approach developed for the seismic case. First applications of these new crosshole georadar waveform inversion schemes to synthetic and field data have shown promising results. However, little is known about the limits and performance of such schemes in complex environments. The general motivation of my thesis is therefore the evaluation of the robustness and limitations of waveform inversion algorithms for crosshole georadar data, in order to apply such schemes to a wide range of real-world problems.

One crucial issue in making any waveform scheme applicable and effective for real-world crosshole georadar problems is the accurate estimation of the source wavelet, which is unknown in reality. Waveform inversion schemes for crosshole georadar data require forward simulations of the wavefield in order to iteratively solve the inverse problem; accurate knowledge of the source wavelet is therefore critically important for their successful application. Relatively small differences in the estimated source wavelet shape can lead to large differences in the resulting tomograms. In the first part of my thesis, I explore the viability and robustness of a relatively simple iterative deconvolution technique that incorporates the estimation of the source wavelet into the waveform inversion procedure rather than adding additional model parameters to the inversion problem. Extensive tests indicate that this source wavelet estimation technique is simple yet effective, and is able to provide remarkably accurate and robust estimates of the source wavelet in the presence of strong heterogeneity in both the dielectric permittivity and the electrical conductivity, as well as significant ambient noise in the recorded data. Furthermore, our tests also indicate that the approach is insensitive to the phase characteristics of the starting wavelet, which is not the case when the wavelet estimation is directly incorporated into the inverse problem.

Another critical issue with crosshole georadar waveform inversion schemes that clearly needs to be investigated is the consequence of the common assumption of frequency-independent electromagnetic constitutive parameters. This is crucial since, in reality, these parameters are known to be frequency-dependent and complex, and thus recorded georadar data may show significant dispersive behaviour. In particular, in the presence of water, there is a wide body of evidence showing that the dielectric permittivity can be significantly frequency-dependent over the GPR frequency range, owing to a variety of relaxation processes. The second part of my thesis is therefore dedicated to the evaluation of the reconstruction limits of a non-dispersive crosshole georadar waveform inversion scheme in the presence of varying degrees of dielectric dispersion. I show that the inversion algorithm, combined with the iterative deconvolution-based source wavelet estimation procedure, which is partially able to account for the frequency-dependent effects through an "effective" wavelet, performs remarkably well in weakly to moderately dispersive environments and has the ability to provide adequate tomographic reconstructions.
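As an illustration of the deconvolution-based wavelet-estimation idea (a simplified stand-in, not the thesis code), one frequency-domain least-squares step might look like the following; the water-level factor eps and the trace averaging are standard stabilization choices assumed here.

```python
import numpy as np

def estimate_wavelet_correction(d_obs, d_syn, eps=1e-3):
    """Given observed traces d_obs and traces d_syn simulated with the
    current wavelet estimate (both shaped (n_traces, n_samples)), find
    the filter mapping synthetic to observed data by stabilized
    least-squares deconvolution in the frequency domain, averaged
    over all traces."""
    D, G = np.fft.rfft(d_obs, axis=1), np.fft.rfft(d_syn, axis=1)
    num = (D * np.conj(G)).sum(axis=0)
    den = (np.abs(G) ** 2).sum(axis=0)
    correction = num / (den + eps * den.max())  # water-level damping
    return np.fft.irfft(correction, n=d_obs.shape[1])

# In the iterative scheme, the current wavelet is convolved with this
# correction filter, the wavefield is re-simulated, and the process
# repeats until the correction converges to a delta-like spike.
```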
Abstract:
From a managerial point of view, the more efficient, simple, and parameter-free (ESP) an algorithm is, the more likely it will be used in practice for solving real-life problems. Following this principle, an ESP algorithm for solving the Permutation Flowshop Sequencing Problem (PFSP) is proposed in this article. Using an Iterated Local Search (ILS) framework, the so-called ILS-ESP algorithm is able to compete in performance with other well-known ILS-based approaches, which are considered among the most efficient algorithms for the PFSP. However, while other similar approaches still employ several parameters that can affect their performance if not properly chosen, our algorithm does not require any particular fine-tuning process, since it uses basic "common sense" rules for the local search, perturbation, and acceptance-criterion stages of the ILS metaheuristic. Our approach defines a new operator for the ILS perturbation process, a new acceptance criterion based on extremely simple and transparent rules, and a biased randomization process that generates different alternative initial solutions of similar quality, attained by applying biased randomization to a classical PFSP heuristic. This diversification of the initial solution aims at avoiding poorly designed starting points and thus allows the methodology to take advantage of current trends in parallel and distributed computing. A set of extensive tests, based on literature benchmarks, has been carried out in order to validate our algorithm and compare it against other approaches. These tests show that our parameter-free algorithm is able to compete with state-of-the-art metaheuristics for the PFSP. The experiments also show that, when using parallel computing, it is possible to improve on the top ILS-based metaheuristic simply by incorporating our biased randomization process with a high-quality pseudo-random number generator.
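A bare-bones sketch of the ILS skeleton and of a geometric biased-randomization helper of the kind the abstract alludes to; the operator details, the acceptance rule, and the parameter beta are illustrative assumptions, not the paper's actual ILS-ESP components.

```python
import random

def biased_choice(sorted_candidates, beta=0.3):
    """Biased randomization: walk down a merit-sorted candidate list,
    stopping at each position with probability beta (a geometric
    distribution that favours the best candidates)."""
    i = 0
    while i < len(sorted_candidates) - 1 and random.random() > beta:
        i += 1
    return sorted_candidates[i]

def ils_skeleton(init_solution, local_search, perturb, cost, max_iters=1000):
    """Generic Iterated Local Search loop of the kind ILS-ESP builds on.
    `local_search` and `perturb` are problem-specific functions on
    permutations; `cost` is the makespan for the PFSP."""
    best = current = local_search(init_solution)
    for _ in range(max_iters):
        candidate = local_search(perturb(current))
        if cost(candidate) <= cost(current):  # simple, transparent acceptance
            current = candidate
            if cost(candidate) < cost(best):
                best = candidate
    return best
```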
Abstract:
OBJECTIVE: A new tool to quantify visceral adipose tissue (VAT) over the android region of a total-body dual-energy X-ray absorptiometry (DXA) scan has recently been reported. The measurement, CoreScan, is currently available on Lunar iDXA densitometers. The purpose of the study was to determine the precision of the CoreScan VAT measurement, which is critical for understanding the utility of this measure in longitudinal trials. DESIGN AND METHODS: VAT precision was characterized in both an anthropomorphic imaging phantom (measured on 10 Lunar iDXA systems) and a clinical population of obese women (n = 32). RESULTS: The intrascanner precision for the VAT phantom across 9 quantities of VAT mass (0-1,800 g) ranged from 28.4 to 38.0 g. The interscanner precision ranged from 24.7 to 38.4 g. There was no statistical dependence of either the inter- or intrascanner precision result on the quantity of VAT (p = 0.670). Combining inter- and intrascanner precision yielded a total phantom precision estimate of 47.6 g for VAT mass, which corresponds to a 4.8% coefficient of variation (CV) for a 1 kg VAT mass. Our clinical population, who completed replicate total-body scans with repositioning between scans, showed a precision of 56.8 g on an average VAT mass of 1110.4 g, corresponding to a 5.1% CV. Hence, the in vivo precision result was similar to the phantom precision result. CONCLUSIONS: The study suggests that CoreScan has a relatively low precision error in both phantoms and obese women and may therefore be a useful addition to clinical trials where interventions target changes in visceral adiposity.
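For reference, the link between the reported precision and %CV is simple arithmetic. The sketch below uses the standard root-mean-square standard deviation for duplicate scans (a conventional formula, not taken from the paper) and checks the reported 56.8 g on 1110.4 g against the quoted 5.1% CV.

```python
import math

def rms_precision(replicate_pairs):
    """Short-term precision from duplicate scans (a, b) per subject:
    RMS standard deviation across subjects, and %CV against the mean."""
    sds = [abs(a - b) / math.sqrt(2) for a, b in replicate_pairs]
    rms_sd = math.sqrt(sum(sd ** 2 for sd in sds) / len(sds))
    grand_mean = sum(a + b for a, b in replicate_pairs) / (2 * len(replicate_pairs))
    return rms_sd, 100.0 * rms_sd / grand_mean

# Check against the reported in vivo numbers:
# 56.8 g / 1110.4 g * 100 = 5.1% CV.
print(round(56.8 / 1110.4 * 100, 1))
```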