959 results for efficient algorithms


Abstract:

Aim: To demonstrate that the evaluation of erythrocyte dysmorphism by light microscopy with lowering of the condenser lens (LMLC) is useful for identifying patients with haematuria of glomerular or non-glomerular origin. Methods: A comparative double-blind study between phase contrast microscopy (PCM) and LMLC is reported to evaluate the efficacy of these techniques. Urine samples from 39 patients followed up for 9 months were analyzed and classified as glomerular or non-glomerular haematuria. The microscopic techniques were compared using receiver operating characteristic (ROC) analysis and the area under the curve (AUC). Reproducibility was assessed by the coefficient of variation (CV). Results: Specific cut-offs were set for each method according to their best combination of sensitivity and specificity: 30% for PCM and 40% for standard LMLC, yielding 95% sensitivity and 100% specificity for PCM, and 90% sensitivity and 100% specificity for LMLC. In the ROC analysis, the AUC was 0.99 for PCM and 0.96 for LMLC. The CV in the glomerular haematuria group was very similar for PCM (35%) and LMLC (35.3%). Conclusion: LMLC proved effective in directing the investigation of haematuria toward a nephrological or urological work-up, and it can substitute for PCM when that equipment is not available.
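
As a rough illustration of how such cut-offs are chosen, the Python sketch below sweeps candidate dysmorphic-erythrocyte thresholds and reports sensitivity, specificity, and a trapezoidal AUC; the data and variable names are hypothetical placeholders, not values from the study.

    # Minimal sketch of cut-off selection by ROC analysis (hypothetical data,
    # not the values from the study).
    import numpy as np

    # Percentage of dysmorphic erythrocytes per patient and reference label
    # (1 = glomerular haematuria, 0 = non-glomerular), purely illustrative.
    scores = np.array([55, 48, 62, 35, 70, 41, 12, 8, 20, 5, 15, 25], dtype=float)
    labels = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0])

    def sens_spec(cutoff):
        predicted = scores >= cutoff          # test positive above the cut-off
        tp = np.sum(predicted & (labels == 1))
        tn = np.sum(~predicted & (labels == 0))
        return tp / np.sum(labels == 1), tn / np.sum(labels == 0)

    # Sweep candidate cut-offs, keep the one maximising sensitivity + specificity.
    cutoffs = np.arange(0, 101, 5)
    best = max(cutoffs, key=lambda c: sum(sens_spec(c)))
    print("best cut-off:", best, "sens/spec:", sens_spec(best))

    # Trapezoidal AUC (1 - specificity on x, sensitivity on y).
    pairs = sorted((1 - sp, se) for se, sp in map(sens_spec, cutoffs))
    fpr, tpr = zip(*pairs)
    auc = sum((fpr[i + 1] - fpr[i]) * (tpr[i + 1] + tpr[i]) / 2
              for i in range(len(fpr) - 1))
    print("AUC ~", auc)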

Abstract:

Described in this article is a novel device that facilitates study of the cross-sectional anatomy of the human head. In designing our device, we aimed to protect sections of the head from the destructive handling they receive in the anatomy laboratory while also ensuring excellent visualization of the anatomic structures. We used an electric saw to create 15-mm sections of three cadaver heads in the three traditional anatomic planes and inserted each section into a thin, perforated display box made of transparent acrylic material. The thin display boxes with head sections are kept in anatomical order in a larger transparent acrylic storage box containing formaldehyde solution, which preserves the specimens while permitting direct observation of the structures and their anatomic relationships to each other. This box-within-a-box design allows students to easily view sections of a head in anatomical position and to examine internal structures by manipulating individual display boxes without altering the integrity of the preparations. This methodology for demonstrating cross-sectional anatomy allows efficient use of cadaveric material and technician time while also giving learners the best possible handling and visualization of complex anatomic structures. Our approach to teaching cross-sectional anatomy of the head can be applied to any part of the human body, and the value of our device design will only increase as advances in, and the proliferation of, imaging technology demand a more sophisticated understanding of cross-sectional anatomy. Anat Sci Educ 3: 141-143, 2010. (C) 2010 American Association of Anatomists.

Abstract:

Numerical optimisation methods are increasingly being applied to agricultural systems models to identify the most profitable management strategies. The available optimisation algorithms are reviewed and compared; the literature and our own studies identify evolutionary algorithms (including genetic algorithms) as superior in this regard to simulated annealing, tabu search, hill-climbing, and direct-search methods. Results of a complex beef property optimisation, using a real-value genetic algorithm, are presented. The relative contributions of the method's range of operational options and parameters are discussed, and general recommendations are listed to assist practitioners applying evolutionary algorithms to agricultural systems problems. (C) 2001 Elsevier Science Ltd. All rights reserved.
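
For readers unfamiliar with real-value genetic algorithms, the sketch below shows the basic loop of tournament selection, blend crossover, and Gaussian mutation over a vector of continuous management parameters; the objective function, population size, and operator settings are illustrative assumptions, not those used in the beef property study.

    # Minimal real-value genetic algorithm sketch (hypothetical settings; the
    # agricultural systems model is stood in for by a toy objective function).
    import numpy as np

    rng = np.random.default_rng(0)

    def gross_margin(x):
        # Placeholder for a call to the systems model; a smooth toy function here.
        return -np.sum((x - 0.3) ** 2)

    POP, DIM, GENS = 40, 5, 100
    lo, hi = 0.0, 1.0
    pop = rng.uniform(lo, hi, size=(POP, DIM))

    for gen in range(GENS):
        fitness = np.array([gross_margin(ind) for ind in pop])

        def select():
            # Tournament selection: pick the fitter of two random individuals.
            i, j = rng.integers(POP, size=2)
            return pop[i] if fitness[i] > fitness[j] else pop[j]

        children = []
        while len(children) < POP:
            a, b = select(), select()
            w = rng.uniform(size=DIM)          # blend (arithmetic) crossover
            child = w * a + (1 - w) * b
            child += rng.normal(0, 0.05, DIM)  # Gaussian mutation
            children.append(np.clip(child, lo, hi))
        pop = np.array(children)

    best = pop[np.argmax([gross_margin(ind) for ind in pop])]
    print("best strategy:", best, "margin:", gross_margin(best))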

Abstract:

The step size determines the accuracy of a discrete element simulation. Because the position and velocity updates are computed from a pre-calculated table, the usual integration formulas cannot be used to control the step size. The step size control scheme presented here, designed for the table-driven velocity and position calculation, estimates the error from the difference between the result of one large step and that of two half steps. This variable time step method automatically chooses a suitable step size for each particle at each step according to the local conditions. Simulations using a fixed time step are compared with those using the variable time step. The difference in computation time for the same accuracy depends on the particular problem; for a simple test case the times are roughly similar. However, the variable step size delivers the required accuracy on the first run, whereas a fixed step size may require several runs to check the simulation accuracy, or a conservative step size that results in longer run times. (C) 2001 Elsevier Science Ltd. All rights reserved.
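
The step-doubling idea can be sketched as follows; the explicit Euler update, tolerance values, and spring-force example are assumptions for illustration only and stand in for the paper's table-driven position and velocity calculation.

    # Sketch of step-doubling error control: compare one step of size h with two
    # steps of size h/2 and adjust h from the difference. Illustrative only: the
    # table-driven update is replaced by explicit Euler for a unit-mass particle.
    def step(x, v, h, force):
        a = force(x)                       # acceleration (unit mass assumed)
        return x + v * h, v + a * h

    def adaptive_step(x, v, h, force, tol=1e-6):
        """Advance one step; return the new state, the step actually taken,
        and a suggested size for the next step."""
        while True:
            x1, v1 = step(x, v, h, force)          # one big step
            xm, vm = step(x, v, h / 2, force)      # two half steps
            x2, v2 = step(xm, vm, h / 2, force)
            err = max(abs(x2 - x1), abs(v2 - v1))  # step-doubling error estimate
            if err <= tol:
                h_next = 2 * h if err < tol / 4 else h
                return x2, v2, h, h_next
            h /= 2                                 # too inaccurate: retry smaller

    # Example: particle on a linear spring, force = -k x.
    x, v, h, t = 1.0, 0.0, 1e-3, 0.0
    while t < 1.0:
        x, v, taken, h = adaptive_step(x, v, h, lambda q: -10.0 * q)
        t += taken
    print("x(t=1) ~", x, "suggested next step:", h)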

Abstract:

Incremental parsing has long been recognized as a technique of great utility in the construction of language-based editors, and correspondingly, the area currently enjoys a mature theory. Unfortunately, many practical considerations have been largely overlooked in previously published algorithms. Many user requirements for an editing system necessarily impact on the design of its incremental parser, but most approaches focus only on one: response time. This paper details an incremental parser based on LR parsing techniques and designed for use in a modeless syntax recognition editor. The nature of this editor places significant demands on the structure and quality of the document representation it uses, and hence, on the parser. The strategy presented here is novel in that both the parser and the representation it constructs are tolerant of the inevitable and frequent syntax errors that arise during editing. This is achieved by a method that differs from conventional error repair techniques, and that is more appropriate for use in an interactive context. Furthermore, the parser aims to minimize disturbance to this representation, not only to ensure other system components can operate incrementally, but also to avoid unfortunate consequences for certain user-oriented services. The algorithm is augmented with a limited form of predictive tree-building, and a technique is presented for the determination of valid symbols for menu-based insertion. Copyright (C) 2001 John Wiley & Sons, Ltd.
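
To give a feel for the core incremental idea of reusing parse structure that an edit cannot have affected, here is a deliberately simplified (non-LR) Python sketch; the Node representation and the reparse_region callback are hypothetical stand-ins, not the error-tolerant LR algorithm described in the paper.

    # Simplified illustration of incremental reparsing: subtrees of the previous
    # parse whose character spans lie entirely outside the edited region are kept,
    # and only the damaged region is reparsed. Spans are assumed to already
    # reflect the edited text; that bookkeeping is omitted here.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Node:
        kind: str
        start: int                      # offset of the first character covered
        end: int                        # one past the last character covered
        children: List["Node"] = field(default_factory=list)

    def reusable(node, edit_start, edit_end):
        # A subtree is reusable only if its span does not touch the edited region.
        return node.end <= edit_start or node.start >= edit_end

    def incremental_parse(root, text, edit_start, edit_end, reparse_region):
        kept, lo, hi = [], edit_start, edit_end
        for child in root.children:
            if reusable(child, edit_start, edit_end):
                kept.append(child)
            else:
                lo, hi = min(lo, child.start), max(hi, child.end)
        # Reparse only the damaged span (via the hypothetical reparse_region
        # callback) and splice the fresh subtrees back in, in source order.
        fresh = reparse_region(text, lo, hi)
        children = sorted(kept + fresh, key=lambda n: n.start)
        return Node(root.kind, 0, len(text), children)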

Abstract:

A data warehouse is a data repository which collects and maintains a large amount of data from multiple distributed, autonomous and possibly heterogeneous data sources. Often the data is stored in the form of materialized views in order to provide fast access to the integrated data. One of the most important decisions in designing a data warehouse is the selection of views for materialization. The objective is to select an appropriate set of views that minimizes the total query response time, subject to the constraint that the total maintenance time for these materialized views stays within a given bound. This view selection problem differs fundamentally from the view selection problem under a disk space constraint. In this paper the view selection problem under the maintenance time constraint is investigated, and two efficient heuristic algorithms for it are proposed. The key to devising the proposed algorithms is to define good heuristic functions and to reduce the problem to well-solved optimization problems; as a result, an approximate solution of the well-solved optimization problem yields a feasible solution of the original problem. (C) 2001 Elsevier Science B.V. All rights reserved.
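
A common heuristic shape for this kind of budgeted selection is a greedy rule that repeatedly materializes the view offering the largest query-time benefit per unit of maintenance time while the budget allows; the sketch below illustrates that generic idea with made-up numbers and is not one of the two heuristics proposed in the paper.

    # Greedy sketch of view selection under a maintenance-time constraint:
    # repeatedly materialize the view with the largest reduction in total query
    # response time per unit of maintenance time, while the budget allows.
    # (Illustrative only; the candidate views and numbers are hypothetical.)
    candidates = [
        # (name, query-time benefit, maintenance time)
        ("sales_by_region", 120.0, 30.0),
        ("sales_by_product", 90.0, 45.0),
        ("monthly_totals", 60.0, 10.0),
        ("customer_summary", 40.0, 25.0),
    ]
    maintenance_budget = 60.0

    selected, used = [], 0.0
    remaining = list(candidates)
    while remaining:
        # Best benefit per unit maintenance time among views that still fit.
        feasible = [v for v in remaining if used + v[2] <= maintenance_budget]
        if not feasible:
            break
        best = max(feasible, key=lambda v: v[1] / v[2])
        selected.append(best[0])
        used += best[2]
        remaining.remove(best)

    print("materialize:", selected, "maintenance time used:", used)

In a real warehouse the benefit and maintenance cost of a view depend on which other views are already materialized, which is what makes the problem hard and motivates the reduction to well-solved optimization problems mentioned above.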

Abstract:

Lentiviral vectors pseudotyped with vesicular stomatitis virus glycoprotein (VSV-G) are emerging as the vectors of choice for in vitro and in vivo gene therapy studies. However, the current method for harvesting lentivectors relies upon ultracentrifugation at 50 000 g for 2 h, and at this ultra-high speed the rotors currently in use generally have a small volume capacity. Preparations of large volumes of high-titre vectors are therefore time-consuming and laborious to perform. In the present study, viral vector supernatant harvests from vector-producing cells (VPCs) were pre-treated with various amounts of poly-L-lysine (PLL) and concentrated by low-speed centrifugation. Optimal conditions were established when 0.005% PLL (w/v) was added to vector supernatant harvests, followed by incubation for 30 min and centrifugation at 10 000 g for 2 h at 4 °C. Direct comparison with ultracentrifugation demonstrated that the new method consistently produced larger volumes (6 ml) of high-titre viral vector at 1 × 10^8 transduction units (TU)/ml (from about 3000 ml of supernatant) in one round of concentration. Electron microscopic analysis showed that PLL and viral vector formed complexes, which probably facilitated precipitation during low-speed concentration (10 000 g), a speed which does not usually precipitate viral particles efficiently. Transfection of several cell lines in vitro and transduction in vivo in the liver with the lentivector/PLL complexes demonstrated efficient gene transfer without any significant signs of toxicity. These results suggest that the new method provides a convenient means of harvesting large volumes of high-titre lentivectors, facilitating gene therapy experiments in large animals and human gene therapy trials, in which large amounts of lentiviral vectors are a prerequisite.

Abstract:

A new strategy has been developed for the rapid synthesis of peptide para-nitroanilides (pNA). The method involves derivatization of commercially available trityl chloride resin (TCP-resin) with 1,4-phenylenediamine, coupling of the desired amino acids by the standard Fmoc protocol, and oxidation of the intermediate para-aminoanilides (pAA) with Oxone®. This procedure allows easy assembly of the desired pAA on a standard resin, together with efficient oxidation and purification of the corresponding pNA. The method thus gives ready access to any desired peptide para-nitroanilide, a class of compounds that are useful substrates for the characterization and study of proteolytic enzymes.

Abstract:

The two-node tandem Jackson network serves as a convenient reference model for the analysis and testing of different methodologies and techniques in rare event simulation. In this paper we consider a new approach to efficiently estimate the probability that the content of the second buffer exceeds some high level L before it becomes empty, starting from a given state. The approach is based on a Markov additive process representation of the buffer processes, leading to an exponential change of measure to be used in an importance sampling procedure. Unlike changes of measures proposed and studied in recent literature, the one derived here is a function of the content of the first buffer. We prove that when the first buffer is finite, this method yields asymptotically efficient simulation for any set of arrival and service rates. In fact, the relative error is bounded independent of the level L; a new result which is not established for any other known method. When the first buffer is infinite, we propose a natural extension of the exponential change of measure for the finite buffer case. In this case, the relative error is shown to be bounded (independent of L) only when the second server is the bottleneck; a result which is known to hold for some other methods derived through large deviations analysis. When the first server is the bottleneck, experimental results using our method seem to suggest that the relative error is bounded linearly in L.
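
To make the likelihood-ratio bookkeeping concrete, the sketch below estimates the overflow probability on the embedded jump chain of a two-node tandem queue using the classical static change of measure that swaps the arrival rate with the second service rate; it illustrates the general importance-sampling mechanics only, not the state-dependent change of measure derived in the paper, and the rates are hypothetical.

    # Importance-sampling sketch for P(second buffer reaches L before emptying)
    # in a two-node tandem Jackson network, simulated via its embedded jump chain.
    # Change of measure: the classical static swap of the arrival rate with the
    # second service rate, NOT the state-dependent measure of the paper.
    import random

    lam, mu1, mu2 = 1.0, 4.0, 2.0      # arrival and service rates (hypothetical)
    L = 20                             # overflow level for the second buffer
    start = (1, 1)                     # starting state (q1, q2)

    def rates(q1, q2, a, s1, s2):
        # Enabled transitions and their rates in state (q1, q2).
        r = [("arrival", a)]
        if q1 > 0:
            r.append(("svc1", s1))
        if q2 > 0:
            r.append(("svc2", s2))
        return r

    def one_run():
        q1, q2 = start
        weight = 1.0                   # accumulated likelihood ratio
        while 0 < q2 < L:
            orig = rates(q1, q2, lam, mu1, mu2)
            tilt = rates(q1, q2, mu2, mu1, lam)   # swapped measure: lam <-> mu2
            tot_o, tot_t = sum(r for _, r in orig), sum(r for _, r in tilt)
            # Sample the next event under the tilted jump-chain probabilities.
            idx = random.choices(range(len(tilt)), weights=[r for _, r in tilt])[0]
            event, r_t = tilt[idx]
            _, r_o = orig[idx]
            weight *= (r_o / tot_o) / (r_t / tot_t)
            if event == "arrival":
                q1 += 1
            elif event == "svc1":
                q1, q2 = q1 - 1, q2 + 1
            else:
                q2 -= 1
        return weight if q2 >= L else 0.0

    n = 50_000
    estimate = sum(one_run() for _ in range(n)) / n
    print("estimated overflow probability ~", estimate)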

Abstract:

Read-only-memory-based (ROM-based) quantum computation (QC) is an alternative to oracle-based QC. It has the advantages of being less magical, and being more suited to implementing space-efficient computation (i.e., computation using the minimum number of writable qubits). Here we consider a number of small (one- and two-qubit) quantum algorithms illustrating different aspects of ROM-based QC. They are: (a) a one-qubit algorithm to solve the Deutsch problem; (b) a one-qubit binary multiplication algorithm; (c) a two-qubit controlled binary multiplication algorithm; and (d) a two-qubit ROM-based version of the Deutsch-Jozsa algorithm. For each algorithm we present experimental verification using nuclear magnetic resonance ensemble QC. The average fidelities for the implementation were in the ranges 0.9-0.97 for the one-qubit algorithms, and 0.84-0.94 for the two-qubit algorithms. We conclude with a discussion of future prospects for ROM-based quantum computation. We propose a four-qubit algorithm, using Grover's iterate, for solving a miniature real-world problem relating to the lengths of paths in a network.
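
For contrast with the ROM-based one-qubit approach described above, the standard oracle-based Deutsch algorithm can be simulated in a few lines; the sketch below implements the textbook two-qubit circuit, not the ROM-based version reported in the paper.

    # Simulation of the standard oracle-based Deutsch algorithm (two qubits),
    # shown only for contrast with the ROM-based one-qubit version discussed above.
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    I = np.eye(2)

    def oracle(f):
        # U_f |x>|y> = |x>|y XOR f(x)>, as a 4x4 permutation matrix.
        U = np.zeros((4, 4))
        for x in (0, 1):
            for y in (0, 1):
                U[2 * x + (y ^ f(x)), 2 * x + y] = 1
        return U

    def deutsch(f):
        state = np.kron([1, 0], [0, 1])         # start in |0>|1>
        state = np.kron(H, H) @ state           # Hadamard on both qubits
        state = oracle(f) @ state               # single oracle query
        state = np.kron(H, I) @ state           # Hadamard on the first qubit
        p_first_is_one = state[2] ** 2 + state[3] ** 2
        return "balanced" if p_first_is_one > 0.5 else "constant"

    print(deutsch(lambda x: 0))   # constant function -> "constant"
    print(deutsch(lambda x: x))   # balanced function -> "balanced"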