974 results for Simulated annealing algorithm
Abstract:
OBJECTIVE: The objective of this trial was to assess which type of warm-up has the greatest effect on virtual reality (VR) laparoscopy performance. The following warm-up strategies were applied: a hands-on exercise (group 1), a cognitive exercise (group 2), and no warm-up (control, group 3). DESIGN: This is a 3-arm randomized controlled trial. SETTING: The trial was conducted at the department of surgery of the University Hospital Basel in Switzerland. PARTICIPANTS: A total of 94 participants, all laypersons without any surgical or VR experience, completed the study. RESULTS: A total of 96 participants were randomized: 31 to group 1, 31 to group 2, and 32 to group 3. There were 2 postrandomization exclusions. In the multivariate analysis, we found no evidence that the intervention had an effect on VR performance as represented by 6 calculated subscores of accuracy, time, and path length for (1) camera manipulation and (2) hand-eye coordination combined with 2-handed maneuvers (p = 0.795). Neither the comparison of the average of the intervention groups (groups 1 and 2) vs control (group 3) nor the pairwise comparisons revealed any significant differences in VR performance, in either the multivariate or the univariate analyses. VR performance improved with increasing performance score in the cognitive exercise warm-up (iPad 3D puzzle) for accuracy, time, and path length in the camera navigation task. CONCLUSIONS: We were unable to show an effect of the 2 tested warm-up strategies on VR performance in laypersons. We are currently designing a follow-up study that includes surgeons rather than laypersons and uses a longer warm-up exercise more closely related to the final task.
Abstract:
BACKGROUND: The rotator cuff muscles are the main stabilizers of the glenohumeral joint. Dysfunction of the subscapularis muscle has been reported after total shoulder arthroplasty using anterior approaches. In the present paper we tested the hypothesis that a deficient subscapularis following total shoulder arthroplasty can induce joint instability. METHODS: To test this hypothesis we developed an EMG-driven musculoskeletal model of the glenohumeral joint. The model was based on an algorithm that minimizes the difference between measured and predicted muscular activities while satisfying the mechanical equilibrium of the glenohumeral joint. A movement of abduction in the scapular plane was simulated. We compared a normal and a deficient subscapularis. Muscle forces, joint force, contact pattern, and humeral head translation were evaluated. FINDINGS: To satisfy the mechanical equilibrium, a deficient subscapularis induced a decrease in the force of the infraspinatus muscle. This force decrease was balanced by an increase in the forces of the supraspinatus and middle deltoid. As a consequence, the deficient subscapularis induced an upward migration of the humeral head, an eccentric contact pattern, and higher stress within the cement. INTERPRETATION: These results confirm the importance of the subscapularis for the long-term stability of total shoulder arthroplasty.
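The inverse-dynamics idea described above, minimizing the gap between measured and predicted muscle activities subject to joint equilibrium, can be illustrated as a small constrained optimization. The Python sketch below is a toy version, not the paper's model: the four "muscles", moment arms, PCSA values, and the single abduction-moment constraint are all invented for illustration.

```python
# Toy load-sharing problem: choose muscle activations a in [0,1] that stay
# close to EMG-derived targets while the muscle moments balance an
# external abduction moment. All numbers are illustrative placeholders.
import numpy as np
from scipy.optimize import minimize

moment_arms = np.array([0.020, 0.025, 0.030, 0.015])  # m (hypothetical)
pcsa = np.array([8.0, 14.0, 6.0, 12.0])               # cm^2 (hypothetical)
f_max = pcsa * 60.0                                   # N, assuming 60 N/cm^2 max stress
emg_target = np.array([0.3, 0.5, 0.2, 0.4])           # measured activations (0..1)
required_moment = 25.0                                # N*m, external load (hypothetical)

def objective(a):
    # squared mismatch between predicted and measured activations
    return np.sum((a - emg_target) ** 2)

equilibrium = {"type": "eq",
               "fun": lambda a: np.dot(moment_arms * f_max, a) - required_moment}

res = minimize(objective, emg_target, bounds=[(0.0, 1.0)] * 4,
               constraints=[equilibrium])
print("activations:", np.round(res.x, 3))
print("muscle forces (N):", np.round(res.x * f_max, 1))
```

Removing one muscle from the problem (a deficient subscapularis) forces the optimizer to redistribute load among the remaining ones, which is the mechanism the FINDINGS section describes.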
Abstract:
We have studied the effects of rapid thermal annealing at 1300 °C on GaN epilayers grown on AlN buffered Si(111) and on sapphire substrates. After annealing, the epilayers grown on Si display visible alterations with craterlike morphology scattered over the surface. The annealed GaN/Si layers were characterized by a range of experimental techniques: scanning electron microscopy, optical confocal imaging, energy dispersive x-ray microanalysis, Raman scattering, and cathodoluminescence. A substantial Si migration to the GaN epilayer was observed in the crater regions, where decomposition of GaN and formation of Si3N4 crystallites as well as metallic Ga droplets and Si nanocrystals have occurred. The average diameter of the Si nanocrystals was estimated from Raman scattering to be around 3 nm. Such annealing effects, which are not observed in GaN grown on sapphire, are a significant issue for applications of GaN grown on Si(111) substrates when subsequent high-temperature processing is required.
Abstract:
The research considers the problem of spatial data classification using machine learning algorithms: probabilistic neural networks (PNN) and support vector machines (SVM). A simple k-nearest neighbor algorithm is considered as a benchmark model. PNN is a neural network reformulation of well-known nonparametric principles of probability density modeling, using a kernel density estimator together with Bayesian optimal or maximum a posteriori decision rules. PNN is well suited to problems where not only predictions but also quantification of accuracy and integration of prior information are necessary. An important property of PNNs is that they can easily be used in decision support systems dealing with problems of automatic classification. The support vector machine is an implementation of the principles of statistical learning theory for classification tasks. Recently, SVMs have been successfully applied to a range of environmental problems: classification of soil types and hydro-geological units, optimization of monitoring networks, and susceptibility mapping of natural hazards. In the present paper both simulated and real data case studies (low and high dimensional) are considered. The main attention is paid to the detection and learning of spatial patterns by the applied algorithms.
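As a rough illustration of the model family above, the following Python sketch benchmarks an RBF-kernel SVM against the simple k-NN baseline on synthetic 2-D data, and adds a minimal PNN-style classifier (per-class kernel density estimates combined through Bayes' rule). The dataset and all hyperparameters are placeholders, not those of the paper's case studies.

```python
# SVM vs. k-NN benchmark plus a minimal PNN-style (Parzen/Bayes) classifier
# on synthetic 2-D data. Data and hyperparameters are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier, KernelDensity
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)   # benchmark model
svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)

# PNN in the sense above: one kernel density estimate per class, combined
# with class priors via Bayes' rule (maximum a posteriori decision).
classes = np.unique(y_tr)
kdes = {c: KernelDensity(bandwidth=0.5).fit(X_tr[y_tr == c]) for c in classes}
log_post = np.column_stack([kdes[c].score_samples(X_te) + np.log(np.mean(y_tr == c))
                            for c in classes])
pnn_pred = classes[np.argmax(log_post, axis=1)]

print("k-NN accuracy:", knn.score(X_te, y_te))
print("SVM  accuracy:", svm.score(X_te, y_te))
print("PNN  accuracy:", np.mean(pnn_pred == y_te))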
Abstract:
We consider stochastic partial differential equations with multiplicative noise. We derive an algorithm for the computer simulation of these equations. The algorithm is applied to study domain growth of a model with a conserved order parameter. The numerical results corroborate previous analytical predictions obtained by linear analysis.
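For readers unfamiliar with the setting, the fragment below sketches the kind of equation involved: a naive Euler-Maruyama step for a 1-D stochastic PDE with multiplicative noise, du = D u_xx dt + g(u) dW. It is only a plausible baseline under invented parameters; the paper derives a more careful algorithm, and the conserved-order-parameter model studied there is not reproduced here.

```python
# Naive Euler-Maruyama integration of a 1-D stochastic PDE with
# multiplicative noise g(u) = 0.1*u; periodic boundaries. Illustrative
# parameters only; NOT the paper's algorithm.
import numpy as np

L, N, T, dt, D = 1.0, 128, 0.1, 1e-5, 1.0
dx = L / N
u = 1.0 + 0.01 * np.random.randn(N)        # initial condition
for _ in range(int(T / dt)):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2  # periodic Laplacian
    noise = np.random.randn(N) * np.sqrt(dt / dx)           # space-time white noise
    u = u + D * lap * dt + 0.1 * u * noise                  # multiplicative coupling
print("mean field:", u.mean())
```

Note that dt is chosen so that D*dt/dx^2 < 0.5, keeping the explicit diffusion step stable.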
Abstract:
PURPOSE: To determine the lower limit of dose reduction with hybrid and fully iterative reconstruction algorithms in detection of endoleaks and in-stent thrombus of thoracic aorta with computed tomographic (CT) angiography by applying protocols with different tube energies and automated tube current modulation. MATERIALS AND METHODS: The calcification insert of an anthropomorphic cardiac phantom was replaced with an aortic aneurysm model containing a stent, simulated endoleaks, and an intraluminal thrombus. CT was performed at tube energies of 120, 100, and 80 kVp with incrementally increasing noise indexes (NIs) of 16, 25, 34, 43, 52, 61, and 70 and a 2.5-mm section thickness. NI directly controls radiation exposure; a higher NI allows for greater image noise and decreases radiation. Images were reconstructed with filtered back projection (FBP) and hybrid and fully iterative algorithms. Five radiologists independently analyzed lesion conspicuity to assess sensitivity and specificity. Mean attenuation (in Hounsfield units) and standard deviation were measured in the aorta to calculate signal-to-noise ratio (SNR). Attenuation and SNR of different protocols and algorithms were analyzed with analysis of variance or Welch test depending on data distribution. RESULTS: Both sensitivity and specificity were 100% for simulated lesions on images with 2.5-mm section thickness and an NI of 25 (3.45 mGy), 34 (1.83 mGy), or 43 (1.16 mGy) at 120 kVp; an NI of 34 (1.98 mGy), 43 (1.23 mGy), or 61 (0.61 mGy) at 100 kVp; and an NI of 43 (1.46 mGy) or 70 (0.54 mGy) at 80 kVp. SNR values showed similar results. With the fully iterative algorithm, mean attenuation of the aorta decreased significantly in reduced-dose protocols in comparison with control protocols at 100 kVp (311 HU at 16 NI vs 290 HU at 70 NI, P ≤ .0011) and 80 kVp (400 HU at 16 NI vs 369 HU at 70 NI, P ≤ .0007). CONCLUSION: Endoleaks and in-stent thrombus of thoracic aorta were detectable down to 1.46 mGy (80 kVp) with FBP, 1.23 mGy (100 kVp) with the hybrid algorithm, and 0.54 mGy (80 kVp) with the fully iterative algorithm.
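The SNR figure of merit used above is simply the mean ROI attenuation divided by its standard deviation; a minimal sketch with simulated ROI samples (the HU values below are illustrative, not the study's measurements):

```python
# SNR from a region of interest: mean attenuation (HU) over its standard
# deviation (image noise). ROI values are simulated for illustration.
import numpy as np

roi = np.random.normal(loc=311.0, scale=18.0, size=500)  # simulated ROI, HU
snr = roi.mean() / roi.std(ddof=1)
print(f"SNR = {snr:.1f}")
```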
Abstract:
Animal dispersal in a fragmented landscape depends on the complex interaction between landscape structure and animal behavior. To better understand how individuals disperse, it is important to explicitly represent the properties of organisms and the landscape in which they move. A common approach to modelling dispersal is to represent the landscape as a grid of equal-sized cells and then simulate individual movement as a correlated random walk. This approach imposes an a priori scale of resolution, which limits the representation of landscape features and of different dispersal abilities. We develop a vector-based landscape model coupled with an object-oriented model for animal dispersal. In this spatially explicit dispersal model, landscape features are defined based on their geographic and thematic properties, and dispersal is modelled through consideration of an organism's behavior, movement rules, and searching strategies (such as visual cues). We present the model's underlying concepts and its ability to adequately represent landscape features and to simulate dispersal for different dispersal abilities. We demonstrate the potential of the model by simulating two virtual species in a real Swiss landscape. This illustrates the model's ability to simulate complex dispersal processes and to provide information about dispersal, such as colonization probability and the spatial distribution of the organism's path.
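The grid-based baseline the authors contrast with, a correlated random walk in which each new heading is a small random deviation from the previous one, is easy to sketch; the step length and turning-angle spread below are arbitrary illustration values.

```python
# Correlated random walk: persistence in direction comes from drawing each
# turning angle from a narrow distribution around zero. Parameters are
# illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
n_steps, step_len, turn_sd = 200, 1.0, 0.4    # turn_sd in radians
heading = rng.uniform(0, 2 * np.pi)
pos = np.zeros((n_steps + 1, 2))
for t in range(n_steps):
    heading += rng.normal(0.0, turn_sd)       # correlation with previous heading
    pos[t + 1] = pos[t] + step_len * np.array([np.cos(heading), np.sin(heading)])
print("net displacement:", np.linalg.norm(pos[-1] - pos[0]))
```

Smaller turn_sd gives straighter, more persistent paths; the vector-based model above replaces this fixed-resolution mechanism with movement rules defined on landscape features directly.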
Abstract:
We apply majorization theory to study the quantum algorithms known so far and find that there is a majorization principle underlying the way they operate. Grover's algorithm is a neat instance of this principle, where majorization works step by step until the optimal target state is found. Extensions of this situation are also found in algorithms based on quantum adiabatic evolution and in the family of quantum phase-estimation algorithms, including Shor's algorithm. We state that in quantum algorithms the time arrow is a majorization arrow.
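For reference, the majorization relation invoked above can be stated for the probability distributions over measurement outcomes produced at successive steps of an algorithm; a standard formulation:

```latex
% x \prec y ("y majorizes x") for d-dimensional probability vectors,
% with x^\downarrow denoting the components of x sorted in decreasing order:
\[
  x \prec y \;\Longleftrightarrow\;
  \sum_{i=1}^{k} x_i^{\downarrow} \le \sum_{i=1}^{k} y_i^{\downarrow}
  \quad (k = 1, \dots, d-1),
  \qquad
  \sum_{i=1}^{d} x_i^{\downarrow} = \sum_{i=1}^{d} y_i^{\downarrow} = 1 .
\]
```

The abstract's claim is then that the outcome distribution after each step majorizes the one before it, so probability steadily concentrates toward the target state.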
Abstract:
The aim of this study was to evaluate the forensic protocol recently developed by Qiagen for the QIAsymphony automated DNA extraction platform. Samples containing low amounts of DNA were specifically considered, since they represent the majority of samples processed in our laboratory. The analysis of simulated blood and saliva traces showed that the highest DNA yields were obtained with the maximal elution volume available for the forensic protocol, that is, 200 µl. The resulting DNA extracts were too dilute for successful DNA profiling and required a concentration step. This additional step is time consuming and potentially increases inversion and contamination risks. The 200 µl DNA extracts were concentrated to 25 µl, and the DNA recovery was estimated with real-time PCR as well as from the percentage of SGM Plus alleles detected. Results using our manual protocol, based on the QIAamp DNA mini kit, and the automated protocol were comparable. Further tests will be conducted to determine more precisely DNA recovery, contamination risk, and PCR inhibitor removal once a definitive procedure allowing the concentration of DNA extracts from low-yield samples becomes available for the QIAsymphony.
Abstract:
PURPOSE: To assess how different diagnostic decision aids perform in terms of sensitivity, specificity, and harm. METHODS: Four diagnostic decision aids were compared, as applied to a simulated patient population: a findings-based algorithm following a linear or branched pathway, a serial threshold-based strategy, and a parallel threshold-based strategy. Headache in immune-compromised HIV patients in a developing country was used as an example. Diagnoses included cryptococcal meningitis, cerebral toxoplasmosis, tuberculous meningitis, bacterial meningitis, and malaria. Data were derived from literature and expert opinion. Diagnostic strategies' validity was assessed in terms of sensitivity, specificity, and harm related to mortality and morbidity. Sensitivity analyses and Monte Carlo simulation were performed. RESULTS: The parallel threshold-based approach led to a sensitivity of 92% and a specificity of 65%. Sensitivities of the serial threshold-based approach and the branched and linear algorithms were 47%, 47%, and 74%, respectively, and the specificities were 85%, 95%, and 96%. The parallel threshold-based approach resulted in the least harm, with the serial threshold-based approach, the branched algorithm, and the linear algorithm being associated with 1.56-, 1.44-, and 1.17-times higher harm, respectively. Findings were corroborated by sensitivity and Monte Carlo analyses. CONCLUSION: A threshold-based diagnostic approach is designed to find the optimal trade-off that minimizes expected harm, enhancing sensitivity and lowering specificity when appropriate, as in the given example of a symptom pointing to several life-threatening diseases. Findings-based algorithms, in contrast, solely consider clinical observations. A parallel workup, as opposed to a serial workup, additionally allows for all potential diseases to be reviewed, further reducing false negatives. The parallel threshold-based approach might, however, not be as good in other disease settings.
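A toy Monte Carlo version of the parallel threshold-based workup can make the harm comparison concrete. Everything numeric below (prevalences, harms, the crude post-test probability model) is invented for illustration and is not the study's data; only the treatment threshold harm_treat / (harm_treat + harm_miss) is the standard decision-analytic form.

```python
# Parallel threshold-based workup, sketched: every candidate disease is
# treated whenever its post-test probability exceeds its own treatment
# threshold. All numbers are invented placeholders.
import numpy as np

rng = np.random.default_rng(1)
prevalence = np.array([0.10, 0.15, 0.08])    # 3 hypothetical diseases
harm_miss = np.array([0.90, 0.70, 0.80])     # harm of an untreated case
harm_treat = np.array([0.05, 0.03, 0.10])    # harm of (possibly unneeded) treatment
threshold = harm_treat / (harm_treat + harm_miss)  # classic treatment threshold

n = 100_000
true_state = rng.random((n, 3)) < prevalence
# crude post-test probabilities: prior blurred by noise, raised by true disease
post = np.clip(prevalence + rng.normal(0, 0.05, (n, 3)) + 0.5 * true_state, 0, 1)
treat = post > threshold                      # parallel: each disease checked independently

harm = (np.where(true_state & ~treat, harm_miss, 0.0).sum()
        + np.where(treat, harm_treat, 0.0).sum())
print("expected harm per patient:", harm / n)
```

A serial variant would stop at the first disease crossing its threshold, which is what drives the false-negative difference reported above.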
Abstract:
Soil infiltration is a key link in the natural water cycle. Studies on soil permeability support water resources assessment and estimation, runoff regulation and management, soil erosion modeling, and nonpoint and point source pollution of farmland, among other applications. The unequal influences of rainfall duration, rainfall intensity, antecedent soil moisture, vegetation cover, vegetation type, and slope gradient on cumulative soil infiltration were studied under simulated rainfall on different underlying surfaces. We established a six-factor model of cumulative soil infiltration using an improved back propagation (BP) artificial neural network algorithm with a momentum term and a self-adjusting learning rate. Compared to a multiple nonlinear regression method, the improved BP algorithm was more stable and more accurate. Based on the improved BP model, the sensitivity index of each of the six factors with respect to cumulative soil infiltration was investigated. In addition, the grey relational analysis method was used to study the grey correlations between each of these six factors and cumulative soil infiltration. The results of the two methods were very similar: rainfall duration was the most influential factor, followed by vegetation cover, vegetation type, rainfall intensity, and antecedent soil moisture, while the effect of slope gradient on cumulative soil infiltration was not significant.
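A compact sketch of the two named modifications, a momentum term and a self-adjusting ("bold driver"-style) learning rate, on a toy one-hidden-layer network. The architecture, constants, and data are invented; this is not the paper's six-factor model.

```python
# Toy backpropagation with a momentum term and a self-adjusting learning
# rate: grow the rate while the loss falls, cut it sharply on a rise.
# One hidden layer; sizes, constants and data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 6))                      # six input factors
y = (X @ rng.random(6))[:, None]              # toy regression target
W1, b1 = rng.normal(0, 0.5, (6, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
lr, mu, prev_loss = 0.05, 0.9, np.inf
vel = [np.zeros_like(p) for p in (W1, b1, W2, b2)]

for epoch in range(300):
    h = np.tanh(X @ W1 + b1)                  # forward pass
    err = (h @ W2 + b2) - y
    loss = np.mean(err ** 2)
    lr = min(lr * 1.05, 0.2) if loss < prev_loss else lr * 0.7  # self-adjusting rate
    prev_loss = loss
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)   # backward pass
    dh = (err @ W2.T) * (1.0 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    for i, (p, g) in enumerate(zip((W1, b1, W2, b2), (gW1, gb1, gW2, gb2))):
        vel[i] = mu * vel[i] - lr * g         # momentum term
        p += vel[i]

print("final MSE:", round(loss, 5))
```

Both tweaks address the same weakness of plain BP: momentum damps oscillation across steep directions, and the adaptive rate avoids hand-tuning a single fixed step size.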
Abstract:
A systematic assessment of global neural network connectivity through direct electrophysiological assays has remained technically infeasible, even in simpler systems like dissociated neuronal cultures. We introduce an improved algorithmic approach based on Transfer Entropy to reconstruct structural connectivity from network activity monitored through calcium imaging. In this study we focus on the inference of excitatory synaptic links. Being grounded in information theory, our method requires no prior assumptions on the statistics of neuronal firing or of neuronal connections. The performance of our algorithm is benchmarked on surrogate time series of calcium fluorescence generated by the simulated dynamics of a network with known ground-truth topology. We find that the functional network topology revealed by Transfer Entropy depends qualitatively on the time-dependent dynamic state of the network (bursting or non-bursting). By conditioning on the global mean activity, we improve the performance of our method; this allows us to restrict the analysis to specific dynamical regimes of the network in which the inferred functional connectivity is shaped by monosynaptic excitatory connections rather than by collective synchrony. Our method can discriminate between actual causal influences between neurons and spurious non-causal correlations due to light-scattering artifacts, which inherently affect the quality of fluorescence imaging. Compared to other reconstruction strategies, such as cross-correlation or Granger causality methods, our method based on improved Transfer Entropy is remarkably more accurate. In particular, it provides a good estimate of the excitatory network clustering coefficient, allowing discrimination between weakly and strongly clustered topologies. Finally, we demonstrate the applicability of our method to analyses of real recordings of in vitro disinhibited cortical cultures, where we suggest that excitatory connections are characterized by an elevated level of clustering compared to a random graph (although not extreme) and can be markedly non-local.
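The core quantity is standard transfer entropy; a minimal histogram-based estimator for a pair of discretised signals (history length 1) is sketched below. The paper's method adds, among other things, calcium-fluorescence surrogates and the conditioning on global mean activity, which are omitted here.

```python
# Histogram-based transfer entropy TE(x -> y) with history length 1:
# TE = I(y_t ; x_{t-1} | y_{t-1}). Binary toy signals for illustration.
import numpy as np

def transfer_entropy(x, y, bins=2):
    yt, yp, xp = y[1:], y[:-1], x[:-1]
    joint, _ = np.histogramdd(np.column_stack([yt, yp, xp]), bins=bins)
    p_ypx = joint.sum(axis=0)          # counts for (y_{t-1}, x_{t-1})
    p_yyp = joint.sum(axis=2)          # counts for (y_t, y_{t-1})
    p_yp = joint.sum(axis=(0, 2))      # counts for y_{t-1}
    joint, p_ypx = joint / joint.sum(), p_ypx / p_ypx.sum()
    p_yyp, p_yp = p_yyp / p_yyp.sum(), p_yp / p_yp.sum()
    te = 0.0
    for i in range(bins):              # y_t
        for j in range(bins):          # y_{t-1}
            for k in range(bins):      # x_{t-1}
                if joint[i, j, k] > 0:
                    te += joint[i, j, k] * np.log2(
                        joint[i, j, k] * p_yp[j] / (p_yyp[i, j] * p_ypx[j, k]))
    return te

rng = np.random.default_rng(0)
x = (rng.random(5000) < 0.2).astype(float)
y = np.roll(x, 1)                      # y follows x with a one-step delay
flip = rng.random(5000) < 0.1          # plus some noise
y[flip] = 1 - y[flip]
print("TE x->y:", round(transfer_entropy(x, y), 3))
print("TE y->x:", round(transfer_entropy(y, x), 3))
```

On this toy pair, TE in the causal direction (x to y) comes out clearly larger than in the reverse direction, which is the asymmetry the reconstruction exploits.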
Abstract:
We develop an algorithm to simulate a Gaussian stochastic process that is non-δ-correlated in both space and time coordinates. The colored noise obeys a linear reaction-diffusion Langevin equation with Gaussian white noise. This equation is exactly simulated in a discrete Fourier space.
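The Fourier-space trick the abstract refers to, generating correlated Gaussian noise exactly by scaling independent white-noise modes with the square root of a target spectrum, is easy to sketch in one spatial dimension; the Lorentzian spectrum below is an illustrative choice, not the paper's reaction-diffusion spectrum, and the time dimension is left out.

```python
# Spatially colored Gaussian noise built exactly in discrete Fourier
# space: white modes scaled by sqrt(S(k)), then transformed back.
# Spectrum and parameters are illustrative.
import numpy as np

N, dx, lam = 256, 1.0, 5.0                   # grid size, spacing, correlation length
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
spectrum = 1.0 / (1.0 + (lam * k) ** 2)      # target power spectrum S(k)
white = np.fft.fft(np.random.randn(N))       # white noise in Fourier space
eta = np.fft.ifft(np.sqrt(spectrum) * white).real  # colored noise with spectrum S(k)
print("variance:", eta.var())
```

Because each Fourier mode is an independent Gaussian, the imposed correlations are exact rather than approximate, which is the sense in which such schemes "exactly simulate" the equation.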
Abstract:
We herein present a preliminary practical algorithm for evaluating complementary and alternative medicine (CAM) for children, which relies on basic bioethical principles and considers the influence of CAM on global child healthcare. CAM is currently involved in almost all sectors of pediatric care and frequently represents a challenge to the pediatrician. The aim of this article is to provide a decision-making tool to assist the physician, especially as it remains difficult to keep up to date with the latest developments in the field. The reasonable application of our algorithm, together with common sense, should enable the pediatrician to decide whether pediatric (P)-CAM represents potential harm to the patient, and allow ethically sound counseling. In conclusion, we propose a pragmatic algorithm designed to evaluate P-CAM, briefly explain the underlying rationale, and give a concrete clinical example.