999 results for PDP-II (Computer)
Abstract:
BACKGROUND: In women with chronic anovulation, the choice of the FSH starting dose and the modality of subsequent dose adjustments are critical in controlling the risk of overstimulation. The aim of this prospective randomized study was to assess the efficacy and safety of a decremental FSH dose regimen applied once the leading follicle was 10-13 mm in diameter in women treated for WHO Group II anovulation according to a chronic low-dose (CLD; 75 IU FSH for 14 days with 37.5 IU increments) step-up protocol. METHODS: Two hundred and nine subfertile women were treated with recombinant human FSH (r-hFSH; Gonal-f) for ovulation induction according to a CLD step-up regimen. When the leading follicle reached a diameter of 10-13 mm, 158 participants were randomized by means of a computer-generated list to receive either the same FSH dose required to achieve the threshold for follicular development (CLD regimen) or half of this FSH dose [sequential (SQ) regimen]. hCG was administered only if no more than three follicles ≥16 mm in diameter were present and/or serum estradiol (E2) values were <1200 pg/ml. The primary outcome measure was the number of follicles ≥16 mm at the time of hCG administration. RESULTS: Clinical characteristics and ovarian parameters at the time of randomization were similar in the two groups. Both the CLD and SQ protocols achieved similar follicular growth in terms of the total number of follicles and of medium-sized or mature follicles (≥16 mm: 1.5 ± 0.9 versus 1.4 ± 0.7, respectively). Furthermore, serum E2 levels were equivalent in the two groups at the time of hCG administration (441 ± 360 versus 425 ± 480 pg/ml for the CLD and SQ protocols, respectively). The rate of mono-follicular development was identical, as was the percentage of patients who ovulated and achieved pregnancy.
CONCLUSIONS: The results show that the CLD step-up regimen for FSH administration is efficacious and safe for promoting mono-follicular ovulation in women with WHO Group II anovulation. This study confirms that maintaining the same FSH starting dose for 14 days before increasing the dose in the step-up regimen is critical to adequately control the risk of over-response. Strict application of the CLD regimen should be recommended in women with WHO Group II anovulation.
Abstract:
A laboratory study was performed to assess the influence on marginal adaptation of beveling the cavity margins and of applying ultrasound during setting and initial light curing. After minimal access cavities had been prepared with an 80 µm diamond bur, 80 box-only Class II cavities were prepared mesially and distally in 40 extracted human molars using four different oscillating diamond-coated instruments: (A) a U-shaped PCS insert as the non-beveled control (EMS), (B) Bevelshape (Intensiv), (C) SonicSys (KaVo) and (D) SuperPrep (KaVo). In groups B-D, the time taken for additional bevel finishing was measured. The cavities were filled with a hybrid composite material in three increments. Ultrasound was also applied to one cavity per tooth before and during initial light curing (10 seconds). The specimens were subjected to thermomechanical stress in a computer-controlled masticator device. Marginal quality was assessed by scanning electron microscopy and the results were compared statistically. The additional time required for finishing was B > D > C (p ≤ 0.05). In all groups, thermomechanical loading resulted in a decrease in marginal quality. Beveling resulted in higher values for "continuous" margins compared with the unbeveled controls. The latter showed better marginal quality at the axial walls when ultrasound was used. Beveling seems essential for good marginal adaptation but requires more preparation time. The use of ultrasonic vibrations may improve the marginal quality of unbeveled fillings and warrants further investigation.
Abstract:
This book will serve as a foundation for a variety of useful applications of graph theory to computer vision, pattern recognition, and related areas. It covers a representative set of novel graph-theoretic methods for complex computer vision and pattern recognition tasks. The first part of the book presents the application of graph theory to low-level processing of digital images, such as a new method for partitioning a given image into a hierarchy of homogeneous areas using graph pyramids, and a study of the relationship between graph theory and digital topology. Part II presents graph-theoretic learning algorithms for high-level computer vision and pattern recognition applications, including a survey of graph-based methodologies for pattern recognition and computer vision, a presentation of a series of computationally efficient algorithms for testing graph isomorphism and related graph matching tasks in pattern recognition, and a new graph distance measure for solving graph matching problems. Finally, Part III provides detailed descriptions of several applications of graph-based methods to real-world pattern recognition tasks. It includes a critical review of the main graph-based and structural methods for fingerprint classification, a new method to visualize time series of graphs, and potential applications in computer network monitoring and abnormal event detection.
Abstract:
Neural Networks as Cybernetic Systems is a textbook that combines classical systems theory with artificial neural network technology.
Abstract:
Objective In order to benefit from the obvious advantages of minimally invasive liver surgery, there is a need to develop high-precision tools for intraoperative anatomical orientation, navigation and safety control. In a pilot study we adapted a newly developed system for computer-assisted liver surgery (CALS), in terms of accuracy and technical feasibility, to the specific requirements of laparoscopy. Here, we present practical aspects related to laparoscopic computer-assisted liver surgery (LCALS). Methods Our video relates to a patient presenting with 3 colorectal liver metastases in Seg. II, III and IVa who was selected in an appropriate oncological setting for LCALS using the CAScination system combined with 3D MEVIS reconstruction. After minimal laparoscopic mobilization of the liver, a 4-landmark registration method was applied to enable navigation. Placement of microwave needles was performed using the targeting module of the navigation system and correct needle positioning was confirmed by intraoperative sonography. Ablation of each lesion was carried out by application of microwave energy at 100 Watts for 1 minute. Results To acquire an accurate (less than 0.5 cm) registration, 4 registration cycles were necessary. In total, seven minutes were required to accomplish precise registration. Successful ablation with complete response in all treated areas was assessed by intraoperative sonography and confirmed by postoperative CT scan. Conclusions This teaching video demonstrates the theoretical and practical key points of LCALS, with a special emphasis on preoperative planning, intraoperative registration and accuracy testing by laparoscopic methodology. In contrast to mere ultrasound-guided ablation of liver lesions, LCALS offers three-dimensional targeting and higher safety control. It is currently also in routine use to treat vanishing lesions and other difficult-to-target focal lesions within the liver.
Abstract:
A model of Drosophila circadian rhythm generation was developed to represent feedback loops based on transcriptional regulation of per, Clk (dclock), Pdp-1, and vri (vrille). The model postulates that histone acetylation kinetics make transcriptional activation a nonlinear function of [CLK]. Such a nonlinearity is essential to simulate robust circadian oscillations of transcription in our model and in previous models. Simulations suggest that two positive feedback loops involving Clk are not essential for oscillations, because oscillations of [PER] were preserved when Clk, vri, or Pdp-1 expression was fixed. However, eliminating positive feedback by fixing vri expression altered the oscillation period. Eliminating the negative feedback loop in which PER represses per expression abolished oscillations. Simulations of per or Clk null mutations, of per overexpression, and of vri, Clk, or Pdp-1 heterozygous null mutations altered model behavior in ways similar to experimental data. The model simulated a photic phase-response curve resembling experimental curves, and oscillations entrained to simulated light-dark cycles. Temperature compensation of oscillation period could be simulated if temperature elevation slowed PER nuclear entry or PER phosphorylation. The model makes experimental predictions, some of which could be tested in transgenic Drosophila.
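The central claim above — that a steep nonlinearity in transcriptional activation, combined with negative feedback, is what sustains the oscillations — can be illustrated with a generic Goodwin-type loop. The sketch below is a minimal stand-in, not the paper's per/Clk/vri/Pdp-1 model: three stages (mRNA, protein, nuclear repressor) with a Hill-type repression term whose exponent plays the role of the histone-acetylation nonlinearity; all rate constants are illustrative assumptions.

```python
# Minimal Goodwin-type negative-feedback loop (mRNA -> protein ->
# nuclear repressor -> repression of transcription). This is a
# hypothetical stand-in for the paper's model; all rates are illustrative.

def simulate(n_hill, steps=200000, dt=0.01):
    m, p, r = 0.1, 0.1, 0.1          # mRNA, protein, nuclear repressor
    trace = []
    for _ in range(steps):
        dm = 1.0 / (1.0 + r ** n_hill) - 0.1 * m   # Hill-repressed transcription
        dp = m - 0.1 * p                            # translation and decay
        dr = p - 0.1 * r                            # nuclear entry and decay
        m, p, r = m + dm * dt, p + dp * dt, r + dr * dt
        trace.append(m)
    return trace

def oscillates(trace, tail=50000, tol=0.02):
    # Sustained oscillation: the late part of the trace still varies.
    late = trace[-tail:]
    return (max(late) - min(late)) > tol
```

With a steep Hill exponent (n = 10) the loop settles into sustained oscillations, while a shallow one (n = 2) decays to a steady state, mirroring the model's requirement that activation be a strongly nonlinear function of the activator concentration.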
Abstract:
We partially solve a long-standing problem in the proof theory of explicit mathematics, and in proof theory in general. Namely, we give a lower bound for Feferman's system T0 of explicit mathematics (but only when formulated on classical logic) with a concrete interpretation of the subsystem Σ¹₂-AC + (BI) of second-order arithmetic inside T0. Whereas a lower bound proof in the sense of proof-theoretic reducibility or of ordinal analysis was already given in the 1980s, the lower bound in the sense of interpretability we give here is new. We apply the new interpretation method developed by the author and Zumbrunnen (2015), which can be seen as the third kind of model construction method for classical theories, after Cohen's forcing and Krivine's classical realizability. It gives us an interpretation between classical theories, by composing interpretations between intuitionistic theories.
Abstract:
Background: It is still unclear whether there are differences between using electronic key feature problems (KFPs) or electronic case-based multiple choice questions (cbMCQs) for the assessment of clinical decision making. Summary of Work: Fifth-year medical students completed clerkships, each of which ended with a summative exam. Assessment of knowledge per exam was done by 6-9 KFPs, 9-20 cbMCQs and 9-28 MC questions. Each KFP consisted of a case vignette and three key features (KF) using "long menu" as the question format. We sought students' perceptions of the KFPs and cbMCQs in focus groups (n of students=39). Furthermore, statistical data of 11 exams (n of students=377) concerning the KFPs and (cb)MCQs were compared. Summary of Results: The analysis of the focus groups resulted in four themes reflecting students' perceptions of KFPs and their comparison with (cb)MCQs: KFPs were perceived as (i) more realistic, (ii) more difficult, and (iii) more motivating for the intense study of clinical reasoning than (cb)MCQs, and (iv) showed an overall good acceptance when some preconditions are taken into account. The statistical analysis revealed no difference in difficulty; however, KFPs showed a higher discrimination and reliability (G-coefficient), even when corrected for testing times. Correlation of the different exam parts was intermediate. Conclusions: Students perceived the KFPs as more motivating for the study of clinical reasoning. Statistically, KFPs showed a higher discrimination and higher reliability than cbMCQs. Take-home messages: Including KFPs with long menu questions in summative clerkship exams seems to offer positive educational effects.
Abstract:
García et al. present a class of column generation (CG) algorithms for nonlinear programs. Its main motivation from a theoretical viewpoint is that, under some circumstances, finite convergence can be achieved, in much the same way as for the classic simplicial decomposition method; the main practical motivation is that within the class there are certain nonlinear column generation problems that can accelerate the convergence of a solution approach which generates a sequence of feasible points. This algorithm can, for example, accelerate simplicial decomposition schemes by making the subproblems nonlinear. This paper complements the theoretical study on the asymptotic and finite convergence of these methods given in [1] with an experimental study focused on their computational efficiency. Three types of numerical experiments are conducted. The first group of test problems has been designed to study the parameters involved in these methods. The second group has been designed to investigate the role and the computation of the prolongation of the generated columns to the relative boundary. The last one has been designed to carry out a more complete investigation of the difference in computational efficiency between linear and nonlinear column generation approaches. In order to carry out this investigation, we consider two types of test problems: the first one is the nonlinear, capacitated single-commodity network flow problem, of which several large-scale instances with varied degrees of nonlinearity and total capacity are constructed and investigated, and the second one is a combined traffic assignment model.
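As a concrete reference point, the linear column generation scheme that the nonlinear variants are compared against can be sketched as a Frank-Wolfe loop: each iteration generates the extreme point ("column") minimizing the linearized objective, then takes an exact line-search step toward it. The problem below — minimizing a quadratic over the unit simplex — is an illustrative toy, not one of the paper's test problems.

```python
# Toy linear column generation (Frank-Wolfe) for min ||x - t||^2 over
# the unit simplex. The LP subproblem over the simplex is solved in
# closed form: its optimum is the vertex with the smallest gradient
# component. Problem data are illustrative assumptions.

def frank_wolfe(target, iters=10000):
    n = len(target)
    x = [1.0 / n] * n                              # start at the barycenter
    for _ in range(iters):
        grad = [2.0 * (x[i] - target[i]) for i in range(n)]
        j = min(range(n), key=grad.__getitem__)    # column: best vertex e_j
        d = [(1.0 if i == j else 0.0) - x[i] for i in range(n)]
        dd = sum(di * di for di in d)
        if dd == 0.0:
            break
        # Exact line search for the quadratic objective along d.
        gamma = max(0.0, min(1.0, -sum(g * di for g, di in zip(grad, d)) / (2.0 * dd)))
        x = [x[i] + gamma * d[i] for i in range(n)]
    return x

x_opt = frank_wolfe([0.5, 0.3, 0.2])   # target lies inside the simplex
```

Simplicial decomposition differs only in the master problem: instead of a single line-search step, it reoptimizes over the convex hull of all columns generated so far, which is what makes finite convergence possible.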
Abstract:
A technique for systematic peptide variation by a combination of rational and evolutionary approaches is presented. The design scheme consists of five consecutive steps: (i) identification of a “seed peptide” with a desired activity, (ii) generation of variants selected from a physicochemical space around the seed peptide, (iii) synthesis and testing of this biased library, (iv) modeling of a quantitative sequence-activity relationship by an artificial neural network, and (v) de novo design by a computer-based evolutionary search in sequence space using the trained neural network as the fitness function. This strategy was successfully applied to the identification of novel peptides that fully prevent the positive chronotropic effect of anti-β1-adrenoreceptor autoantibodies from the serum of patients with dilated cardiomyopathy. The seed peptide, comprising 10 residues, was derived by epitope mapping from an extracellular loop of human β1-adrenoreceptor. A set of 90 peptides was synthesized and tested to provide training data for neural network development. De novo design revealed peptides with desired activities that do not match the seed peptide sequence. These results demonstrate that computer-based evolutionary searches can generate novel peptides with substantial biological activity.
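Step (v), the evolutionary search in sequence space, can be sketched as a simple mutation-and-select loop. In the sketch below the trained neural network is replaced by a stand-in scoring function (a toy count of basic residues), and the seed sequence is hypothetical — both are assumptions made only to keep the example runnable, not the paper's data.

```python
import random

# Hedged sketch of an evolutionary search over 10-residue peptides,
# in the spirit of step (v). The trained network of the paper is
# replaced by a stand-in fitness function; the seed is hypothetical.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def fitness(peptide):
    # Stand-in for the network's predicted activity (toy: favor K, R, H).
    return float(sum(1 for aa in peptide if aa in "KRH"))

def mutate(peptide, rng):
    pos = rng.randrange(len(peptide))
    return peptide[:pos] + rng.choice(AMINO_ACIDS) + peptide[pos + 1:]

def evolve(seed, generations=200, offspring=20, rng=None):
    rng = rng or random.Random(0)
    best, best_fit = seed, fitness(seed)
    for _ in range(generations):
        for _ in range(offspring):
            child = mutate(best, rng)
            f = fitness(child)
            if f > best_fit:            # greedy hill climb in sequence space
                best, best_fit = child, f
    return best, best_fit

best, score = evolve("ADSGEGDFLA")      # hypothetical 10-residue seed
```

The design choice worth noting is that the fitness function is a learned surrogate, so candidate peptides can be scored without synthesis; only the final designs need to be made and tested in the laboratory.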
Abstract:
Type II DNA topoisomerases actively reduce the fractions of knotted and catenated circular DNA below thermodynamic equilibrium values. To explain this surprising finding, we designed a model in which topoisomerases introduce a sharp bend in DNA. Because the enzymes have a specific orientation relative to the bend, they act like Maxwell's demon, providing unidirectional strand passage. Quantitative analysis of the model by computer simulations proved that it can explain much of the experimental data. The required sharp DNA bend was demonstrated by the greatly increased cyclization of short DNA fragments upon topoisomerase binding and by direct visualization with electron microscopy.
Abstract:
This computer simulation is based on a model of the origin of life proposed by H. Kuhn and J. Waser, in which the evolution of short molecular strands is assumed to take place in a distinct, spatiotemporally structured environment. In their model, the prebiotic situation is strongly simplified to grasp essential features of the evolution of the genetic apparatus, without attempting to trace the historic path. With the tool of computer implementation, confined to principal aspects and focused on critical features of the model, a deeper understanding of the model's premises is achieved. Each generation consists of three steps: (i) construction of the devices (entities exposed to selection) presently available; (ii) selection; and (iii) multiplication of the isolated strands (R oligomers) by complementary copying, with occasional variation by copying mismatch. In the beginning, the devices are single strands with random sequences; later, increasingly complex aggregates of strands form devices, such as the hairpin-assembler device that develops in favorable cases. A monomers interlink by binding to the hairpin-assembler device, and a translation machinery, called the hairpin-assembler-enzyme device, emerges, which translates the sequence of R1 and R2 monomers in the assembler strand into the sequence of A1 and A2 monomers in the A oligomer, working as an enzyme.
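The three-step generation cycle can be sketched as a toy program. Everything below — the two-letter monomer alphabet standing in for R1/R2, the hairpin-folding score used as the selection criterion, the population sizes, and the 5% copying-error rate — is an illustrative assumption chosen to make the loop runnable, not the model's actual rules.

```python
import random

# Toy version of the generation cycle: (i) devices are the current
# strands, (ii) selection keeps the best folders, (iii) survivors are
# multiplied by complementary copying with occasional mismatch.
# Monomers '1' and '2' are a hypothetical stand-in for R1/R2.

COMPLEMENT = {"1": "2", "2": "1"}

def complementary_copy(strand, rng, error_rate=0.05):
    # Step (iii): complementary copying with occasional mismatch.
    return "".join(
        rng.choice("12") if rng.random() < error_rate else COMPLEMENT[m]
        for m in strand
    )

def hairpin_score(strand):
    # Illustrative selection criterion: how well the strand folds back
    # on itself (first half pairing with the reversed second half).
    half = len(strand) // 2
    return sum(1 for a, b in zip(strand[:half], strand[::-1][:half])
               if COMPLEMENT[a] == b)

def generation(population, rng, survivors=10):
    # Steps (i)+(ii): rank the current devices and keep the best.
    selected = sorted(population, key=hairpin_score, reverse=True)[:survivors]
    # Step (iii): multiply the survivors by complementary copying.
    return selected + [complementary_copy(s, rng) for s in selected]

rng = random.Random(1)
population = ["".join(rng.choice("12") for _ in range(12)) for _ in range(20)]
for _ in range(50):
    population = generation(population, rng)
```

Because a strand and its complementary copy fold back on themselves equally well under this score, selection plus complementary copying preserves good devices while copying mismatches supply the variation that selection acts on.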
Abstract:
We describe a procedure for the generation of chemically accurate computer-simulation models to study chemical reactions in the condensed phase. The process involves (i) the use of a coupled semiempirical quantum and classical molecular mechanics method to represent solutes and solvent, respectively; (ii) the optimization of semiempirical quantum mechanics (QM) parameters to produce a computationally efficient and chemically accurate QM model; (iii) the calibration of a quantum/classical microsolvation model using ab initio quantum theory; and (iv) the use of statistical mechanical principles and methods to simulate, on massively parallel computers, the thermodynamic properties of chemical reactions in aqueous solution. The utility of this process is demonstrated by the calculation of the enthalpy of reaction in vacuum and free energy change in aqueous solution for a proton transfer involving methanol, methoxide, imidazole, and imidazolium, which are functional groups involved with proton transfers in many biochemical systems. An optimized semiempirical QM model is produced, which results in the calculation of heats of formation of the above chemical species to within 1.0 kcal/mol (1 kcal = 4.18 kJ) of experimental values. The use of the calibrated QM and microsolvation QM/MM (molecular mechanics) models for the simulation of a proton transfer in aqueous solution gives a calculated free energy that is within 1.0 kcal/mol (12.2 calculated vs. 12.8 experimental) of a value estimated from experimental pKa values of the reacting species.
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.