987 results for simple loop


Relevance:

20.00%

Publisher:

Abstract:

Using the LAMP method, a highly specific and sensitive detection system for genetically modified soybean (Roundup Ready) was designed. In this system, a set of four primers was designed to target the exogenous 35S epsps gene. Target DNA was amplified and visualized on an agarose gel within 45 min under isothermal conditions at 65 °C. Without gel electrophoresis, the LAMP amplicon could also be visualized directly in the reaction tube by adding SYBR Green I for naked-eye inspection. The detection sensitivity of LAMP was 10-fold higher than that of the nested PCR established in our laboratory. Moreover, the LAMP method was much quicker, completing the analysis of the GM soybean in only 70 min, compared with 300 min for nested PCR. Compared with traditional PCR approaches, the LAMP procedure is faster and more sensitive, and there is no need for a special PCR machine or electrophoresis equipment. Hence, this method can be a very useful tool for GMO detection and is particularly convenient for fast screening.

Relevance:

20.00%

Publisher:

Abstract:

The origin and genus-level taxonomic status of the yak remain somewhat controversial. We sequenced the mitochondrial control region (D-loop) of domestic and wild yaks and used these sequences to construct phylogenetic trees relating the yak to species of Bos, Bison, Bubalus, and Syncerus. The results indicate that the mitochondrial D-loop region is as valuable as the Cyt b gene sequence for reconstructing the phylogeny of the Bovini. The phylogenetic relationships show that the extinct steppe bison and the extant American bison of the genus Bison first cluster into a monophyletic group, which then forms a monophyletic clade with the yak, indicating that the yak is most closely related to the steppe bison and American bison, sharing a most recent common ancestor with them, and is more distantly related to the other Asian species of Bos. Therefore, this study does not support placing the yak in a separate genus, Poephagus, and Bos and Bison should also be merged into a single genus. Based on these results and fossil evidence, we further discuss the historical background of yak origins: the divergence between the yak and Bison occurred in Eurasia as a result of Quaternary climate change, with bison entering North America via the Bering land bridge; after the glacial period, as temperatures rose elsewhere in Eurasia, the yak became restricted to the colder Qinghai-Tibet Plateau, while Bison in North America successively differentiated into the steppe bison and the American bison, the former possibly being the direct ancestor of the latter.

Relevance:

20.00%

Publisher:

Abstract:

Although respiration of organisms and biomass as well as fossil fuel burning and industrial production are identified as the major sources, the CO2 flux is still unclear due to the lack of proper measurements. A mass-balance approach that exploits differences in the carbon isotopic signature (δ13C) of CO2 sources and sinks was introduced and may provide a means of reducing uncertainties in the atmospheric budget. δ13C measurements of atmospheric CO2 yielded an average of -10.3‰ relative to the Pee Dee Belemnite standard; soil and plants had a narrow range from -25.09‰ to -26.51‰ and averaged -25.80‰. Based on the steady fractionation and enrichment during mitochondrial respiration, we obtained a CO2 emission of 35.451 mol m-2 a-1 and a CO2 flux of 0.2149 μmol m-2 s-1. The positive CO2 flux indicated that the Haibei Alpine Meadow Ecosystem is a source rather than a sink. The mass-balance model can be applied to other ecosystems and even to global carbon cycles because it neglects the complicated processes of carbon metabolism and focuses only on the stable carbon isotopic compositions of the compartments acting as carbon sources and sinks. (C) 2005 Elsevier B.V. All rights reserved.
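
As an aside, a minimal two-end-member mixing sketch (not the authors' full mass-balance model) illustrates how δ13C signatures can partition locally measured CO2 between a background and a respired source. The -8.0‰ background value used here is an assumed typical value for well-mixed tropospheric CO2, not a number from the abstract.

```python
# Two-end-member isotope mass balance (illustrative sketch only).
# Standard mixing relation: delta_mix = f*delta_source + (1-f)*delta_background,
# solved for f, using the delta-13C values quoted in the abstract.
# ASSUMPTION: delta_background = -8.0 per mil (typical background air),
# which is NOT taken from the paper.

delta_mix = -10.3        # measured atmospheric CO2 at the site (per mil, vs PDB)
delta_source = -25.80    # mean soil/plant respiration signature (per mil)
delta_background = -8.0  # assumed background air value (per mil)

# Fraction of local CO2 attributable to the respired (ecosystem) end member
f_source = (delta_mix - delta_background) / (delta_source - delta_background)
print(f"respired fraction ~ {f_source:.2f}")   # ~0.13 with these numbers
```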

Relevance:

20.00%

Publisher:

Abstract:

A design scheme for a multi-loop measurement and control system is presented. The scheme uses only one DSP (digital signal processor) and one multi-channel integrated D/A converter, the MAX5307. It not only guarantees the real-time performance and control accuracy of multiple measurement and control loops simultaneously, but is also simple to implement and low in cost. Drawing on an actual system, the paper gives the concrete hardware and software implementation. The method is widely applicable and offers a useful reference for the design of similar systems.
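
As a rough illustration of the multi-loop idea, the sketch below services several control loops from a single periodic tick on one processor and writes each output to a separate DAC channel. The PI control law, the gains, and the write_dac/read_sensor stubs are hypothetical stand-ins for illustration, not the paper's firmware or the MAX5307 register interface.

```python
# Conceptual sketch (not the paper's implementation): one periodic tick updates
# every control loop and pushes each output to a different channel of a
# multi-channel DAC such as the MAX5307.

N_LOOPS = 4
KP, KI = 0.8, 0.2           # hypothetical PI gains, shared here for simplicity
integ = [0.0] * N_LOOPS     # per-loop integrator state

def write_dac(channel: int, value: float) -> None:
    """Stand-in for the SPI write that would set one DAC output channel."""
    print(f"DAC ch{channel} <- {value:.3f}")

def read_sensor(channel: int) -> float:
    """Stand-in for the per-loop measurement (e.g. an ADC read)."""
    return 0.0  # placeholder

def control_tick(setpoints):
    """One timer tick: update every loop and write all DAC outputs."""
    for ch in range(N_LOOPS):
        error = setpoints[ch] - read_sensor(ch)
        integ[ch] += error
        write_dac(ch, KP * error + KI * integ[ch])

control_tick([1.0, 0.5, -0.2, 0.0])   # example tick
```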

Relevance:

20.00%

Publisher:

Abstract:

The role of seismic data in oil and gas prospecting and exploration has long since gone beyond delineating structural configuration. In order to determine advantageous target areas more exactly, we need to image the subsurface media accurately, so prestack migration imaging, especially prestack depth migration, has come into increasingly wide use. Current seismic migration imaging methods are mainly based on primary energy, and most use the one-way wave equation. Multiples mask primaries and are sometimes treated as primaries, interfering with the imaging of primaries, so multiple elimination remains a very important research subject. At present there are three wavefield prediction and subtraction approaches: wavefield extrapolation, the feedback loop, and the inverse-scattering series. This paper focuses on the feedback loop method, which consists of a prediction step and a subtraction step. The method currently has the following problems. First, it requires the seismic data used to predict multiples to be full-wavefield data, but the original seismic data usually do not meet this assumption, so the data must be regularized. Second, multiples predicted by the feedback loop method usually do not match the real multiples in the data; they differ in amplitude, phase, and arrival time. We therefore need to match the predicted multiples to those in the data by estimating filter coefficients and then subtract them from the data. Selecting a correct matching filtering method is the key to multiple elimination. Among the many matching filtering methods, the emphasis here is on least-squares adaptive matching filtering and L1-norm minimizing adaptive matching filtering. The least-squares method is computationally very fast, but it rests on two assumptions: the signal has minimum energy and is orthogonal to the noise. When the data do not meet these assumptions, it cannot produce good matching results and therefore cannot attenuate multiples correctly. L1-norm adaptive matching filtering avoids these two assumptions and yields better matches, but is computationally somewhat slower. The results of this research are as follows. 1. A seismic trace interpolation method based on F-K migration and demigration is proposed; its main advantage is that it can interpolate traces at arbitrary offsets, and its validity is shown on a simple model. 2. Different least-squares adaptive matching filtering methods are compared; tests on three model datasets and two field datasets show that the equipoise multi-channel adaptive matching filtering method removes multiples better than the other matching methods. 3. An equipoise multi-channel L1-norm adaptive matching filtering method is proposed; because the L1 norm is robust to large amplitude differences and requires neither the minimum-energy nor the orthogonality assumption, it achieves better multiple elimination. 4. Multiple elimination in the inverse data space is investigated; this is a new approach, distinct from the methods above, whose advantages are a simple theory, no need for adaptive subtraction, and very low computational cost, and whose disadvantage is that its solution is not stabilized. Overall, tests on three model datasets and many field datasets show that the equipoise multi-channel and equipoise pseudo-multi-channel least-squares matching filtering methods and the equipoise multi-channel and equipoise pseudo-multi-channel L1-norm matching filtering methods remove multiples better than the other matching methods.
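
For illustration, the following is a minimal single-channel least-squares adaptive subtraction sketch. It is not the equipoise multi-channel or pseudo-multi-channel method of this work; it only shows the basic matching-filter idea of shaping a predicted multiple to fit the recorded trace and subtracting it.

```python
# Minimal single-channel least-squares adaptive subtraction sketch.
# Find a short filter f that shapes the predicted multiple m to best fit the
# recorded trace d in the least-squares sense, then subtract the shaped multiple.
import numpy as np

def matching_subtract(d: np.ndarray, m: np.ndarray, nf: int = 11) -> np.ndarray:
    """Return the estimated primaries d - (m * f), with an nf-tap filter f."""
    pad = nf // 2
    # Columns are lagged copies of the predicted multiple (lags -pad..+pad).
    M = np.column_stack([np.roll(np.pad(m, (pad, pad)), k)[pad:pad + len(d)]
                         for k in range(-pad, pad + 1)])
    f, *_ = np.linalg.lstsq(M, d, rcond=None)   # least-squares matching filter
    return d - M @ f                             # adaptive subtraction

# Tiny synthetic example: a trace containing a "primary" spike plus a scaled,
# time-shifted copy of the predicted multiple.
rng = np.random.default_rng(0)
m = rng.standard_normal(200)
d = np.zeros(200); d[50] = 1.0                  # primary
d += 0.6 * np.roll(m, 3)                        # multiple with wrong amplitude/time
print(np.linalg.norm(matching_subtract(d, m)))  # energy left after subtraction
```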

Relevance:

20.00%

Publisher:

Abstract:

A simple and sensitive method for the determination of short- and long-chain fatty acids using high-performance liquid chromatography with fluorimetric detection has been developed. The fatty acids were derivatized to their corresponding esters with 9-(2-hydroxyethyl)-carbazole (HEC) in acetonitrile at 60 °C, with 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide hydrochloride (EDC) as a coupling agent in the presence of 4-dimethylaminopyridine (DMAP). A mixture of esters of C-1 to C-20 fatty acids was completely separated within 38 min using gradient elution on a reversed-phase C-18 column. The maximum fluorescence emission for the derivatized fatty acids is at 365 nm (λex 335 nm). Studies on the derivatization conditions indicate that the reactions of fatty acids with HEC proceeded rapidly and smoothly in the presence of EDC and DMAP in acetonitrile to give the corresponding fluorescent derivatives with high sensitivity. The application of this method to the analysis of long-chain fatty acids in plasma is also investigated. The LC separation shows good selectivity and reproducibility for fatty acid derivatives. The R.S.D. values (n = 6) for each fatty acid derivative are <4%. The detection limits are at the 45-68 fmol level for C-14 to C-20 fatty acids and even lower levels for

Relevance:

20.00%

Publisher:

Abstract:

In the first part of this paper we show that a new technique exploiting 1D correlation of 2D or even 1D patches between successive frames may be sufficient to compute a satisfactory estimate of the optical flow field. The algorithm is well suited to VLSI implementations. The sparse measurements provided by the technique can be used to compute qualitative properties of the flow for a number of different visual tasks. In particular, the second part of the paper shows how to combine our 1D correlation technique with a scheme for detecting expansion or rotation ([5]) in a simple algorithm that also suggests interesting biological implications. The algorithm provides a rough estimate of time-to-crash. It was tested on real image sequences; we show its performance and compare the results to previous approaches.
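
A hedged sketch of the underlying idea: estimating the displacement of a 1D patch between two successive frames by searching over shifts for the best normalized correlation. This is a generic formulation for illustration, not the authors' VLSI-oriented implementation.

```python
# Estimate horizontal image motion by 1D correlation of a patch between two
# successive frames: try integer shifts and keep the one with the highest
# normalized correlation score.
import numpy as np

def shift_1d(prev_row: np.ndarray, next_row: np.ndarray, max_shift: int = 5) -> int:
    """Return the integer shift (pixels) that best aligns prev_row to next_row."""
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        a = prev_row[max(0, -s): len(prev_row) - max(0, s)]
        b = next_row[max(0, s): len(next_row) - max(0, -s)]
        score = np.dot(a - a.mean(), b - b.mean()) / (a.std() * b.std() * len(a) + 1e-9)
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

# Toy usage: a 1D "patch" moving 2 pixels to the right between frames.
frame0 = np.sin(np.linspace(0, 6, 64))
frame1 = np.roll(frame0, 2)
print(shift_1d(frame0, frame1))   # expected: 2
```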

Relevance:

20.00%

Publisher:

Abstract:

We provide a theory of the three-dimensional interpretation of a class of line drawings called p-images, which are interpreted by the human vision system as parallelepipeds ("boxes"). Despite their simplicity, p-images raise a number of interesting vision questions:

* Why are p-images seen as three-dimensional objects? Why not just as flat images?
* What are the dimensions and pose of the perceived objects?
* Why are some p-images interpreted as rectangular boxes, while others are seen as skewed, even though there is no obvious distinction between the images?
* When p-images are rotated in three dimensions, why are the image sequences perceived as distorting objects, even though structure-from-motion would predict that rigid objects would be seen?
* Why are some three-dimensional parallelepipeds seen as radically different when viewed from different viewpoints?

We show that these and related questions can be answered with the help of a single mathematical result and an associated perceptual principle. An interesting special case arises when there are right angles in the p-image. This case represents a singularity in the equations and is mystifying from the vision point of view. It would seem that, at least in this case, the vision system does not follow the ordinary rules of geometry but operates in accordance with other (and as yet unknown) principles.

Relevance:

20.00%

Publisher:

Abstract:

The goal of this work is to navigate through an office environment using only visual information gathered from four cameras placed onboard a mobile robot. The method is insensitive to physical changes within the room it is inspecting, such as moving objects. Forward and rotational motion vision are used to find doors and rooms, and these can be used to build topological maps. The map is built without the use of odometry or trajectory integration. The long-term goal of the project described here is for the robot to build simple maps of its environment and to localize itself within this framework.

Relevance:

20.00%

Publisher:

Abstract:

Dynamic systems which undergo rapid motion can excite natural frequencies that lead to residual vibration at the end of the motion. This work presents a method to shape force profiles that reduce excitation energy at the natural frequencies in order to reduce residual vibration for fast moves. Such profiles are developed using a ramped sinusoid function and its harmonics, choosing coefficients to reduce spectral energy at the natural frequencies of the system. To improve robustness with respect to parameter uncertainty, spectral energy is reduced over a range of frequencies surrounding the nominal natural frequency. An additional set of versine profiles is also constructed to permit motion at constant speed for velocity-limited systems. These shaped force profiles are incorporated into a simple closed-loop system with position and velocity feedback; the force input is doubly integrated to generate a shaped position reference for the controller to follow. This control scheme is evaluated on the MIT Cartesian Robot. The shaped inputs generate motions with minimum residual vibration when actuator saturation is avoided, and feedback control compensates for the effect of friction. Using only a knowledge of the natural frequencies of the system to shape the force inputs, vibration can also be attenuated in modes which vibrate in directions other than the motion direction. When moving several axes, the use of shaped inputs allows minimum residual vibration even when the natural frequencies change dynamically by a limited amount.
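
A minimal sketch of the pipeline described above, under illustrative assumptions (a simple ramped sinusoid, an assumed 10 Hz natural frequency, unit mass): build a force profile, inspect its spectral energy near the natural frequency, and doubly integrate it to obtain a position reference. The paper's actual harmonic-coefficient selection is not reproduced here.

```python
# Hedged sketch: shaped force profile -> spectral check -> doubly integrated
# position reference. The profile shape and the 10 Hz natural frequency are
# illustrative assumptions, not the paper's design values.
import numpy as np

dt, T = 0.001, 0.5                       # sample time [s], move duration [s]
t = np.arange(0.0, T, dt)
f_nat = 10.0                             # assumed natural frequency [Hz]

# Simple ramped sinusoid: a linear ramp times one sine cycle over the move.
force = (t / T) * np.sin(2 * np.pi * t / T)

# Spectral energy near the natural frequency (the quantity the paper's method
# would minimize by choosing the harmonic coefficients).
spectrum = np.fft.rfft(force)
freqs = np.fft.rfftfreq(len(force), dt)
band = (freqs > f_nat - 1.0) & (freqs < f_nat + 1.0)
print("energy near f_nat:", np.sum(np.abs(spectrum[band]) ** 2))

# Double integration of force -> velocity -> position reference (unit mass).
velocity = np.cumsum(force) * dt
position_ref = np.cumsum(velocity) * dt   # reference trajectory to follow
```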

Relevance:

20.00%

Publisher:

Abstract:

A simple analog circuit designer has been implemented as a rule-based system. The system can design voltage followers, Miller integrators, and bootstrap ramp generators from functional descriptions of what these circuits do. While the designer works in a simple domain where all components are ideal, it demonstrates the abilities of skilled designers. While the domain is electronics, the design ideas are useful in many other engineering domains, such as mechanical engineering, chemical engineering, and numerical programming. Most circuit design systems are given the circuit schematic and use arithmetic constraints to select component values. This circuit designer is different because it designs the schematic itself. The designer uses a unidirectional CONTROL relation to find the schematic. The circuit designs are built around this relation; it restricts the search space, assigns purposes to components, and finds design bugs.

Relevance:

20.00%

Publisher:

Abstract:

What are the characteristics of the process by which an intent is transformed into a plan and then a program? How is a program debugged? This paper analyzes these questions in the context of understanding simple turtle programs. To understand and debug a program, a description of its intent is required. For turtle programs, this is a model of the desired geometric picture; a picture language is provided for this purpose. Annotation is necessary for documenting the performance of a program in such a way that the system can examine the procedure's behavior as well as consider hypothetical lines of development due to tentative debugging edits. A descriptive framework representing both causality and teleology is developed. To understand the relation between program and model, the plan must be known. The plan is a description of the methodology for accomplishing the model. Concepts are explicated for translating the global intent of a declarative model into the local imperative code of a program. Given the plan, model, and program, the system can interpret the picture and recognize inconsistencies. The description of the discrepancies between the picture actually produced by the program and the intended scene is the input to a debugging system. Repair of the program is based on a combination of general debugging techniques and specific fixing knowledge associated with the geometric model primitives. In both finding the plan and repairing the bugs, the system exhibits an interesting style of analysis: it is capable of debugging itself and reformulating its analysis of a plan or bug in response to self-criticism. In this fashion, it can qualitatively reformulate its theory of the program or error to account for surprises or anomalies.
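
To make the program/model/discrepancy relation concrete, here is a toy sketch using an assumed representation (command tuples and a set of intended line segments), not the system or picture language described in the paper: a small turtle-style program is executed, its drawn segments are compared against a model of the intended picture, and the discrepancies are reported as input for debugging.

```python
# Illustrative sketch only: toy turtle program vs. declarative picture model.
import math

def run_turtle(program):
    """Execute FORWARD/RIGHT commands and return the drawn line segments."""
    x, y, heading = 0.0, 0.0, 0.0
    segments = []
    for op, arg in program:
        if op == "FORWARD":
            nx = x + arg * math.cos(math.radians(heading))
            ny = y + arg * math.sin(math.radians(heading))
            segments.append(((round(x, 3), round(y, 3)), (round(nx, 3), round(ny, 3))))
            x, y = nx, ny
        elif op == "RIGHT":
            heading -= arg
    return segments

# Intended model: a unit square. Buggy program: one turn is 80 instead of 90.
intended = run_turtle([("FORWARD", 1), ("RIGHT", 90)] * 4)
buggy = run_turtle([("FORWARD", 1), ("RIGHT", 90),
                    ("FORWARD", 1), ("RIGHT", 80),
                    ("FORWARD", 1), ("RIGHT", 90),
                    ("FORWARD", 1), ("RIGHT", 90)])

# Discrepancy description: segments drawn that are not in the intended picture.
discrepancies = [seg for seg in buggy if seg not in intended]
print("unexpected segments:", discrepancies)
```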

Relevance:

20.00%

Publisher:

Abstract:

A multi-plate (MP) mathematical model was proposed by frontal analysis to evaluate nonlinear chromatographic performance. One of its advantages is that the parameters may be easily calculated from experimental data. Moreover, there is a good correlation between it and the equilibrium-dispersive (E-D) and Thomas models, which shows that it can accommodate both types of band broadening, whether dominated by diffusion processes or by kinetic sorption processes. The MP model describes well the experimental breakthrough curves obtained from membrane affinity chromatography and column reversed-phase liquid chromatography. Furthermore, the coefficients of mass transfer may be calculated from the relationship between the MP model and the E-D or Thomas models. (C) 2004 Elsevier B.V. All rights reserved.

Relevance:

20.00%

Publisher:

Abstract:

Aims: Surgery for infective endocarditis (IE) is associated with high mortality. Our objectives were to describe the experience with surgical treatment for IE in Spain and to identify predictors of in-hospital mortality. Methods: Prospective cohort of 1000 consecutive patients with IE. Data were collected in 26 Spanish hospitals. Results: Surgery was performed in 437 patients (43.7%). Patients treated with surgery were younger and predominantly male. They presented fewer comorbid conditions and more often had negative blood cultures and heart failure. In-hospital mortality after surgery was lower than in the medical therapy group (24.3% vs 30.7%, p = 0.02). In patients treated with surgery, endocarditis involved a native valve in 267 patients (61.1%), a prosthetic valve in 122 (27.9%), and a pacemaker lead with no clear further valve involvement in 48 (11.0%). The most common aetiologies were Staphylococcus (186, 42.6%), Streptococcus (97, 22.2%), and Enterococcus (49, 11.2%). The main indications for surgery were heart failure and severe valve regurgitation. A risk score for in-hospital mortality was developed using 7 prognostic variables with a similar predictive value (OR between 1.7 and 2.3), summarized by the acronym PALSUSE: prosthetic valve, age ≥ 70, large intracardiac destruction, Staphylococcus spp., urgent surgery, sex (female), EuroSCORE ≥ 10. In-hospital mortality ranged from 0% in patients with a PALSUSE score of 0 to 45.4% in patients with a PALSUSE score > 3. Conclusions: The prognosis of IE surgery is highly variable. The PALSUSE score could help to identify patients with higher in-hospital mortality.
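
As an illustration of the additive nature of the score described above, the sketch below counts one point per PALSUSE factor present. The equal one-point weighting is an assumption consistent with the abstract (seven variables of similar predictive value, scores ranging from 0 to >3), not the published scoring rule.

```python
# Illustrative PALSUSE-style tally: one point per risk factor present.
# ASSUMPTION: equal one-point weights; not the published scoring rule.

PALSUSE_FACTORS = [
    "prosthetic_valve",
    "age_ge_70",
    "large_intracardiac_destruction",
    "staphylococcus_spp",
    "urgent_surgery",
    "female_sex",
    "euroscore_ge_10",
]

def palsuse_score(patient: dict) -> int:
    """Count how many of the seven PALSUSE risk factors are present."""
    return sum(1 for factor in PALSUSE_FACTORS if patient.get(factor, False))

# Example: a 74-year-old woman with a prosthetic-valve infection needing
# urgent surgery would score 4 under this equal-weight assumption.
example = {"prosthetic_valve": True, "age_ge_70": True,
           "urgent_surgery": True, "female_sex": True}
print(palsuse_score(example))   # -> 4
```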