976 results for "False-medideira caterpillar"


Relevância:

10.00%

Publicador:

Resumo:

This paper describes the ground target detection, classification and sensor fusion problems in a distributed fiber-optic seismic sensor network. Compared with the conventional piezoelectric seismic sensors used in unattended ground sensor (UGS) systems, fiber-optic sensors offer high sensitivity and resistance to electromagnetic disturbance. We have developed a fiber seismic sensor network for target detection and classification. However, ground target recognition based on seismic sensing is a very challenging problem because of the non-stationary character of seismic signals and the complicated real-life application environment. To overcome these difficulties, we study robust feature extraction and classification algorithms adapted to the fiber sensor network. A unified multi-feature (UMF) method is used, and an adaptive threshold detection algorithm is proposed to minimize the false alarm rate. Three kinds of targets, comprising personnel, wheeled vehicles and tracked vehicles, are considered in the system. Classification simulations show that the SVM classifier outperforms the GMM and BPNN. A sensor fusion method based on D-S evidence theory is discussed to fully exploit the information from the fiber sensor array and improve the overall performance of the system. A field experiment was organized to test the performance of the fiber sensor network and to gather real target signals for classification testing.
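The D-S evidence fusion step can be illustrated with a minimal sketch (not the authors' implementation): two sensor nodes each report a mass function over the three target classes, and Dempster's rule combines them. The class names and mass values here are hypothetical.

```python
# Minimal Dempster's rule of combination for singleton hypotheses.
# Illustrative sketch only; the mass values below are hypothetical.

def combine(m1, m2):
    """Combine two mass functions defined over the same frame of discernment."""
    hypotheses = set(m1) | set(m2)
    # Conflict: mass assigned to incompatible (distinct) singleton hypotheses.
    conflict = sum(m1[a] * m2[b] for a in m1 for b in m2 if a != b)
    k = 1.0 - conflict  # normalization factor
    return {h: m1.get(h, 0.0) * m2.get(h, 0.0) / k for h in hypotheses}

# Two fiber-sensor nodes report beliefs over the three target classes.
m_a = {"personnel": 0.6, "wheeled": 0.3, "tracked": 0.1}
m_b = {"personnel": 0.7, "wheeled": 0.2, "tracked": 0.1}
fused = combine(m_a, m_b)
```

Because both sensors favor "personnel", the fused mass concentrates on it more strongly than either individual report, which is the behavior the fusion stage relies on.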

Relevância:

10.00%

Publicador:

Resumo:

Finding countermodels is an effective way of disproving false conjectures. In first-order predicate logic, model finding is an undecidable problem, but if a finite model exists it can be found by exhaustive search. The finite model generation problem in first-order logic can also be translated to the satisfiability problem in propositional logic, but a direct translation may not be very efficient. This paper discusses how to take symmetries into account so as to make the resulting problem easier. A static method for adding constraints is presented, which can be thought of as an approximation of the least number heuristic (LNH). Also described is a dynamic method, which asks a model searcher such as SEM to generate a set of partial models and then gives each partial model to a propositional prover. The two methods are analyzed and compared with each other.
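As a toy illustration of exhaustive finite model search (a sketch, not SEM itself), the following enumerates every binary operation table on a two-element domain to find a countermodel to the false conjecture "every associative operation is commutative":

```python
from itertools import product

def is_assoc(f, dom):
    """Check associativity of the operation table f over dom."""
    return all(f[(f[(a, b)], c)] == f[(a, f[(b, c)])]
               for a in dom for b in dom for c in dom)

def is_comm(f, dom):
    """Check commutativity of the operation table f over dom."""
    return all(f[(a, b)] == f[(b, a)] for a in dom for b in dom)

def find_countermodel(n):
    """Exhaustively search size-n models refuting
    'associativity implies commutativity'."""
    dom = range(n)
    pairs = [(a, b) for a in dom for b in dom]
    for values in product(dom, repeat=len(pairs)):
        f = dict(zip(pairs, values))
        if is_assoc(f, dom) and not is_comm(f, dom):
            return f  # countermodel found
    return None

model = find_countermodel(2)
```

A size-2 countermodel exists (e.g. left projection, f(x, y) = x, is associative but not commutative), so the search succeeds after at most 16 tables; symmetry constraints such as the LNH prune exactly this kind of enumeration for larger domains.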

Relevância:

10.00%

Publicador:

Resumo:

This paper describes a special-purpose neural computing system for face identification. The system architecture and hardware implementation are introduced in detail, and an algorithm based on biomimetic pattern recognition has been embedded. Over a total of 1200 face-identification tests, the false rejection rate was 3.7% and the false acceptance rate 0.7%.

Relevância:

10.00%

Publicador:

Resumo:

The tanning industry is, after papermaking, the most water-intensive and heavily polluting sector of light industry. As a labour-intensive industry it has created a large number of jobs, but it has also caused severe environmental contamination in the regions where it operates. China's leather industry currently discharges roughly 80–120 million tons of wastewater per year, containing about 3,500 t of chromium, 1.2×10^5 t of SS, 1.8×10^5 t of COD and 7×10^4 t of BOD, polluting receiving waters severely.

Building on a study of the anaerobic acidification stage and a comparative study of first-stage aerobic processes, this work established a biological treatment train of homogenization–SBBR–BAF, and then used it to investigate bioaugmentation, examining both the performance of added high-efficiency microbial inocula and the feasibility of reusing the treated water.

The results show that when influent COD exceeded 3,000 mg/L, anaerobic acidification provided strong resistance to shock loads and kept the aerobic effluent COD below 200 mg/L; when influent COD was below 3,000 mg/L, aerobic treatment alone achieved effluent COD below 200 mg/L. An improperly chosen anaerobic residence time raised the sulfide concentration in the anaerobic effluent, seriously impairing the aerobic system and causing the activated sludge to deflocculate from toxicity.

With influent COD of 2,000–2,500 mg/L and NH4+-N of 130–146 mg/L, the COD and NH4+-N removal rates were 93.8%–96.6% and 14.5%–55.9% for the SBBR versus 88.8%–94.9% and 13%–50.7% for the SBR, showing the SBBR to be superior. The SBBR's sludge yield was 0.05 kgVSS/kgCOD, only 8.8% of the SBR's 0.57 kgVSS/kgCOD. In addition, the SBBR recovered its original capacity within 3 operating cycles after a shutdown, whereas the SBR had not recovered even after 9 cycles, indicating a markedly better recovery ability.

The homogenization–SBBR–BAF process produced stable effluent: with influent COD of 801–2,834 mg/L and NH4+-N of 87–203 mg/L, effluent COD was below 80 mg/L and NH4+-N below 10 mg/L, essentially meeting the reclaimed-water reuse standard. The process is simple and flexible to operate, has no sludge-return system, and has a low sludge yield and low sludge-treatment cost; it requires essentially no chemical dosing, saving cost and avoiding secondary pollution; and the two-stage biofilm gives it strong resistance to shock loads, well suited to the large fluctuations in tannery wastewater quality and quantity.

High-efficiency microorganisms promoted start-up: the biofilm in the bioaugmented system matured in 6 days versus 9 days in the control system. They also accelerated COD degradation and shortened the residence time, with the bioaugmented system reaching COD below 200 mg/L in 6–8 h versus 8–10 h for the control. In long-term operation the bioaugmented SBBR outperformed the control SBBR in both COD and NH4+-N removal, with average final effluent COD of 53 mg/L versus 74 mg/L for the control. In simulated recirculation the bioaugmented system was also more stable, sustaining 8 theoretical reuse cycles versus 4 for the control.

Overall, with rational process design, pigskin tannery wastewater can meet the first-grade limits of the National Integrated Wastewater Discharge Standard (GB8976-1996) while satisfying part of the factory's water demand; adding high-efficiency microorganisms further raises the treatment capacity and allows the treated water to be reused multiple times.
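The removal rates quoted throughout follow the standard definition, (C_in - C_out) / C_in. A small helper makes the calculation explicit; the influent value plugged in below is illustrative (within the reported 801–2,834 mg/L range), not a figure from the thesis.

```python
def removal_rate(influent_mg_l, effluent_mg_l):
    """Fractional pollutant removal: (C_in - C_out) / C_in."""
    return (influent_mg_l - effluent_mg_l) / influent_mg_l

# Bioaugmented system: average effluent COD of 53 mg/L from an assumed
# 2000 mg/L influent (illustrative value within the reported range).
r = removal_rate(2000.0, 53.0)
```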

Relevância:

10.00%

Publicador:

Resumo:

Since protein phosphorylation is a dominant mechanism of information transfer in cells, there is a great need for methods capable of accurately elucidating sites of phosphorylation. In recent years mass spectrometry has become an increasingly viable alternative to more traditional methods of phosphorylation analysis. The present study used immobilized metal affinity chromatography (IMAC) coupled with a linear ion trap mass spectrometer to analyze phosphorylated proteins in mouse liver. A total of 26 peptide sequences defining 26 sites of phosphorylation were determined. Although this number of identified phosphoproteins is not large, the approach is still of interest because a series of conservative criteria were adopted in data analysis. We note that, although binding of non-phosphorylated peptides to the IMAC column was apparent, the improvements in high-speed scanning and in the quality of MS/MS spectra provided by the linear ion trap contributed to phosphoprotein identification. Further analysis demonstrated that MS/MS/MS analysis was necessary to exclude the false-positive matches resulting from the MS/MS experiments, especially for multiply phosphorylated peptides. The linear ion trap considerably facilitated the exploitation of nanoflow-HPLC/MS/MS, and MS/MS/MS in addition has great potential in phosphoproteome research on relatively complex samples. Copyright (C) 2004 John Wiley & Sons, Ltd.

Relevância:

10.00%

Publicador:

Resumo:

Mammographic mass detection is an important task for the early diagnosis of breast cancer. However, it is difficult to distinguish masses from normal regions because of their varied morphological characteristics and ambiguous margins. To improve mass detection performance, it is essential to preprocess the mammogram effectively so as to preserve both the intensity distribution and the morphological characteristics of regions. In this paper, morphological component analysis is first introduced to decompose a mammogram into a piecewise-smooth component and a texture component. The former is used in our detection scheme because it effectively suppresses both structural noise and the effects of blood vessels. We then propose two novel concentric layer criteria to detect different types of suspicious regions in a mammogram. The combination is evaluated on the Digital Database for Screening Mammography, using 100 malignant cases and 50 benign cases. The sensitivity of the proposed scheme is 99% for malignant cases, 88% for benign cases, and 95.3% overall. The results show that the proposed detection scheme achieves satisfactory performance and a preferable compromise between sensitivity and false positive rate.
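The quoted sensitivities follow the usual definition TP / (TP + FN). As a consistency check on the reported figures, the detected-case counts below are back-calculated from the stated per-group rates and case totals (they are illustrative, not taken from the paper):

```python
def sensitivity(true_positives, false_negatives):
    """Detection sensitivity: TP / (TP + FN)."""
    return true_positives / (true_positives + false_negatives)

# 100 malignant and 50 benign cases; detected counts back-calculated
# from the quoted 99% and 88% per-group sensitivities (illustrative).
malignant = sensitivity(99, 1)    # 99 of 100 malignant masses detected
benign = sensitivity(44, 6)       # 44 of 50 benign masses detected
overall = (99 + 44) / 150         # case-weighted overall sensitivity
```

The case-weighted average, 143/150, indeed rounds to the 95.3% reported overall.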

Relevância:

10.00%

Publicador:

Resumo:

The convolution between co-polarization amplitude-only data is studied to improve ship detection performance. The different statistical behaviors of ships and the surrounding ocean are characterized by a two-dimensional convolution function (2D-CF) between different polarization channels. The convolution value of the ocean decreases relative to the initial data, while that of ships increases, so the contrast of ships to ocean is enhanced. This opposite variation trend of ocean and ships distinguishes high-intensity ocean clutter from ship signatures, and the new criterion can generally avoid the mistaken detections of a constant false alarm rate detector. Our new ship detector is compared with other polarimetric approaches, and the results confirm the robustness of the proposed method.
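The cross-channel 2D convolution idea can be sketched as follows. This is a simplified pure-Python illustration, not the paper's detector: the single-output "valid" convolution and the toy amplitude patches are assumptions.

```python
def conv2d_valid(a, b):
    """Central value of the 2-D convolution of two equal-size square
    amplitude patches: the sum of the element-wise product of one
    patch with the other flipped in both axes."""
    n = len(a)
    return sum(a[i][j] * b[n - 1 - i][n - 1 - j]
               for i in range(n) for j in range(n))

# Toy 3x3 amplitude patches: a bright ship-like target appearing in both
# co-polarization channels versus homogeneous ocean clutter.
ship_hh  = [[1, 2, 1], [2, 9, 2], [1, 2, 1]]
ship_vv  = [[1, 2, 1], [2, 8, 2], [1, 2, 1]]
ocean_hh = [[2, 2, 2], [2, 2, 2], [2, 2, 2]]
ocean_vv = [[2, 2, 2], [2, 2, 2], [2, 2, 2]]

ship_cf  = conv2d_valid(ship_hh, ship_vv)   # strong cross-channel response
ocean_cf = conv2d_valid(ocean_hh, ocean_vv) # weak, homogeneous response
```

Because the ship's amplitude peak is present in both channels, its cross-channel convolution response exceeds the ocean's, which is the contrast the detector exploits.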

Relevância:

10.00%

Publicador:

Resumo:

Mismatch filtering is studied from both the 2D-image and 3D-space perspectives. A grayscale preprocessing algorithm, applied before matching to reduce mismatches, is proposed, together with a disparity filtering algorithm based on true control points. The former performs grayscale equalization only on the overlapping region of the two images, reducing the computational load; the latter builds on traditional disparity mean filtering and further improves the efficiency of mismatch filtering. Experiments on real images show that the new algorithms effectively remove mismatches, improve the accuracy of 3D reconstruction, and guarantee the reconstruction quality.
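The traditional disparity mean filtering that the control-point-based filter improves on can be sketched like this (a pure-Python illustration; the rejection threshold and the toy disparity values are assumptions):

```python
def mean_filter_disparities(disparities, threshold):
    """Classic disparity mean filtering: reject matches whose disparity
    deviates from the mean of all values by more than `threshold`."""
    mean = sum(disparities) / len(disparities)
    return [d for d in disparities if abs(d - mean) <= threshold]

# Toy disparity list: mostly consistent matches plus one gross mismatch.
disps = [12.1, 11.8, 12.4, 12.0, 37.5, 11.9]
kept = mean_filter_disparities(disps, threshold=5.0)
```

The outlier at 37.5 is dropped while the consistent matches survive; a gross mismatch like this one would otherwise produce a spurious 3D point far off the reconstructed surface.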

Relevância:

10.00%

Publicador:

Resumo:

This paper proposes a stereo-vision-based stair recognition algorithm for robots in structured environments and applies it to an autonomous mobile robot. The algorithm first uses 2D image analysis to search for candidate stair regions; it then performs accurate 3D reconstruction of each candidate region with stereo vision, reconstructs the stair planes from the 3D information, and rejects false candidate regions. Finally, it determines the relative pose between the robot and the stairs to guide the robot in climbing them. The algorithm was deployed on an autonomous mobile robot, and experiments under a variety of lighting conditions further verified its accuracy and speed.
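The plane-reconstruction step can be illustrated by a least-squares fit of z = a*x + b*y + c to a candidate region's 3D points (a sketch assuming a non-vertical plane, solved here with Cramer's rule on the 3x3 normal equations; the sample points are illustrative):

```python
def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to 3-D points,
    solving the 3x3 normal equations by Cramer's rule."""
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sz = sum(p[2] for p in points); n = len(points)
    sxx = sum(p[0] * p[0] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    syy = sum(p[1] * p[1] for p in points)
    sxz = sum(p[0] * p[2] for p in points)
    syz = sum(p[1] * p[2] for p in points)
    m = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    rhs = [sxz, syz, sz]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(m)
    sols = []
    for col in range(3):             # Cramer's rule, one unknown per column
        mc = [row[:] for row in m]
        for r in range(3):
            mc[r][col] = rhs[r]
        sols.append(det3(mc) / d)
    return tuple(sols)               # (a, b, c)

# Points sampled exactly from the plane z = 0.5*x + 0*y + 1 (illustrative;
# a genuine stair tread would be near-horizontal, i.e. small a and b).
pts = [(0, 0, 1.0), (1, 0, 1.5), (0, 1, 1.0), (1, 1, 1.5), (2, 1, 2.0)]
a, b, c = fit_plane(pts)
```

A candidate region whose points leave a large residual against their best-fit plane would be rejected as a false stair region.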

Relevância:

10.00%

Publicador:

Resumo:

The seismic survey is the most effective geophysical prospecting method in oil and gas exploration and development. Because the structure and lithology of geological bodies are becoming increasingly complex, seismic sections must have high resolution if the targets are to be described accurately, and a high signal-to-noise ratio is the precondition of high resolution. To improve the signal-to-noise ratio, we put forward four methods for eliminating random noise, based on a detailed analysis of noise elimination by prediction filtering in the f-x-y domain. Each method addresses a different shortcoming of the conventional technique.

For weak noise and long filters, the response of the noise to the filter is small; for strong noise and short filters it is significant, making the prediction operators inaccurate and the filtered results incorrect. We therefore put forward a method of prediction filtering by inversion in the f-x-y domain. The method assumes that the seismic data comprise a predictable part and an unpredictable part, and introduces prior information about the prediction operator into the objective function. This eliminates the response of the noise to the filtering operator, ensures that the operators are accurate, and effectively improves the filtering results.

When stratal dips are complex, the data are usually divided into rectangular patches in order to obtain the prediction operators, and these patches need significant overlap to give good results. The overlap means the data are used repeatedly, effectively increasing the data size; since the computational cost grows with the data size, efficiency suffers. Moreover, operators obtained by conventional prediction filtering in the f-x-y domain cannot describe the change of dip when it is complex, so the filtered results are aliased, and each patch is solved as an independent problem. To settle these problems we put forward a method of space-varying prediction filtering in the f-x-y domain, in which the prediction operators change with spatial position, eliminating false events in the result. Prior information about the operator is again introduced into the objective function, so that obtaining the operators of the patches is no longer a set of independent problems but one coupled problem; this avoids the repeated use of the data and improves computational efficiency.

Prediction filtering in the f-x-y domain assumes that the random noise is Gaussian, and the conventional method cannot effectively suppress non-Gaussian noise. This paper describes a prediction filtering method using the lp norm (especially p=1) that can effectively eliminate non-Gaussian noise in the f-x-y domain. Finally, considering that stratal dip can be obtained accurately, we put forward a method of prediction filtering under the constraint of dip in the f-x-y domain, which effectively increases computational efficiency and improves the result. Tests on theoretical models and applications to field data prove that the four methods effectively solve these different problems of the conventional method; they are highly practical and their effect is evident.
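The core idea of prediction filtering, namely that coherent signal is linearly predictable from neighboring samples while random noise is not, can be sketched in one dimension (a simplified illustration, not the f-x-y implementation): fit a 2-tap predictor by least squares and verify that a noiseless sinusoid is predicted almost exactly.

```python
import math

def fit_predictor(x):
    """Least-squares 2-tap linear predictor x[t] ~ a*x[t-1] + b*x[t-2],
    obtained by solving the 2x2 normal equations."""
    r11 = r22 = r12 = p1 = p2 = 0.0
    for t in range(2, len(x)):
        r11 += x[t - 1] * x[t - 1]
        r22 += x[t - 2] * x[t - 2]
        r12 += x[t - 1] * x[t - 2]
        p1  += x[t] * x[t - 1]
        p2  += x[t] * x[t - 2]
    d = r11 * r22 - r12 * r12        # determinant of the normal matrix
    a = (p1 * r22 - p2 * r12) / d
    b = (p2 * r11 - p1 * r12) / d
    return a, b

# A pure sinusoid satisfies x[t] = 2*cos(w)*x[t-1] - x[t-2] exactly,
# so the fitted predictor should recover those coefficients.
w = 0.3
sig = [math.sin(w * t) for t in range(200)]
a, b = fit_predictor(sig)
```

Added random noise would perturb the fitted operator, which is exactly the inaccuracy the inversion-based method above is designed to remove.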

Relevância:

10.00%

Publicador:

Resumo:

Reading is an important human-specific skill obtained through extensive learning experience, and it relies on the ability to rapidly recognize single words. According to behavioral studies, the most important stage of reading is the representation of the "visual word form", which is independent of the surface visual features of the reading materials. This prelexical visual word form representation is characterized by abstract, highly efficient and precise processing. Neuroimaging and neuropsychological studies have investigated the neural basis underlying visual word form processing. Building on a summary of the existing literature, the current thesis addresses three fundamental questions about the neural basis of word recognition. First, is there a dedicated neural network specialized for word recognition? Second, is orthographic information represented in the putative word/character-selective region (the visual word form area, VWFA)? Third, what is the role of reading experience in the genesis of the VWFA? Is experience the main driver shaping the VWFA, rather than evolutionary selectivity? Nineteen literate Chinese volunteers, 5 Chinese illiterates and 4 native English speakers participated in this study and performed perceptual tasks during fMRI scanning. To address the first question, we compared the differential responses to three categories of visual objects (faces, line drawings of objects and Chinese characters) and defined the regions of interest (ROIs) for the next experiment. To address the second question, Chinese character orthography was manipulated to reveal possible differential responses to real characters, false characters, radical combinations, and stroke combinations in the regions defined by the first experiment.
To examine the role of reading experience in the genesis of specialization for characters, the responses to unfamiliar Chinese characters in Chinese illiterates and native English speakers were compared with those in the Chinese literates, and the change in cortical activation after short-term reading training in the illiterates was tracked. Data were analyzed along two dimensions: both BOLD signal amplitude and the spatial distribution pattern across multiple voxels were used to systematically investigate the responsiveness of the left fusiform gyrus to Chinese characters. Our results provide strong and clear evidence for the existence of functionally specialized regions in the human ventral occipito-temporal cortex. In the skilled readers a region specialized for written words was consistently found in the lateral part of the left fusiform gyrus, one for line drawings in the medial part, and one for faces in between. Our results further show that spatial distribution analysis, a method not commonly used in neuroimaging studies of reading, appears to be a more effective measure of category specialization in visual object processing. Although we failed to find evidence that the VWFA processes orthographic information in terms of signal intensity, we do show that the response pattern for real characters and radical combinations in this area differs from that for false characters and random stroke combinations. Our last set of experiments suggests that the selective bias toward reading material is clearly experience-dependent. The response to unknown characters in both English speakers and Chinese illiterates is fundamentally different from that in skilled Chinese readers, and in skilled Chinese readers the response pattern for unknown characters is more similar to that for line drawings than to a weak version of the character response.
Short-term training was not sufficient to produce a VWFA bias even when tested with the learned characters; rather, the learned characters generated an overall upward shift of the activation of the left fusiform region. The formation of a dedicated region specialized for visual words/characters may depend on long-term extensive reading experience, or there may be a critical period for reading acquisition.
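The spatial-distribution (multi-voxel pattern) analysis can be sketched as a Pearson correlation between two activation patterns measured over the same voxels (an illustrative sketch only; the voxel values below are hypothetical):

```python
import math

def pattern_correlation(p, q):
    """Pearson correlation between two multi-voxel activation patterns."""
    n = len(p)
    mp = sum(p) / n
    mq = sum(q) / n
    cov = sum((a - mp) * (b - mq) for a, b in zip(p, q))
    sp = math.sqrt(sum((a - mp) ** 2 for a in p))
    sq = math.sqrt(sum((b - mq) ** 2 for b in q))
    return cov / (sp * sq)

# Hypothetical left-fusiform activation patterns (arbitrary units):
real_chars   = [1.2, 0.4, 2.1, 0.9, 1.7, 0.3]
radicals     = [1.1, 0.5, 1.9, 1.0, 1.6, 0.4]  # similar spatial profile
line_drawing = [0.3, 1.8, 0.5, 1.9, 0.4, 1.7]  # different spatial profile

r_similar   = pattern_correlation(real_chars, radicals)
r_different = pattern_correlation(real_chars, line_drawing)
```

A high correlation between the character and radical patterns, alongside a low or negative correlation with the line-drawing pattern, is the kind of dissociation that amplitude alone can miss.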

Relevância:

10.00%

Publicador:

Resumo:

Children’s understanding of deontic rules and theory of mind (ToM) are two major research domains in children’s social cognition, and combining research from the two is significant for understanding children’s social cognition. Children aged 3, 5 and 7 years were asked to answer three questions about stories set in contexts familiar to children. The three questions were designed to address three problems: ⑴ the development of 3- to 7-year-old children’s understanding of how deontic rules are enacted or changed; ⑵ the development of their understanding that both deontic rules and an actor’s mental states can influence the actor’s behavior; ⑶ the development of their capacity to integrate deontic rules and mental states when evaluating an actor’s behavior. The results showed that: ① The 3- to 7-year-old children knew that deontic rules are established by an authority’s speech act, but some irrelevant factors, such as the authority’s desire, still influenced their judgments. ② The children gradually recognized the relationship between what actors should do and what they will do. Three-year-olds could recognize this relationship to a degree, but their predictions were often influenced by irrelevant factors; the 5- and 7-year-olds understood it more reliably. ③ In a deontic context, with increasing age more and more children predicted the actors’ behaviors according to the actors’ mental states. The proportion of 3- to 7-year-old children who predicted the actors’ behavior according to the actors’ false belief about the deontic rules was smaller in the deontic context than in the traditional false belief task. This may indicate that a deontic context constrains children’s inferences more strongly than a physical context.
④ When given both the actors’ desires and the deontic rules, all the children predicted the actors’ behaviors according to the desires rather than the rules, which means all the children understood that an actor’s desire mediates between the deontic rules and behavior. But when the actors wanted to transgress the deontic rules, all the children’s predictions became less accurate. ⑤ When assigning criticism, more and more children could discriminate between behaviors arising from different mental states even though all the behaviors transgressed the deontic rules. Most children, however, overweighted the deontic rules and overlooked the actors’ mental states about them: their criticism of rule-transgressing behaviors differed only in degree according to the mental state involved. That is, if the actors knew the rules or intended to transgress them, they were punished more; if the actors did not know the rules or transgressed them accidentally, they were punished less.

Relevância:

10.00%

Publicador:

Resumo:

Model-based object recognition commonly involves using a minimal set of matched model and image points to compute the pose of the model in image coordinates. Furthermore, recognition systems often rely on the "weak-perspective" imaging model in place of the perspective imaging model. This paper discusses computing the pose of a model from three corresponding points under weak-perspective projection. A new solution to the problem is proposed which, like previous solutions, involves solving a biquadratic equation. Here the biquadratic is motivated geometrically, and its solutions, comprising an actual and a false solution, are interpreted graphically. The final equations take a new form, which leads to a simple expression for the image position of any unmatched model point.
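Solving the kind of biquadratic that arises here reduces to a quadratic in u = x^2, whose two roots correspond to the pair of candidate solutions (a generic sketch; the coefficients below are arbitrary, not the paper's geometric ones):

```python
import math

def solve_biquadratic(a, b, c):
    """Real roots of a*x**4 + b*x**2 + c = 0 via the substitution u = x**2."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return []                       # no real roots of the quadratic in u
    roots = []
    for u in ((-b + math.sqrt(disc)) / (2 * a),
              (-b - math.sqrt(disc)) / (2 * a)):
        if u >= 0:                      # x**2 must be non-negative
            r = math.sqrt(u)
            roots.extend([r, -r] if r > 0 else [0.0])
    return sorted(roots)

# x**4 - 5*x**2 + 4 = (x**2 - 1)*(x**2 - 4): roots are +/-1 and +/-2.
rts = solve_biquadratic(1.0, -5.0, 4.0)
```

In the pose problem, distinguishing the actual solution from the false one among such a pair is exactly what the paper's geometric interpretation addresses.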

Relevância:

10.00%

Publicador:

Resumo:

A unique matching is a stated objective of most computational theories of stereo vision. This report describes situations in which humans perceive a small number of surfaces arising from non-unique matchings of random-dot patterns, even though a unique solution exists and is observed unambiguously in the perception of isolated features. We find both cases where non-unique matchings compete and suppress each other and cases where they are all perceived as transparent surfaces. The circumstances under which each behavior occurs are discussed and a possible explanation is sketched. It appears that matching reduces many false targets to a few, but may still yield multiple solutions in some cases through a (possibly different) process of surface interpolation.

Relevância:

10.00%

Publicador:

Resumo:

Three-dimensional models which contain both geometry and texture have numerous applications such as urban planning, physical simulation, and virtual environments. A major focus of computer vision (and recently graphics) research is the automatic recovery of three-dimensional models from two-dimensional images. After many years of research this goal is yet to be achieved. Most practical modeling systems require substantial human input and, unlike automatic systems, are not scalable. This thesis presents a novel method for automatically recovering dense surface patches using large sets (thousands) of calibrated images taken from arbitrary positions within the scene. Physical instruments, such as the Global Positioning System (GPS), inertial sensors, and inclinometers, are used to estimate the position and orientation of each image. Essentially, the problem is to find corresponding points in each of the images. Once a correspondence has been established, calculating its three-dimensional position is simply a matter of geometry. Long-baseline images improve the accuracy, while short-baseline images and the large number of images greatly simplify the correspondence problem. The initial stage of the algorithm is completely local and scales linearly with the number of images. Subsequent stages are global in nature, exploit geometric constraints, and scale quadratically with the complexity of the underlying scene. We describe techniques for: 1) detecting and localizing surface patches; 2) refining camera calibration estimates and rejecting false-positive surface elements (surfels); and 3) grouping surface patches into surfaces and growing the surface along a two-dimensional manifold. We also discuss a method for producing high quality, textured three-dimensional models from these surfaces. 
Some of the most important characteristics of this approach are that it: 1) uses and refines noisy calibration estimates; 2) compensates for large variations in illumination; 3) tolerates significant soft occlusion (e.g. tree branches); and 4) associates, at a fundamental level, an estimated normal (i.e. no frontal-planar assumption) and texture with each surface patch.