964 results for Sensory inputs
Abstract:
Suspension aquaculture of filter-feeding bivalves has been developing rapidly in coastal waters around the world, especially in China. Previous studies have demonstrated that dense populations of filter-feeding bivalves in shallow water can produce a large amount of faeces and pseudofaeces (biodeposits) that may negatively affect the benthic environment. To determine whether the deposit feeder Stichopus (Apostichopus) japonicus Selenka can feed on bivalve biodeposits and whether this sea cucumber can be co-cultured with bivalves in suspended lantern nets, three experiments were conducted: two in laboratory tanks and one in the field. In a 3-month flow-through experiment, sea cucumbers grew well, with a specific growth rate (SGR) reaching 1.38% d⁻¹, when cultured at the bottom of tanks (10 m³ water volume) in which scallops were cultured in suspended lantern nets. Moreover, a second laboratory experiment demonstrated that sea cucumbers could survive well on bivalve biodeposits, with a feeding rate of 1.82 ± 0.13 g dry biodeposits ind⁻¹ d⁻¹, an absorption efficiency of organic matter in biodeposits of 17.2% ± 5.5%, and an average SGR of 1.60% d⁻¹. Longer-term field experiments in two coastal bays (Sishili Bay and Jiaozhou Bay, northern China) showed that S. japonicus co-cultured with bivalves also grew well, at growth rates of 0.09-0.31 g wet weight ind⁻¹ d⁻¹ depending on individual size. The results suggest that bivalve lantern nets can provide a good habitat for sea cucumbers, and that co-culture of bivalve molluscs with sea cucumbers may provide an additional valuable crop with no additional inputs. (c) 2006 Elsevier B.V. All rights reserved.
Abstract:
In coastal ecosystems, suspension-cultured bivalve filter feeders may exert a strong impact on phytoplankton and other suspended particulate matter and induce strong pelagic-benthic coupling via intense filtering and biodeposition. We designed an in situ method to determine spatial variations in the filtering-biodeposition process of intensively suspension-cultured scallops Chlamys farreri in summer in a eutrophic bay (Sishili Bay, China), using cylindrical biodeposition traps suspended directly from longlines under ambient environmental conditions. Results showed that bivalve filtering-biodeposition could substantially enhance the deposition of total suspended material and the flux of C, N and P to the benthos, indicating that the suspended filter feeders could strongly enhance pelagic-benthic coupling and exert basin-scale impacts in the Sishili Bay ecosystem. The biodeposition rates of 1-yr-old scallops varied markedly among culture sites (33.8 to 133.0 mg dry material ind.⁻¹ d⁻¹), and were positively correlated with seston concentrations. Mean C, N and P biodeposition rates were 4.00, 0.51 and 0.11 mg ind.⁻¹ d⁻¹, respectively. The biodeposition rates of 2-yr-old scallops were almost double these values. Sedimentation rates at scallop culture sites averaged 2.46 times that at the reference site. Theoretically, the total water column of the bay could be filtered by the cultured scallops in 12 d, with daily seston removal amounting to 64%. This study indicated that filtering-biodeposition by suspension-cultured scallops could exert long-lasting top-down control on phytoplankton biomass and other suspended material in the Sishili Bay ecosystem. In coastal waters subject to anthropogenic N and P inputs, suspended bivalve aquaculture could be advantageous, not only economically but also ecologically, by functioning as a biofilter and potentially mitigating eutrophication pressures.
Compared with distribution-restricted wild bivalves, suspension-cultured bivalves in deeper coastal bays may be more efficient in processing seston on a basin scale.
Abstract:
Phosphorus is a key element and plays an important role in global biogeochemical cycles. The evolution of the sedimentary environment is also influenced by phosphorus concentrations and fractions, as well as by the phosphate-sorption characteristics of marine sediments. The geochemical characteristics of phosphorus and their environmental records in Jiaozhou Bay sediments are presented here. Profiles of the different forms of phosphorus were measured, and the roles and vertical distributions of these forms in response to changes in the sedimentary environment were investigated. The results showed that inorganic phosphorus (IP) was the major fraction of total phosphorus (TP); phosphorus bound to calcium and iron, occluded phosphorus, and exchangeable phosphorus were the main forms of IP. Calcium-bound phosphorus in particular, including detrital carbonate-bound phosphorus (Det-P) and authigenic apatite-bound phosphorus (ACa-P), is the uppermost constituent of IP in Jiaozhou Bay sediments. Moreover, lead-210 chronology was employed to estimate how much phosphorus was ultimately buried in the sediments. The research showed that the impact of human activities has increased remarkably in recent years, especially between the 1980s and 2000. The development of the Jiaozhou Bay environment over the past hundred years can be divided into three stages: (1) before the 1980s, characterized by a relatively low sedimentation rate, weak land-derived phosphorus inputs and low anthropogenic impacts; (2) from the 1980s to around 2000, accelerating in the 1990s, with high sedimentation rates and high phosphorus abundance and burial fluxes as severe human-activity impacts affected the whole environmental system; (3) after 2000, a period of environmental improvement, during which sedimentation rates and the concentration and burial fluxes of phosphorus all decreased.
Abstract:
To improve the ability of an autonomous underwater vehicle (AUV) to cross slope-type and step-type obstacles in the vertical plane, a fuzzy-control-based vertical-plane collision-avoidance planning method is proposed. The method takes the output of the collision-avoidance sonar as its input and the depth adjustment in the vertical plane as its output, directly establishing a mapping from obstacle perception to avoidance behavior. Semi-physical simulation experiments show that the method effectively improves the vertical-plane maneuvering performance of a certain type of AUV, and offers rapid response, good stability, and ease of engineering implementation.
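The mapping the abstract describes, from sonar-perceived obstacle distance to a vertical depth adjustment, can be sketched as a minimal fuzzy controller. The membership functions, rule consequents and distance ranges below are illustrative assumptions, not the authors' actual design:

```python
# Minimal fuzzy mapping from sonar-measured obstacle distance (m) to a
# vertical depth adjustment (m of climb). Membership functions and rule
# consequents are illustrative assumptions, not the authors' controller.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def depth_adjustment(distance_m):
    """Weighted-average (centroid) defuzzification over three rules:
    near obstacle -> large climb, medium -> small climb, far -> none."""
    near = tri(distance_m, -1.0, 0.0, 10.0)
    mid = tri(distance_m, 5.0, 12.5, 20.0)
    far = tri(distance_m, 15.0, 30.0, 45.0)
    num = near * 5.0 + mid * 2.0 + far * 0.0   # rule consequents in metres
    den = near + mid + far
    return num / den if den else 0.0
```

A nearby obstacle (1 m) then yields the full 5 m climb while anything beyond 30 m yields none; a real controller would add further inputs (e.g. closing rate) and tuned rule tables.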
Abstract:
Because spectrometric oil-analysis monitoring data for engines contain too many types of wear particles, feeding this particle information directly into a neural network leads to problems such as an excessive number of input-layer neurons and an overly complex network structure. This paper introduces rough sets into engine fault diagnosis: exploiting their strength in attribute reduction, redundant wear particles are removed and the important ones are extracted and used as inputs to a BP neural network, establishing an engine fault-diagnosis model. The method reduces the number of input-layer neurons, simplifies the network structure and shortens training time; moreover, by eliminating redundant wear particles, it reduces the error caused by inaccurate information from those particles and effectively improves diagnostic accuracy. Finally, a worked example verifies the accuracy and effectiveness of the algorithms and the diagnostic model.
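The attribute-reduction step described above can be sketched with a toy rough-set reduction: drop any wear-particle attribute whose removal leaves the decision table consistent. The table, fault labels and greedy strategy are illustrative assumptions, not the paper's data or exact algorithm:

```python
# Toy sketch of rough-set attribute reduction: drop any wear-particle
# attribute whose removal leaves the decision table consistent (equal
# feature values never map to different fault labels). The table,
# labels and greedy strategy are illustrative, not the paper's data.

def consistent(table, attrs):
    """True if rows agreeing on attrs always share the same label."""
    seen = {}
    for row, label in table:
        key = tuple(row[a] for a in attrs)
        if seen.setdefault(key, label) != label:
            return False
    return True

def reduce_attributes(table, attrs):
    """Greedily drop each attribute whose removal keeps consistency."""
    kept = list(attrs)
    for a in list(attrs):
        trial = [x for x in kept if x != a]
        if trial and consistent(table, trial):
            kept = trial
    return kept

# Toy decision table: (wear-particle feature values, fault label).
table = [({0: 1, 1: 0, 2: 1}, "normal"),
         ({0: 1, 1: 1, 2: 0}, "wear"),
         ({0: 0, 1: 1, 2: 1}, "fault")]
reduct = reduce_attributes(table, [0, 1, 2])   # a smaller consistent subset
```

The surviving attributes would then form the BP network's input layer, shrinking it as the abstract describes.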
Abstract:
Rivers are a major component of the global surface-water and CO2 cycles. The chemistry of river waters reveals the nature of weathering on a basin-wide scale and helps us understand the exogenic cycles of elements in the continent-river-ocean system. In particular, geochemical investigation of large rivers gives important information on the biogeochemical cycles of the elements, chemical weathering rates, physical erosion rates and CO2 consumption during the weathering of the rocks within the drainage basin. This importance has led to a number of detailed geochemical studies of some of the world's large and medium-size river systems. Flowing in the south of China, the Xijiang River is the second largest river in China with respect to discharge, after the Yangtze River. Its headwaters drain the YunGui Plateau, where the altitude is approximately 2000 m. Geologically, carbonate rocks are widespread in the river drainage basin, covering an area of about 0.17×10⁶ km², i.e., 39% of the whole drainage basin. This study focuses on the chemistry of the Xijiang river system and constitutes the first geochemical investigation of major and trace element concentrations in both the suspended and dissolved loads of this river and its main tributaries; the Sr isotopic composition of the dissolved load is also investigated, in order to determine both chemical weathering and mechanical erosion rates. Compared with the other large rivers of the world, the Xijiang River is characterized by higher major-element concentrations. The dissolved major cations average 1.17, 0.33, 0.15 and 0.04 mmol l⁻¹ for Ca, Mg, Na and K, respectively. The total cation concentrations (TZ⁺) in these rivers vary between 2.2 and 4.4 meq l⁻¹.
The high concentrations of Ca and Mg, the high (Ca+Mg)/(Na+K) ratio (7.9), the enormous alkalinity and the low dissolved SiO2/HCO3 ratio (0.05) in the river waters reveal the importance of carbonate weathering, and the relative weakness of silicate weathering, over the river drainage basin. The major elements in river water, such as the alkalis and alkaline earths, are of different origins: rain water, silicate weathering, and carbonate and evaporite weathering. A mixing model based on mass-budget equations is used in this study, which allows the proportion of each element derived from the different sources to be calculated. Carbonate weathering is the main source of these elements in the Xijiang drainage basin. The contribution of rainwater, especially for Na, reaches approximately 50% in some tributaries. After the dissolved concentrations are corrected for rain inputs (mainly oceanic salts), the concentrations derived from the weathering of the different rocks are calculated. From these, silicate, carbonate and total rock weathering rates, together with the rates of atmospheric CO2 consumption by weathering of each lithology, have been estimated. They yield specific chemical erosion rates of 5.1-17.8 t/km²/yr for silicate, 95.5-157.2 t/km²/yr for carbonate, and 100.6-169.1 t/km²/yr for total rock. CO2 consumption by silicate and carbonate weathering approaches 13×10⁹ and 270.5×10⁹ mol/yr, respectively. Mechanical denudation rates deduced from the multi-year average of suspended load concentrations range from 92 to 874 t/km²/yr. The high denudation rates are mainly attributable to high relief and heavy rainfall. Acid rain, which is very frequent in the drainage basin (its frequency may exceed 50%, with rainwater pH values below 4.0) and results from SO2 pollution of the atmosphere, dissolves carbonates and aluminosilicates and hence accelerates the chemical erosion rate.
The mineralogical and elemental compositions of the suspended particulate matter are also investigated. The most soluble elements (e.g. Ca, Na, Sr, Mg) are strongly depleted in the suspended phase with respect to the upper continental crust, reflecting the high intensity of rock weathering in the drainage basin. Some elements (e.g. Pb, Cu, Co, Cr) show positive anomalies; Pb/Th ratios in suspended matter approach 7 times (Liu Jiang) to 10 times (Nanpan Jiang) the crustal value. The enrichment of these elements in suspended matter reflects the intensity both of anthropogenic pollution and of adsorption processes onto particles. The soluble fraction of the rare earth elements (REE) in the river is low, and the REE mainly reside in the particulate phase. In the dissolved phase, the PAAS-normalized distribution patterns show significant HREE enrichment, with (La/Yb)SN = 0.26-0.94, and Ce depletion, with (Ce/Ce*)SN = 0.31-0.98; the most pronounced negative Ce anomalies occur in rivers of high pH. In the suspended phase, the rivers have LREE-enriched patterns relative to PAAS, with (La/Yb)SN = 1.00-1.40. The results suggest that pH is a major factor controlling both the absolute abundances of REE in solution and the fractionation of REE in the dissolved phase. Ce depletion in river waters with high pH values probably results both from preferential removal of Ce onto the Fe-Mn oxide coatings of particles and from CeO2 sedimentation. This process is known to occur in the marine environment and may also occur in high-pH rivers. Positive correlations are also observed between the La/Yb ratio and DOC, HCO3⁻ and PO4³⁻, suggesting that colloids and/or adsorption processes play an important role in controlling these elements.
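The rain-input correction used in such mass-budget mixing models is commonly done with Cl as a conservative sea-salt tracer. The sketch below is a generic, hedged illustration of that step: the seawater element/Cl molar ratios are approximate literature values and the rain-derived Cl concentration is hypothetical, not the thesis's data.

```python
# Generic sketch of the rain-input (sea-salt) correction step in a
# mass-budget mixing model: subtract the atmospheric contribution of
# each cation using Cl as a conservative tracer. Seawater element/Cl
# molar ratios are approximate literature values; the rain-derived Cl
# concentration below is hypothetical, not the thesis's data.

SEAWATER_RATIO = {"Na": 0.86, "Mg": 0.097, "Ca": 0.019, "K": 0.019}  # mol/mol vs Cl

def rain_corrected(river_conc, cl_rain):
    """river_conc: measured dissolved concentrations (mmol/l);
    cl_rain: rain-derived Cl in the river water (mmol/l)."""
    return {el: river_conc[el] - SEAWATER_RATIO[el] * cl_rain
            for el in SEAWATER_RATIO}

# Mean Xijiang cation concentrations from the abstract (mmol/l), with a
# hypothetical 0.02 mmol/l of rain-derived Cl:
corrected = rain_corrected({"Ca": 1.17, "Mg": 0.33, "Na": 0.15, "K": 0.04}, 0.02)
```

The corrected concentrations are then apportioned among silicate, carbonate and evaporite weathering, as the abstract describes.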
Abstract:
The transport and deposition of the eolian material of the Chinese loess are correlated with, and affected by, the monsoon from the mid-high latitudes. Study of the winter monsoon's evolution can therefore help us understand the dynamic mechanism of climate change in East Asia. Anisotropy of magnetic susceptibility (AMS) measurements were carried out on samples from the last 250 ka of the wind-blown loess-paleosol sequences at Baicaoyuan and Luochuan. The main conclusions are as follows:
1. The magnetic foliation of the two sections is almost horizontal. The AMS can thus be represented by an oblate ellipsoid with the average K3 axis perpendicular to the bedding plane and K1 within the bedding plane. The q-factor is less than 0.5 for the majority of samples, so both sections show a normal magnetic fabric for sediments.
2. The degree of anisotropy always shows a strong correlation with the foliation rather than with the lineation; the anisotropy is therefore controlled by the foliation. Furthermore, the foliation is nearly always less than 1.02, typical of the anisotropy of wind-blown sediments.
3. The intensity of the winter monsoon, the grain size of the eolian inputs, the foliation and the degree of anisotropy are somewhat inter-related. Generally, a stronger winter monsoon carries coarser-grained eolian material, resulting in a larger foliation during deposition. Post-depositional compaction also contributes to the anisotropy.
4. The AMS features of loess and paleosol differ somewhat: the F and P values of a paleosol are lower than those of its parent loess. The two sections also differ from each other: the anisotropy at Baicaoyuan is more significant than at the Luochuan section, which may be related to location and to the intensity of post-depositional reworking.
5. The declination of the long axis in the Baicaoyuan section is NWW, and this AMS-derived NWW direction of the winter monsoon winds is consistent with the view that the winter monsoon prevails along the NW-SE direction. At the Luochuan section, however, because of strong post-depositional reworking, the direction of the long axis is nearly random within the foliation, and the paleowind direction since the last two interglacials can hardly be recognized.
Correlation between the two loess-paleosol sequences implies that, in arid or semi-arid areas, AMS can be used to recognize paleowind directions on the Loess Plateau.
Abstract:
Both multilayer perceptrons (MLP) and Generalized Radial Basis Functions (GRBF) have good approximation properties, theoretically and experimentally. Are they related? The main point of this paper is to show that for normalized inputs, multilayer perceptron networks are radial function networks (albeit with a non-standard radial function). This provides an interpretation of the weights w as centers t of the radial function network, and therefore as equivalent to templates. This insight may be useful for practical applications, including better initialization procedures for MLP. In the remainder of the paper, we discuss the relation between the radial functions that correspond to the sigmoid for normalized inputs and well-behaved radial basis functions, such as the Gaussian. In particular, we observe that the radial function associated with the sigmoid is a good approximation to Gaussian basis functions for a range of values of the bias parameter. The implication is that an MLP network can always simulate a Gaussian GRBF network (with the same number of units but fewer parameters); the converse is true only for certain values of the bias parameter. Numerical experiments indicate that this constraint is not always satisfied in practice by MLP networks trained with backpropagation. Multiscale GRBF networks, on the other hand, can approximate MLP networks with a similar number of parameters.
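The identity behind this equivalence is easy to check numerically: for unit-norm inputs, w·x = (‖w‖² + 1 − ‖w − x‖²)/2, so a sigmoid unit's output depends on x only through the distance ‖w − x‖. A small sketch, with arbitrary weights rather than values from the paper:

```python
# Numeric check of the identity behind the claim above: for unit-norm
# inputs x, w·x = (||w||^2 + 1 - ||w - x||^2) / 2, so sigmoid(w·x + b)
# depends on x only through the radial distance ||w - x||.
# Weights and bias here are arbitrary, not values from the paper.
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
w = [0.5, -1.0, 2.0]
b = 0.3
wn2 = sum(c * c for c in w)                    # ||w||^2

for _ in range(5):
    v = [random.gauss(0, 1) for _ in range(3)]
    n = math.sqrt(sum(c * c for c in v))
    x = [c / n for c in v]                     # normalized input, ||x|| = 1
    dot = sum(wc * xc for wc, xc in zip(w, x))
    d2 = sum((wc - xc) ** 2 for wc, xc in zip(w, x))
    # The sigmoid response rewritten as a radial function of ||w - x||^2:
    assert abs(sigmoid(dot + b) - sigmoid((wn2 + 1 - d2) / 2 + b)) < 1e-12
```

The assertion holds for every normalized x, which is exactly the sense in which a sigmoid unit acts as a (non-standard) radial unit centered at w.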
Abstract:
We describe a software package for computing and manipulating the subdivision of a sphere by a collection of (not necessarily great) circles and for computing the boundary surface of the union of spheres. We present problems that arise in the implementation of the software and the solutions that we have found for them. At the core of the paper is a novel perturbation scheme to overcome degeneracies and precision problems in computing spherical arrangements while using floating-point arithmetic. The scheme is relatively simple, it balances the efficiency of computation against the magnitude of the perturbation, and it performs well in practice. In one O(n)-time pass through the data, it perturbs the inputs as necessary to ensure that no potential degeneracies remain, and then passes the perturbed inputs on to the geometric algorithm. We report and discuss experimental results. Our package is a major component in a larger package aimed to support geometric queries on molecular models; it is currently employed by chemists working in "rational drug design." The spherical subdivisions are used to construct a geometric model of a molecule where each sphere represents an atom. We also give an overview of the molecular modeling package and detail additional features and implementation issues.
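As a hedged illustration of the input-perturbation idea, one can nudge the radius of any sphere that is exactly tangent to another, a classic degeneracy for arrangement code. This quadratic-time toy only illustrates the principle; the package's actual scheme runs in a single O(n) pass and bounds the perturbation magnitude:

```python
# Naive sketch of input perturbation: nudge the radius of any sphere
# exactly tangent to an earlier one (a degenerate configuration for the
# arrangement computation) by a tiny epsilon. Illustrative only; the
# package's actual scheme is a single O(n) pass with bounded magnitude.
import math

EPS = 1e-9

def perturb(spheres, tol=1e-12):
    """spheres: list of (x, y, z, r) tuples; returns a perturbed copy."""
    out = list(spheres)
    for i, (xi, yi, zi, ri) in enumerate(out):
        for (xj, yj, zj, rj) in out[:i]:
            d = math.dist((xi, yi, zi), (xj, yj, zj))
            # External (d = ri + rj) or internal (d = |ri - rj|) tangency.
            if abs(d - (ri + rj)) < tol or abs(d - abs(ri - rj)) < tol:
                ri += EPS
        out[i] = (xi, yi, zi, ri)
    return out
```

After perturbation, no pair of spheres is tangent within tolerance, so the downstream arrangement code never has to handle that degenerate case.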
Abstract:
This thesis describes an investigation of retinal directional selectivity. We show intracellular (whole-cell patch) recordings in turtle retina which indicate that this computation occurs prior to the ganglion cell, and we describe a pre-ganglionic circuit model to account for this and other findings which places the non-linear spatio-temporal filter at individual, oriented amacrine cell dendrites. The key non-linearity is provided by interactions between excitatory and inhibitory synaptic inputs onto the dendrites, and their distal tips provide directionally selective excitatory outputs onto ganglion cells. Detailed simulations of putative cells support this model, given reasonable parameter constraints. The performance of the model also suggests that this computational substructure may be relevant within the dendritic trees of CNS neurons in general.
Abstract:
Control of machines that exhibit flexibility becomes important when designers attempt to push the state of the art with faster, lighter machines. Three steps are necessary for the control of a flexible plant. First, a good model of the plant must exist. Second, a good controller must be designed. Third, inputs to the controller must be constructed using knowledge of the system dynamic response. There is a great deal of literature pertaining to modeling and control but little dealing with the shaping of system inputs. Chapter 2 examines two input-shaping techniques based on frequency-domain analysis. The first involves the use of the first derivative of a Gaussian exponential as a driving-function template. The second, acausal filtering, involves removal of energy from the driving functions at the resonant frequencies of the system. Chapter 3 presents a linear programming technique for generating vibration-reducing driving functions for systems. Chapter 4 extends the results of the previous chapter by developing a direct solution to the new class of driving functions. A detailed analysis of the new technique is presented from five different perspectives and several extensions are presented. Chapter 5 verifies the theories of the previous two chapters with hardware experiments. Because the new technique resembles common signal filtering, Chapter 6 compares the new approach to eleven standard filters. The new technique will be shown to result in less residual vibration, better robustness to system-parameter uncertainty, and less computation than other currently used shaping techniques.
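A standard example of such vibration-reducing command shaping, though not necessarily the thesis's own construction, is the two-impulse zero-vibration (ZV) shaper: convolving any command with two appropriately timed and scaled impulses cancels the residual vibration of a known mode.

```python
# Textbook two-impulse zero-vibration (ZV) input shaper: a standard
# instance of vibration-reducing command shaping, not necessarily the
# thesis's exact construction. wn is the mode's natural frequency
# (rad/s) and zeta its damping ratio.
import math

def zv_shaper(wn, zeta):
    """Return (amplitude, time) impulse pairs for a ZV shaper."""
    wd = wn * math.sqrt(1 - zeta ** 2)          # damped natural frequency
    K = math.exp(-zeta * math.pi / math.sqrt(1 - zeta ** 2))
    return [(1 / (1 + K), 0.0),                 # impulse amplitudes sum to 1
            (K / (1 + K), math.pi / wd)]        # second impulse: half period

def shape(command, shaper, dt):
    """Convolve a sampled command with the shaper's impulses."""
    n = len(command)
    out = [0.0] * n
    for amp, t in shaper:
        k = round(t / dt)                       # impulse delay in samples
        for i in range(n):
            if i + k < n:
                out[i + k] += amp * command[i]
    return out
```

Because the amplitudes sum to one, the shaped command reaches the same final value as the original; it simply arrives in two staggered parts whose induced vibrations cancel.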
Abstract:
A fundamental problem in artificial intelligence is obtaining coherent behavior in rule-based problem solving systems. A good quantitative measure of coherence is time behavior; a system that never, in retrospect, applied a rule needlessly is certainly coherent; a system suffering from combinatorial blowup is certainly behaving incoherently. This report describes a rule-based problem solving system for automatically writing and improving numerical computer programs from specifications. The specifications are in terms of "constraints" among inputs and outputs. The system has solved program synthesis problems involving systems of equations, determining that methods of successive approximation converge, transforming recursion to iteration, and manipulating power series (using differing organizations, control structures, and argument-passing techniques).
Abstract:
The constraint paradigm is a model of computation in which values are deduced whenever possible, under the limitation that deductions be local in a certain sense. One may visualize a constraint 'program' as a network of devices connected by wires. Data values may flow along the wires, and computation is performed by the devices. A device computes using only locally available information (with a few exceptions), and places newly derived values on other, locally attached wires. In this way computed values are propagated. An advantage of the constraint paradigm (not unique to it) is that a single relationship can be used in more than one direction. The connections to a device are not labelled as inputs and outputs; a device will compute with whatever values are available, and produce as many new values as it can. General theorem provers are capable of such behavior, but tend to suffer from combinatorial explosion; it is not usually useful to derive all the possible consequences of a set of hypotheses. The constraint paradigm places a certain kind of limitation on the deduction process. The limitations imposed by the constraint paradigm are not the only ones possible. It is argued, however, that they are restrictive enough to forestall combinatorial explosion in many interesting computational situations, yet permissive enough to allow useful computations in practical situations. Moreover, the paradigm is intuitive: it is easy to visualize the computational effects of these particular limitations, and the paradigm is a natural way of expressing programs for certain applications, in particular for relationships arising in computer-aided design. A number of implementations of constraint-based programming languages are presented. A progression of ever more powerful languages is described, complete implementations are presented, and design difficulties and alternatives are discussed.
The goal approached, though not quite reached, is a complete programming system which will implicitly support the constraint paradigm to the same extent that LISP, say, supports automatic storage management.
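The wires-and-devices model described above can be sketched in a few lines: a device's terminals are not labelled as inputs and outputs, so the same adder constraint runs in any direction. A toy illustration, not the thesis's language implementation:

```python
# Toy sketch of the wires-and-devices constraint model: an adder
# "device" with three unlabelled terminals enforcing a + b = c. It
# computes with whatever values are available and propagates the rest,
# in either direction. (Illustration only, not the thesis's languages;
# contradiction detection and retraction are omitted.)

class Wire:
    def __init__(self):
        self.value = None
        self.devices = []

    def set(self, v):
        if self.value is None:                  # first value wins in this toy
            self.value = v
            for d in self.devices:              # propagate to attached devices
                d.update()

class Adder:
    """Constraint a + b = c; no terminal is inherently input or output."""
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c
        for w in (a, b, c):
            w.devices.append(self)

    def update(self):
        a, b, c = self.a.value, self.b.value, self.c.value
        if a is not None and b is not None and c is None:
            self.c.set(a + b)
        elif a is not None and c is not None and b is None:
            self.b.set(c - a)
        elif b is not None and c is not None and a is None:
            self.a.set(c - b)

a, b, c = Wire(), Wire(), Wire()
Adder(a, b, c)
c.set(10); a.set(4)        # run the constraint "backwards"
print(b.value)             # -> 6
```

Setting any two wires deduces the third, which is exactly the single-relationship-many-directions property the abstract emphasizes.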
Abstract:
This paper describes an experiment developed to study the performance of virtual-agent animated cues within digital interfaces. Increasingly, agents are used in virtual environments as part of the branding process and to guide user interaction. However, the level of agent detail required to establish and enhance efficient allocation of attention remains unclear. Although complex agent motion is now possible, it is costly to implement and so should only be routinely implemented if a clear benefit can be shown. Previous methods of assessing the effect of gaze-cueing as a solution to scene complexity have relied principally on two-dimensional static scenes and manual peripheral inputs. Two experiments were run to address the question of agent cues on human-computer interfaces. Both experiments measured the efficiency of agent cues, analyzing participant responses by gaze and by touch, respectively. In the first experiment, an eye-movement recorder was used to directly assess the immediate overt allocation of attention by capturing the participant's eye fixations following presentation of a cueing stimulus. We found that a fully animated agent could speed up user interaction with the interface. When user attention was directed using a fully animated agent cue, users responded 35% faster than with stepped 2-image agent cues, and 42% faster than with a static 1-image cue. The second experiment recorded participant responses on a touch screen using the same agent cues. Analysis of touch inputs confirmed the results of the gaze experiment: the fully animated agent again produced the shortest response times, though with slightly smaller differences between conditions. Responses to the fully animated agent were 17% and 20% faster than with the 2-image and 1-image cues, respectively.
These results inform techniques aimed at engaging users’ attention in complex scenes such as computer games and digital transactions within public or social interaction contexts by demonstrating the benefits of dynamic gaze and head cueing directly on the users’ eye movements and touch responses.
Abstract:
We propose a new notion of cryptographic tamper evidence. A tamper-evident signature scheme provides an additional procedure Div which detects tampering: given two signatures, Div can determine whether one of them was generated by the forger. Surprisingly, this is possible even after the adversary has inconspicuously learned (exposed) some, or even all, of the secrets in the system. In this case, it might be impossible to tell which signature is generated by the legitimate signer and which by the forger. But at least the fact of the tampering will be made evident. We define several variants of tamper-evidence, differing in their power to detect tampering. In all of these, we assume an equally powerful adversary: she adaptively controls all the inputs to the legitimate signer (i.e., all messages to be signed and their timing), and observes all his outputs; she can also adaptively expose all the secrets at arbitrary times. We provide tamper-evident schemes for all the variants and prove their optimality. Achieving the strongest tamper evidence turns out to be provably expensive. However, we define a somewhat weaker, but still practical, variant: α-synchronous tamper-evidence (α-te) and provide α-te schemes with logarithmic cost. Our α-te schemes use a combinatorial construction of α-separating sets, which might be of independent interest. We stress that our mechanisms are purely cryptographic: the tamper-detection algorithm Div is stateless and takes no inputs except the two signatures (in particular, it keeps no logs), we use no infrastructure (or other ways to conceal additional secrets), and we use no hardware properties (except those implied by the standard cryptographic assumptions, such as random number generators). Our constructions are based on arbitrary ordinary signature schemes and do not require random oracles.