878 results for sets of words
Abstract:
River training walls have been built at scores of locations along the NSW coast, and their impacts on shoreline change are still not fully understood. In this study, the Brunswick River entrance and adjacent beaches are selected to examine the impact of the construction of major training walls. Thirteen sets of aerial photographs taken between 1947 and 1994 are used in a GIS approach to accurately determine the shoreline position, beach contours and sand volumes, and their changes in both time and space, and then to assess the contribution of both the structures and natural hydrodynamic conditions to large-scale (years to decades, kilometres) beach changes. The impact of the training walls can be divided into four stages: natural conditions prior to their construction (pre-1959); major downdrift erosion and updrift accretion during and following the construction of the walls in 1959-1962 and 1966; diminishing impact of the walls between 1966 and 1987; and finally no apparent impact between 1987 and 1994. The impact extends horizontally about 8 km updrift and 17 km downdrift, and temporally up to 25 years.
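To make the kind of shoreline-change arithmetic such a GIS analysis rests on concrete, here is a minimal sketch; the transect positions and survey dates below are illustrative assumptions, not the study's data:

```python
# Minimal sketch of a shoreline-change calculation at one transect.
# Positions and dates are illustrative, not the Brunswick River data.
import numpy as np

# Hypothetical cross-shore shoreline positions (m seaward) digitized
# from successive aerial photographs at a single transect.
years = np.array([1947, 1959, 1962, 1966, 1973, 1987, 1994], dtype=float)
positions = np.array([102.0, 101.5, 78.0, 85.0, 92.0, 98.5, 100.0])

# Long-term trend: least-squares rate of shoreline movement (m/yr).
rate, intercept = np.polyfit(years, positions, 1)
print(f"net shoreline change rate: {rate:+.2f} m/yr")

# Epoch-by-epoch change, exposing an erosion/recovery signal like the
# staged response described in the abstract.
for (y0, y1), dp in zip(zip(years, years[1:]), np.diff(positions)):
    print(f"{y0:.0f}-{y1:.0f}: {dp:+.1f} m")
```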
Abstract:
Background and Aims: It is an enduring question as to the mechanisms leading to the high diversity and the processes producing endemics with unusual morphologies in the Himalayan alpine region. In the present study, the phylogenetic relationships and origins of three such endemic genera were analysed: Dolomiaea, Diplazoptilon and Xanthopappus, all in the tribe Cardueae of Asteraceae. Methods: The nuclear rDNA internal transcribed spacer (ITS) and plastid trnL-F and psbA-trnH regions of these three genera were sequenced. The same regions for other related genera in Cardueae were also sequenced or downloaded from GenBank. Phylogenetic trees were constructed from individual and combined data sets of the three types of sequences using maximum parsimony, maximum likelihood and Bayesian analyses. Key Results: The phylogenetic tree obtained allowed earlier hypotheses concerning the relationships of these three endemic genera based on gross morphology to be rejected. Frolovia and Saussurea costus were deeply nested within Dolomiaea, and the strong statistical support for the Dolomiaea-Frolovia clade suggested that the circumscription of Dolomiaea should be redefined more broadly. Diplazoptilon was resolved as sister to Himalaiella, and these two together are sister to Lipschitziella. The clade comprising these three genera is sister to Jurinea, and together these four genera are sister to the Dolomiaea-Frolovia clade. Xanthopappus, previously hypothesized to be closely related to Carduus, was found to be nested within a well-supported but not fully resolved Onopordum group with Alfredia, Ancathia, Lamyropappus, Olgaea, Synurus and Syreitschikovia, rather than within the Carduus group. Crude dating based on ITS sequence divergence indicated that the divergence of Dolomiaea-Frolovia from its sister group probably occurred 13.6-12.2 million years ago (Ma), and that the divergences of the other two genera, Xanthopappus and Diplazoptilon, from their close relatives occurred around 5.7-4.7 Ma and 2.0-1.6 Ma, respectively. Conclusions: The findings provide an improved understanding of the intergeneric relationships in Cardueae. The crude calibration of lineages indicates that the uplifts of the Qinghai-Tibetan Plateau since the Miocene might have served as a continuous stimulus for the production of these morphologically aberrant endemic elements of the Himalayan flora.
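The "crude dating" step reduces to simple arithmetic: divergence time T = d / (2r), where d is pairwise sequence divergence and r is the substitution rate per site per year. A minimal sketch; the rate and divergence values below are assumed for illustration, not the paper's calibration:

```python
# Crude molecular-clock dating: T = d / (2r). The ITS rate used here is
# an assumed illustrative value, not the one calibrated in the paper.

def divergence_time_ma(pairwise_divergence: float, rate_per_site_per_year: float) -> float:
    """Return divergence time in millions of years (Ma)."""
    return pairwise_divergence / (2.0 * rate_per_site_per_year) / 1e6

# Example: 5% ITS divergence at an assumed rate of 2.1e-9 subs/site/yr.
print(f"{divergence_time_ma(0.05, 2.1e-9):.1f} Ma")  # ~11.9 Ma
```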
Abstract:
A study was carried out to examine the effect of dynamic photosynthetically active photon flux density (PPFD) on photoinhibition and energy use in three herbaceous species from the Qinghai-Tibet Plateau: the prostrate Saussurea superba, the erect-leaved S. katochaete, and the half-erect-leaved Gentiana straminea. Chlorophyll fluorescence response was measured under each of three sets of high-low PPFD combinations, 1700-0, 1400-300, and 1200-500 µmol m⁻² s⁻¹, applied at four dynamic frequencies: 1, 5, 15, and 60 cycles per 2 h. The total light exposure time was 2 h and the integrated PPFD was the same in all treatments. The highest frequency of PPFD fluctuation resulted in the lowest photochemical activity, the highest level of non-photochemical quenching, and the greatest decrease of Fv/Fm (maximal photochemical efficiency of PSII). The 5 and 15 cycles per 2 h treatments resulted in higher photochemical activity than the 1 cycle per 2 h treatment. The 1700-0 PPFD combination led to the lowest photochemical activity and the most serious photoinhibition in all species. S. superba usually exhibited the highest photochemical activity and CO2 uptake rate, the lowest reduction of Fv/Fm, and the smallest fraction of energy in thermal dissipation. With similar fractions of thermal dissipation, S. katochaete had relatively less photoinhibition than G. straminea owing to effective Fo quenching. The results suggest that a high frequency of fluctuating PPFD generally results in photoinhibition, which is more serious under periods of irradiation with high light intensity.
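A quick arithmetic check shows why the three high-low pairs are comparable; assuming equal time at the high and low levels (which makes the three means identical), the integrated dose over 2 h is the same for all treatments:

```python
# Verify that the three high-low PPFD pairs deliver the same integrated
# light dose over the 2 h exposure, assuming equal time at each level.
pairs = [(1700, 0), (1400, 300), (1200, 500)]  # µmol m^-2 s^-1
seconds = 2 * 3600
for high, low in pairs:
    dose = (high + low) / 2 * seconds  # mean PPFD x exposure time
    print(f"{high}-{low}: {dose / 1e6:.2f} mol m^-2")  # 6.12 mol m^-2 each
```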
Abstract:
Determining rock mass parameters is fundamental to evaluating the stability of engineering works. The structural planes present in a rock mass make it anisotropic, inhomogeneous, and discontinuous, and after exposure to water, weathering, and unloading, its mechanical behaviour differs greatly from that of the intact rock. Structure effects, scale effects, and rheological behaviour make the mechanical parameters difficult to determine, and the inability to assign proper parameters remains one of the major obstacles to theoretical analysis and numerical simulation. As project scale increases, appraising the project rock mass and ascertaining its parameters becomes ever more important and demanding; research on rock mass parameters therefore has both theoretical and practical significance.

The Jinping I hydroelectric station, the largest on the lower reaches of the Yalong River, is under construction with what will be the highest hyperbolic arch dam in the world, at about 305 m. Its underground powerhouse is 204.52 m long and 68.83 m high, with a maximum clear span of 28.90 m. Large-scale excavation of the underground powerhouse has produced various kinds of damage, such as relaxation and spalling, providing a precious opportunity to study the unloading parameters of the rock mass. Southwest China is the country's most important hydroelectric power base, and its stations are concentrated in high mountain and gorge terrain; to develop them safely, efficiently, and quickly, the physical and mechanical character of the rock mass, in other words its strength and deformation behaviour, must be understood. Based on extensive geological fieldwork and the abundant information it yielded, this study evaluates the parameters of the unloaded rock mass, which matters not only for the construction of Jinping but also for other large hydroelectric stations of a similar character. The thesis combines geological analysis, test data analysis, empirical methods, theoretical research, and artificial neural network (ANN) analysis to evaluate the mechanical parameters. The main results are as follows:

(1) From the excavation of the upper five layers of the underground powerhouse and the statistical classification of the main joints and fractures exposed, three sets of joints are identified: the first group consists of flat-lying fractures, while the second and fourth groups are steep fractures. These provide a firm foundation for the subsequent calculation and analysis.

(2) In-situ measurements of sound wave velocity, displacement, and anchor stress are used to analyse the unloading of the rock mass; the results show clearly time-dependent and localized deformation, and on this basis the depth of excavation unloading in the powerhouse walls is determined. Because sound wave velocity can be measured on site dynamically and with little disturbance, the results reflect the original state, and the mechanical parameters of the rock mass in each unloading zone are estimated accordingly.

(3) The Hoek-Brown empirical formula with the geological strength index (GSI), together with the RMR method, is used to evaluate the mechanical parameters of rock masses of different weathering and unloading degrees around the underground powerhouse; both evaluations give satisfactory results (see the sketch after this list).

(4) From the perspective of far-field stress, building on the stress field distribution for two cracks under arbitrary loading proposed by Fazil Erdogan (1962) and using the strain energy density factor criterion (S criterion) proposed by Xue Changming (1972), a correspondence is established between the far-field stress and the crack-tip stress field, an integrated strength criterion is derived for two coplanar intermittent joints under pure tension, and the corresponding strength criterion is established as an exploratory attempt.

(5) Using artificial neural networks, the thesis focuses on the mechanical parameters of concern and the whole process of predicting the deformation parameters, discusses the prospects of applying ANNs to the assessment of rock mass parameters, and, drawing on the logging data of the Jinping I underground powerhouse, identifies the rock mechanics parameters intelligently, with a comprehensive discussion of sample selection, network design, values of basic parameters, and error analysis. This is a meaningful step toward a parameter evaluation system for large-scale hydropower construction in marble rock masses.
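For point (3), the generalized Hoek-Brown relations can be sketched directly; a minimal illustration using the standard 2002 forms, in which the intact-rock constant mi and the disturbance factor D are assumed illustrative values, not the thesis's calibrated inputs:

```python
# Generalized Hoek-Brown rock mass parameters from GSI (2002 relations).
# mi and D below are illustrative assumptions for a jointed marble.
import math

def hoek_brown_params(gsi: float, mi: float, d: float = 0.0):
    """Return (mb, s, a) for the generalized Hoek-Brown criterion."""
    mb = mi * math.exp((gsi - 100.0) / (28.0 - 14.0 * d))
    s = math.exp((gsi - 100.0) / (9.0 - 3.0 * d))
    a = 0.5 + (math.exp(-gsi / 15.0) - math.exp(-20.0 / 3.0)) / 6.0
    return mb, s, a

# Example: two weathering/unloading states, with blast disturbance
# D = 0.7 near the excavation boundary (assumed value).
for gsi in (65, 45):
    mb, s, a = hoek_brown_params(gsi, mi=9.0, d=0.7)
    print(f"GSI={gsi}: mb={mb:.3f}, s={s:.5f}, a={a:.3f}")
```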
Abstract:
We consider the problem of matching model and sensory data features in the presence of geometric uncertainty, for the purpose of object localization and identification. The problem is to construct sets of model feature and sensory data feature pairs that are geometrically consistent given that there is uncertainty in the geometry of the sensory data features. If there is no geometric uncertainty, polynomial-time algorithms are possible for feature matching, yet these approaches can fail when there is uncertainty in the geometry of data features. Existing matching and recognition techniques which account for the geometric uncertainty in features either cannot guarantee finding a correct solution, or can construct geometrically consistent sets of feature pairs yet have worst case exponential complexity in terms of the number of features. The major new contribution of this work is to demonstrate a polynomial-time algorithm for constructing sets of geometrically consistent feature pairs given uncertainty in the geometry of the data features. We show that under a certain model of geometric uncertainty the feature matching problem in the presence of uncertainty is of polynomial complexity. This has important theoretical implications by demonstrating an upper bound on the complexity of the matching problem, an by offering insight into the nature of the matching problem itself. These insights prove useful in the solution to the matching problem in higher dimensional cases as well, such as matching three-dimensional models to either two or three-dimensional sensory data. The approach is based on an analysis of the space of feasible transformation parameters. This paper outlines the mathematical basis for the method, and describes the implementation of an algorithm for the procedure. Experiments demonstrating the method are reported.
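The transformation-space idea can be illustrated on a toy case; the translation-only matching and box-shaped uncertainty below are simplifying assumptions for illustration, not the paper's exact uncertainty model:

```python
# Toy version of feasible-transformation-space matching: with 2D
# translation-only transforms and per-axis uncertainty eps on data
# features, each (model, data) pair constrains the translation to an
# axis-aligned box, and a set of pairs is geometrically consistent iff
# the boxes share a common point (checkable in linear time).

def feasible_box(model_pt, data_pt, eps):
    """Translations t with |model_pt + t - data_pt| <= eps per axis."""
    (mx, my), (dx, dy) = model_pt, data_pt
    return (dx - mx - eps, dx - mx + eps, dy - my - eps, dy - my + eps)

def consistent(pairs, eps):
    """Intersect the feasible boxes of all (model, data) pairs."""
    lo_x, hi_x = float("-inf"), float("inf")
    lo_y, hi_y = float("-inf"), float("inf")
    for m, d in pairs:
        bx0, bx1, by0, by1 = feasible_box(m, d, eps)
        lo_x, hi_x = max(lo_x, bx0), min(hi_x, bx1)
        lo_y, hi_y = max(lo_y, by0), min(hi_y, by1)
    return lo_x <= hi_x and lo_y <= hi_y

pairs = [((0, 0), (10.2, 5.1)), ((1, 0), (11.0, 4.9)), ((0, 1), (9.8, 6.2))]
print(consistent(pairs, eps=0.5))  # True: a common translation exists
```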
Abstract:
A procedure is given for recognizing sets of inference rules that generate polynomial time decidable inference relations. The procedure can automatically recognize the tractability of the inference rules underlying congruence closure. The recognition of tractability for that particular rule set constitutes mechanical verification of a theorem originally proved independently by Kozen and Shostak. The procedure is algorithmic, rather than heuristic, and the class of automatically recognizable tractable rule sets can be precisely characterized. A series of examples of rule sets whose tractability is non-trivial, yet machine recognizable, is also given. The technical framework developed here is viewed as a first step toward a general theory of tractable inference relations.
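For context, the equality fragment underlying congruence closure gives the flavour of the polynomial-time inference at stake. A minimal union-find sketch of that fragment (reflexivity, symmetry, transitivity); full congruence closure additionally propagates f(a)=f(b) from a=b, which this sketch omits:

```python
# Deciding the equality fragment (reflexive-symmetric-transitive
# closure of asserted equalities) with union-find, in near-linear time.

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

uf = UnionFind()
for lhs, rhs in [("a", "b"), ("b", "c"), ("d", "e")]:  # asserted equalities
    uf.union(lhs, rhs)
print(uf.find("a") == uf.find("c"))  # True: a=c is derivable
print(uf.find("a") == uf.find("d"))  # False: a=d is not
```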
Abstract:
I wish to propose a quite speculative new version of the grandmother cell theory to explain how the brain, or parts of it, may work. In particular, I discuss how the visual system may learn to recognize 3D objects. The model would apply directly to the cortical cells involved in visual face recognition. I will also outline the relation of our theory to existing models of the cerebellum and of motor control. Specific biophysical mechanisms can be readily suggested as part of a basic type of neural circuitry that can learn to approximate multidimensional input-output mappings from sets of examples and that is expected to be replicated in different regions of the brain and across modalities. The main points of the theory are:
- the brain uses modules for multivariate function approximation as basic components of several of its information processing subsystems;
- these modules are realized as HyperBF networks (Poggio and Girosi, 1990a,b);
- HyperBF networks can be implemented in terms of biologically plausible mechanisms and circuitry.
The theory predicts a specific type of population coding that represents an extension of schemes such as look-up tables. I will conclude with some speculations about the trade-off between memory and computation and the evolution of intelligence.
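The core operation of such a module, approximating a mapping from stored examples, can be sketched with plain radial basis functions; this is ordinary RBF interpolation, not the full HyperBF scheme (which also learns the centres and a metric):

```python
# Function approximation from examples with Gaussian radial basis
# units: the output is a weighted sum of units centred on the examples.
import numpy as np

def rbf_fit(centers, targets, sigma):
    """Solve for weights so the network interpolates the examples."""
    g = np.exp(-np.subtract.outer(centers, centers) ** 2 / (2 * sigma**2))
    return np.linalg.solve(g, targets)

def rbf_eval(x, centers, weights, sigma):
    g = np.exp(-np.subtract.outer(x, centers) ** 2 / (2 * sigma**2))
    return g @ weights

centers = np.linspace(0.0, 1.0, 8)      # stored example inputs
targets = np.sin(2 * np.pi * centers)   # their observed outputs
w = rbf_fit(centers, targets, sigma=0.15)
print(rbf_eval(np.array([0.31]), centers, w, sigma=0.15))  # ~sin(2*pi*0.31)
```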
Abstract:
This research is concerned with the development of tactual displays to supplement the information available through lipreading. Because voicing carries a high informational load in speech and is not well transmitted through lipreading, the efforts are focused on providing tactual displays of voicing to supplement the information available on the lips of the talker. This research includes exploration of 1) signal-processing schemes to extract information about voicing from the acoustic speech signal, 2) methods of displaying this information through a multi-finger tactual display, and 3) perceptual evaluations of voicing reception through the tactual display alone (T), lipreading alone (L), and the combined condition (L+T). Signal processing for the extraction of voicing information used amplitude-envelope signals derived from filtered bands of speech (i.e., envelopes derived from a lowpass-filtered band at 350 Hz and from a highpass-filtered band at 3000 Hz). Acoustic measurements made on the envelope signals of a set of 16 initial consonants, represented through multiple tokens of C1VC2 syllables, indicate that the onset-timing difference between the low- and high-frequency envelopes (EOA: envelope-onset asynchrony) provides a reliable and robust cue for distinguishing voiced from voiceless consonants. This acoustic cue was presented through a two-finger tactual display such that the envelope of the high-frequency band was used to modulate a 250-Hz carrier signal delivered to the index finger (250-I) and the envelope of the low-frequency band was used to modulate a 50-Hz carrier delivered to the thumb (50-T). The temporal-onset order threshold for these two signals, measured with roving signal amplitude and duration, averaged 34 msec, sufficiently small for use of the EOA cue. Perceptual evaluations of the tactual display of EOA with speech signals indicated: 1) that the cue was highly effective for discrimination of pairs of voicing contrasts; 2) that the identification of 16 consonants was improved by roughly 15 percentage points with the addition of the tactual cue over L alone; and 3) that no improvements in L+T over L were observed for reception of words in sentences, indicating the need for further training on this task.
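The EOA measurement itself is straightforward to sketch: band-split the signal, extract amplitude envelopes, and difference the onset times. A minimal sketch on a synthetic token; the filter orders and onset threshold are illustrative choices, not the study's settings:

```python
# Envelope-onset asynchrony (EOA) between a lowpass band (<= 350 Hz)
# and a highpass band (>= 3000 Hz) of a speech-like signal.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def envelope_onset(x, fs, sos, threshold=0.2):
    band = sosfilt(sos, x)
    env = np.abs(hilbert(band))          # amplitude envelope
    env /= env.max() + 1e-12
    return np.argmax(env > threshold) / fs  # first threshold crossing (s)

fs = 16_000
t = np.arange(int(0.5 * fs)) / fs
# Toy "voiced" token: low-frequency energy leads high-frequency energy.
x = ((t > 0.05) * np.sin(2 * np.pi * 150 * t)
     + (t > 0.12) * 0.5 * np.sin(2 * np.pi * 4000 * t))

low = butter(4, 350, btype="low", fs=fs, output="sos")
high = butter(4, 3000, btype="high", fs=fs, output="sos")
eoa = envelope_onset(x, fs, high) - envelope_onset(x, fs, low)
print(f"EOA: {eoa * 1000:.0f} ms")  # positive: low band leads the high band
```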
Abstract:
Does knowledge of language consist of symbolic rules? How do children learn and use their linguistic knowledge? To elucidate these questions, we present a computational model that acquires phonological knowledge from a corpus of common English nouns and verbs. In our model the phonological knowledge is encapsulated as boolean constraints operating on classical linguistic representations of speech sounds in terms of distinctive features. The learning algorithm compiles a corpus of words into increasingly sophisticated constraints. The algorithm is incremental, greedy, and fast. It yields one-shot learning of phonological constraints from a few examples. Our system exhibits behavior similar to that of young children learning phonological knowledge. As a bonus the constraints can be interpreted as classical linguistic rules. The computational model can be implemented by a surprisingly simple hardware mechanism. Our mechanism also sheds light on a fundamental AI question: How are signals related to symbols?
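The general idea of boolean constraints over distinctive features can be shown with a toy learner; the feature inventory, corpus, and the simple pair-based greedy rule below are illustrative assumptions, far simpler than the paper's algorithm:

```python
# Toy constraint learner: any feature pair never attested in adjacent
# segments of the corpus becomes a forbidden-combination constraint.
from itertools import product

# Segments as sets of distinctive features (toy inventory).
FEATURES = {
    "p": {"-voice", "+labial"}, "b": {"+voice", "+labial"},
    "s": {"-voice", "+strident"}, "z": {"+voice", "+strident"},
}
corpus = ["ps", "bz", "sp", "zb"]  # attested clusters only

attested = {
    (f1, f2)
    for word in corpus
    for a, b in zip(word, word[1:])
    for f1, f2 in product(FEATURES[a], FEATURES[b])
}
all_features = {f for fs in FEATURES.values() for f in fs}
constraints = set(product(all_features, repeat=2)) - attested

def well_formed(word):
    return all(
        (f1, f2) not in constraints
        for a, b in zip(word, word[1:])
        for f1, f2 in product(FEATURES[a], FEATURES[b])
    )

print(well_formed("ps"))  # True: voicing agreement, as in the corpus
print(well_formed("pz"))  # False: violates a learned voicing constraint
```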
Abstract:
The task in text retrieval is to find the subset of a collection of documents relevant to a user's information request, usually expressed as a set of words. Classically, documents and queries are represented as vectors of word counts. In its simplest form, relevance is defined to be the dot product between a document and a query vector--a measure of the number of common terms. A central difficulty in text retrieval is that the presence or absence of a word is not sufficient to determine relevance to a query. Linear dimensionality reduction has been proposed as a technique for extracting underlying structure from the document collection. In some domains (such as vision) dimensionality reduction reduces computational complexity. In text retrieval it is more often used to improve retrieval performance. We propose an alternative and novel technique that produces sparse representations constructed from sets of highly related words. Documents and queries are represented by their distance to these sets, and relevance is measured by the number of common clusters. This technique significantly improves retrieval performance, is efficient to compute, and shares properties with the optimal linear projection operator and the independent components of documents.
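A toy version of the cluster-based representation makes the scoring concrete; the hand-made clusters and the simplification of "distance to a set" down to membership overlap are illustrative assumptions, not the paper's construction:

```python
# Toy cluster-based retrieval: map documents and queries to the word
# clusters they touch, and score relevance as the number of shared
# clusters (a set-overlap simplification of the distance-based scheme).
clusters = [
    {"car", "engine", "wheel"},
    {"doctor", "nurse", "hospital"},
    {"ball", "goal", "team"},
]

def to_cluster_set(words):
    return {i for i, c in enumerate(clusters) if c & set(words)}

def relevance(query, document):
    return len(to_cluster_set(query) & to_cluster_set(document))

doc = "the team doctor checked every player before the goal".split()
print(relevance(["physician", "nurse"], doc))  # 1: shared medical cluster
print(relevance(["engine", "wheel"], doc))     # 0: no automotive terms
```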
Abstract:
In this thesis, two different sets of experiments are described. The first is an exploration of the microscopic superfluidity of dilute gaseous Bose-Einstein condensates. The second set of experiments was performed using transported condensates in a new BEC apparatus. Superfluidity was probed by moving impurities through a trapped condensate. The impurities were created using an optical Raman transition, which transferred a small fraction of the atoms into an untrapped hyperfine state. A dramatic reduction in the collisions between the moving impurities and the condensate was observed when the velocity of the impurities was close to the speed of sound of the condensate. This reduction was attributed to the superfluid properties of a BEC. In addition, we observed an increase in the collisional density as the number of impurity atoms increased. This enhancement is an indication of bosonic stimulation by the occupied final states. This stimulation was observed both at small and large velocities relative to the speed of sound. A theoretical calculation of the effect of finite temperature indicated that the collision rate should be enhanced at small velocities due to thermal excitations. However, in the current experiments we were insensitive to this effect. Finally, the factor of two between the collisional rates of indistinguishable and distinguishable atoms was confirmed. A new BEC apparatus that can transport condensates using optical tweezers was constructed. Condensates containing 10-15 million sodium atoms were produced in 20 s using conventional BEC production techniques. These condensates were then transferred into an optical trap that was translated from the 'production chamber' into a separate vacuum chamber: the 'science chamber'. Typically, we transferred 2-3 million condensed atoms in less than 2 s. This transport technique avoids optical and mechanical constraints of conventional condensate experiments and allows for the possibility of novel experiments. In the first experiments using transported BEC, we loaded condensed atoms from the optical tweezers into both macroscopic and miniaturized magnetic traps. Using microfabricated wires on a silicon chip, we observed excitation-less propagation of a BEC in a magnetic waveguide. The condensates fragmented when brought very close to the wire surface, indicating that imperfections in the fabrication process might limit future experiments. Finally, we generated a continuous BEC source by periodically replenishing a condensate held in an optical reservoir trap using fresh condensates delivered with optical tweezers. More than a million condensed atoms were always present in the continuous source, raising the possibility of realizing a truly continuous atom laser.
Abstract:
An investigation is made into the problem of constructing a model of the appearance to an optical input device of scenes consisting of plane-faced geometric solids. The goal is to study algorithms which find the real straight edges in the scenes, taking into account smooth variations in intensity over faces of the solids, blurring of edges and noise. A general mathematical analysis is made of optimal methods for identifying the edge lines in figures, given a raster of intensities covering the entire field of view. There is given in addition a suboptimal statistical decision procedure, based on the model, for the identification of a line within a narrow band on the field of view given an array of intensities from within the band. A computer program has been written and extensively tested which implements this procedure and extracts lines from real scenes. Other programs were written which judge the completeness of extracted sets of lines, and propose and test for additional lines which had escaped initial detection. The performance of these programs is discussed in relation to the theory derived from the model, and with regard to their use of global information in detecting and proposing lines.
Abstract:
How can one represent the meaning of English sentences in a formal logical notation such that the translation of English into this logical form is simple and general? This report answers this question for a particular kind of meaning, namely quantifier scope, and for a particular part of the translation, namely the syntactic influence on the translation. Rules are presented which predict, for example, that the sentence "Everyone in this room speaks at least two languages." has the quantifier scope AE in standard predicate calculus, while the sentence "At least two languages are spoken by everyone in this room." has the quantifier scope EA. Three different logical forms are presented, and their translation rules are examined. One of the logical forms is predicate calculus. The translation rules for it were developed by Robert May (May 1977). The other two logical forms are Skolem form and a simple computer programming language. The translation rules for these two logical forms are new. All three sets of translation rules are shown to be general, in the sense that the same rules express the constraints that syntax imposes on certain other linguistic phenomena. For example, the rules that constrain the translation into Skolem form are shown to constrain definite NP anaphora as well. A large body of carefully collected data is presented, and used to assess the empirical accuracy of each of the theories. None of the three theories is vastly superior to the others. However, the report concludes by suggesting that a combination of the two newer theories would have the greatest generality and the highest empirical accuracy.
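The two scope readings can be written out explicitly. A sketch in predicate-calculus notation, abbreviating "at least two" as a subscripted existential quantifier; the predicate names are illustrative:

```latex
% AE reading ("Everyone in this room speaks at least two languages."):
% each person may speak a different pair of languages.
\forall x\,\bigl(\mathrm{InRoom}(x) \rightarrow
  \exists_{\ge 2}\, y\,(\mathrm{Language}(y) \wedge \mathrm{Speaks}(x, y))\bigr)

% EA reading ("At least two languages are spoken by everyone in this room."):
% the same two languages are spoken by every person.
\exists_{\ge 2}\, y\,\bigl(\mathrm{Language}(y) \wedge
  \forall x\,(\mathrm{InRoom}(x) \rightarrow \mathrm{Speaks}(x, y))\bigr)
```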
Abstract:
Lee, M. H. and Lacey, N. J. (2003). The Influence of Epistemology on the Design of Artificial Agents. Minds and Machines, 13(3), 367-395.
Abstract:
Srinivasan, A., King, R. D. and Bain, M. E. (2003). An Empirical Study of the Use of Relevance Information in Inductive Logic Programming. Journal of Machine Learning Research, 4(Jul), 369-383.