965 results for Moretto, Alessandro Bonvicino, called Il, approximately 1498-1554 or 1555.
Abstract:
This report describes the proximate compositions (protein, moisture, fat, and ash) and major fatty acid profiles for raw and cooked samples of 40 southeastern finfish species. All samples (fillets) were cooked by a standard procedure in laminated plastic bags to an internal temperature of 70°C (158°F). Both summarized compositional data, with means and ranges for each species, and individual sample data including harvest dates and average lengths and weights are presented. When compared with raw samples, cooked samples exhibited an increase in protein content with an accompanying decrease in moisture content. Fat content either remained approximately the same or increased due to moisture loss during cooking. Our results are discussed in reference to compositional data previously published by others on some of the same species. Although additional data are needed to adequately describe the seasonal and geographic variations in the chemical compositions of many of these fish species, the results presented here should be useful to nutritionists, seafood marketers, and consumers. (PDF file contains 28 pages.)
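The shift in proximate composition on cooking is largely a concentration effect: as water evaporates, the remaining constituents make up a larger share of the cooked weight even if their absolute amounts are unchanged. A minimal sketch with hypothetical numbers (not values from the report) illustrates the arithmetic:

```python
# Illustrative only: hypothetical proximate values, not data from the report.
# Shows how moisture loss alone raises the protein and fat percentages of a
# cooked fillet even when no protein or fat is gained or lost.

raw = {"moisture": 78.0, "protein": 19.0, "fat": 2.0, "ash": 1.0}  # g per 100 g raw fillet

moisture_lost = 10.0                      # g of water evaporated per 100 g raw fillet (assumed)
cooked_mass = 100.0 - moisture_lost       # g of cooked fillet remaining

cooked = {k: 100.0 * v / cooked_mass for k, v in raw.items()}
cooked["moisture"] = 100.0 * (raw["moisture"] - moisture_lost) / cooked_mass

for k in raw:
    print(f"{k:8s} raw {raw[k]:5.1f} g/100 g   cooked {cooked[k]:5.1f} g/100 g")
```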
Abstract:
363 p.
Abstract:
A two-stage sampling design is used to estimate the variances of the numbers of yellowfin in different age groups caught in the eastern Pacific Ocean. For purse seiners, the primary sampling unit (n) is a brine well containing fish from a month-area stratum; the number of fish lengths (m) measured from each well are the secondary units. The fish cannot be selected at random from the wells because of practical limitations. The effects of different sampling methods and other factors on the reliability and precision of statistics derived from the length-frequency data were therefore examined. Modifications are recommended where necessary. Lengths of fish measured during the unloading of six test wells revealed two forms of inherent size stratification: 1) short-term disruptions of the existing pattern of sizes, and 2) transition zones between long-term trends in sizes. To some degree, all wells exhibited cyclic changes in mean size and variance during unloading. In half of the wells, it was observed that size selection by the unloaders induced a change in mean size. As a result of stratification, the sequence of sizes removed from all wells was non-random, regardless of whether a well contained fish from a single set or from more than one set. The number of modal sizes in a well was not related to the number of sets. In an additional well composed of fish from several sets, an experiment on vertical mixing indicated that a representative sample of the contents may be restricted to the bottom half of the well. The contents of the test wells were used to generate 25 simulated wells and to compare the results of three sampling methods applied to them. The methods were: (1) random sampling (also used as a standard), (2) protracted sampling, in which the selection process was extended over a large portion of a well, and (3) measuring fish consecutively during removal from the well. Repeated sampling by each method and different combinations of n and m indicated that, because the principal source of size variation occurred among primary units, increasing n was the most effective way to reduce the variance estimates of both the age-group sizes and the total number of fish in the landings. Protracted sampling largely circumvented the effects of size stratification, and its performance was essentially comparable to that of random sampling. Sampling by this method is recommended. Consecutive-fish sampling produced more biased estimates with greater variances. Analysis of the 1988 length-frequency samples indicated that, for age groups that appear most frequently in the catch, a minimum sampling frequency of one primary unit in six for each month-area stratum would reduce the coefficients of variation (CV) of their size estimates to approximately 10 percent or less. Additional stratification of samples by set type, rather than month-area alone, further reduced the CVs of scarce age groups, such as the recruits, and potentially improved their accuracy. The CVs of recruitment estimates for completely-fished cohorts during the 1981-1984 period were in the vicinity of 3 to 8 percent. Recruitment estimates and their variances were also relatively insensitive to changes in the individual quarterly catches and variances, respectively, of which they were composed.
(PDF contains 70 pages)
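The reasoning behind increasing n rather than m can be illustrated with a small Monte Carlo sketch of a generic two-stage design. This is not the IATTC estimator; the variance components and sample sizes below are assumed for illustration only:

```python
# A minimal sketch of how a two-stage design partitions variance between wells
# (primary units, n) and fish measured within wells (secondary units, m), and why
# increasing n shrinks the CV faster than increasing m when the between-well
# component dominates. All parameters are invented.
import numpy as np

rng = np.random.default_rng(0)

def simulate_cv(n_wells, m_fish, between_sd=6.0, within_sd=3.0, mu=80.0, reps=2000):
    """Monte Carlo CV of the estimated mean length under two-stage sampling."""
    means = []
    for _ in range(reps):
        well_means = mu + rng.normal(0.0, between_sd, n_wells)
        sample = well_means[:, None] + rng.normal(0.0, within_sd, (n_wells, m_fish))
        means.append(sample.mean())
    means = np.asarray(means)
    return means.std() / means.mean()

for n, m in [(5, 50), (10, 50), (20, 50), (5, 200)]:
    print(f"n={n:2d} wells, m={m:3d} fish/well -> CV ≈ {simulate_cv(n, m):.3f}")
```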
Abstract:
The initial probabilities of activated, dissociative chemisorption of methane and ethane on Pt(110)-(1 x 2) have been measured. The surface temperature was varied from 450 to 900 K with the reactant gas temperature constant at 300 K. Under these conditions, we probe the kinetics of dissociation via a trapping-mediated (as opposed to 'direct') mechanism. It was found that the probabilities of dissociation of both methane and ethane were strong functions of the surface temperature, with apparent activation energies of 14.4 kcal/mol for methane and 2.8 kcal/mol for ethane, which implies that the methane and ethane molecules have fully accommodated to the surface temperature. Kinetic isotope effects were observed for both reactions, indicating that C-H bond cleavage is involved in the rate-limiting step. A mechanistic model based on the trapping-mediated mechanism is used to explain the observed kinetic behavior. The activation energies for C-H bond dissociation of the thermally accommodated methane and ethane on the surface extracted from the model are 18.4 and 10.3 kcal/mol, respectively.
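For readers unfamiliar with how an apparent activation energy is extracted from such measurements, the sketch below fits a synthetic Arrhenius data set (assumed prefactor and barrier, not the Pt(110) data) of dissociation probability versus surface temperature:

```python
# Hedged illustration: extracting an apparent activation energy from dissociation
# probabilities measured at several surface temperatures, assuming Arrhenius
# behavior P ∝ exp(-Ea/(R·Ts)). The numbers below are synthetic.
import numpy as np

R = 1.987e-3                                            # kcal mol^-1 K^-1

Ts = np.array([450., 550., 650., 750., 900.])           # surface temperatures, K
Ea_true = 14.4                                          # kcal/mol (assumed, for the demo)
P = 1e-2 * np.exp(-Ea_true / (R * Ts))                  # synthetic probabilities

slope, intercept = np.polyfit(1.0 / Ts, np.log(P), 1)   # ln P vs 1/Ts is linear
print(f"apparent Ea = {-slope * R:.1f} kcal/mol")       # recovers ~14.4
```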
Studies of the catalytic decomposition of formic acid on the Ru(001) surface with thermal desorption mass spectrometry, following the adsorption of DCOOH and HCOOH on the surface at 130 and 310 K, are described. Formic acid (DCOOH) chemisorbs dissociatively on the surface via both the cleavage of its O-H bond to form a formate and a hydrogen adatom, and the cleavage of its C-O bond to form a carbon monoxide, a deuterium adatom, and a hydroxyl (OH). The former is the predominant reaction. The rate of desorption of carbon dioxide is a direct measure of the kinetics of decomposition of the surface formate. It is characterized by a kinetic isotope effect, an increasingly narrow FWHM, and an upward shift in peak temperature with θ_T, the coverage of the dissociatively adsorbed formic acid. The FWHM and the peak temperature change from 18 K and 326 K at θ_T = 0.04 to 8 K and 395 K at θ_T = 0.89. The increase in the apparent activation energy of the C-D bond cleavage is largely a result of self-poisoning by the formate, the presence of which on the surface alters the electronic properties of the surface such that the activation energy of the decomposition of formate is increased. The variation of the activation energy for carbon dioxide formation with θ_T accounts for the observed sharp carbon dioxide peak. The coverage of surface formate can be adjusted over a relatively wide range so that the activation energy for C-D bond cleavage in the case of DCOOH can be adjusted to be below, approximately equal to, or well above the activation energy for the recombinative desorption of the deuterium adatoms. Accordingly, the desorption of deuterium was observed to be governed completely by the desorption kinetics of the deuterium adatoms at low θ_T, jointly by the kinetics of deuterium desorption and C-D bond cleavage at intermediate θ_T, and solely by the kinetics of C-D bond cleavage at high θ_T. The overall branching ratio of the formate to carbon dioxide and carbon monoxide is approximately unity, regardless of the initial coverage θ_T, even though the activation energy for the production of carbon dioxide varies with θ_T. The desorption of water, which implies C-O bond cleavage of the formate, appears at approximately the same temperature as that of carbon dioxide. These observations suggest that the cleavage of the C-D bond and that of the C-O bond of two surface formates are coupled, possibly via the formation of a short-lived surface complex that is the precursor to the decomposition.
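The link between a coverage-dependent barrier and the observed upward peak shift and narrowing can be illustrated with a simple rate integration. The sketch below assumes a first-order Polanyi-Wigner rate with Ea(θ) = E0 + a·θ and invented parameters; it is not a fit to the Ru(001) data:

```python
# Toy model: as θ falls during the temperature ramp, the self-poisoning barrier
# Ea(θ) = E0 + a·θ drops, so the decomposition rate self-accelerates, giving a
# sharper peak at a higher temperature for larger initial coverages.
import numpy as np

R, beta, nu = 1.987e-3, 5.0, 1e13          # kcal/(mol K), heating rate (K/s), prefactor (1/s)

def peak_temperature(theta0, E0=20.0, a=6.0, T0=250.0, T1=500.0, dT=0.02):
    """Integrate dθ/dT = -ν θ exp(-Ea(θ)/RT)/β and return the peak temperature."""
    T = np.arange(T0, T1, dT)
    theta, rate = theta0, []
    for Ti in T:
        r = nu * theta * np.exp(-(E0 + a * theta) / (R * Ti))
        theta = max(theta - r * dT / beta, 0.0)
        rate.append(r)
    rate = np.asarray(rate)
    return T[rate.argmax()]

for th0 in (0.04, 0.4, 0.89):
    print(f"theta_T = {th0:.2f}: CO2 peak near {peak_temperature(th0):.0f} K")
```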
Steady-state rate measurements are demonstrated here to be valuable for determining the kinetics associated with a short-lived, molecularly adsorbed precursor to further surface reactions, by extracting the kinetic parameters of the molecular precursor to formaldehyde dissociation on the Pt(110)-(1 x 2) surface.
Overlayers of nitrogen adatoms on Ru(001) have been characterized by thermal desorption mass spectrometry and low-energy electron diffraction, as well as chemically via the postadsorption and desorption of ammonia and carbon monoxide.
The nitrogen-adatom overlayer was prepared by decomposing ammonia thermally on the surface at a pressure of 2.8 x 10^(-6) Torr and a temperature of 480 K. The saturated overlayer prepared under these conditions has associated with it a (√247/10 x √247/10)R22.7° LEED pattern, has two peaks in its thermal desorption spectrum, and has a fractional surface coverage of 0.40. Annealing the overlayer to approximately 535 K results in a rather sharp (√3 x √3)R30° LEED pattern with an associated fractional surface coverage of one-third. Annealing the overlayer further to 620 K results in the disappearance of the low-temperature thermal desorption peak and the appearance of a rather fuzzy p(2x2) LEED pattern with an associated fractional surface coverage of approximately one-fourth. In the low coverage limit, the presence of the (√3 x √3)R30° N overlayer alters the surface in such a way that the binding energy of ammonia is increased by 20% relative to the clean surface, whereas that of carbon monoxide is reduced by 15%.
A general methodology for the indirect relative determination of absolute fractional surface coverages has been developed and was utilized to determine the saturation fractional coverage of hydrogen on Ru(001). Formaldehyde was employed as a bridge leading from the known reference point of the saturation fractional coverage of carbon monoxide to the unknown fractional coverage of hydrogen on Ru(001), which was then used to determine accurately the saturation fractional coverage of hydrogen. We find that θ_H^sat = 1.02 (±0.05), i.e., the surface stoichiometry is Ru : H = 1 : 1. The relative nature of the method, which cancels systematic errors, together with the utilization of a glass envelope around the mass spectrometer, which reduces spurious contributions in the thermal desorption spectra, results in high accuracy in the determination of absolute fractional coverages.
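A hedged reconstruction of the "bridge" arithmetic is sketched below. It assumes that adsorbed formaldehyde decomposes to one CO and two H adatoms, so a single H2CO dose ties the H2 and CO thermal desorption areas together with a known 2:1 stoichiometry; all numerical values are illustrative, not the measured ones:

```python
# Hedged reconstruction of the "bridge" idea with invented numbers.
# Assumption: one H2CO(ad) decomposes to one CO(ad) and two H(ad), so TPD areas
# measured after a single formaldehyde dose have a known 2:1 H:CO stoichiometry.

theta_CO_sat = 0.68          # assumed known CO saturation coverage (reference point)

# Hypothetical integrated TPD areas (arbitrary units):
A_CO_from_H2CO = 1.00        # CO area after a fixed H2CO dose
A_H2_from_H2CO = 0.40        # H2 area after the same dose (2 H per CO)
A_CO_sat       = 3.40        # CO area at CO saturation
A_H2_sat       = 1.00        # H2 area at hydrogen saturation

theta_CO_dose = theta_CO_sat * A_CO_from_H2CO / A_CO_sat    # CO coverage from the H2CO dose
k_H2 = A_H2_from_H2CO / (2.0 * theta_CO_dose)               # H2 area per unit H coverage

theta_H_sat = A_H2_sat / k_H2
print(f"estimated saturation hydrogen coverage ≈ {theta_H_sat:.2f}")
```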
Abstract:
Color filters are key components in an optical-engine projection display system. In this paper, a new admittance-matching method for designing and fabricating high-performance filters is described, in which the optimized layers are limited to the interfaces between one stack and another (each combination of quarter-wave-optical-thickness film layers is called a stack), between a stack and the substrate, or between a stack and the incident medium. This method works well for designing filters containing multiple stacks, such as UV-IR cut and broadband filters. The tolerance and angle sensitivity of the designed film stacks are analyzed, and the thermal stability of the sample color filters was measured. Good optical performance and thermal stability were obtained with the new design approach. (c) 2006 Society of Photo-Optical Instrumentation Engineers.
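The admittance picture underlying such designs can be reproduced with a generic normal-incidence transfer-matrix calculation: each layer contributes a 2x2 characteristic matrix, the stack plus substrate reduces to an equivalent admittance Y = C/B, and the reflectance follows from Y and the incident-medium admittance. The sketch below (assumed materials and thicknesses, not the authors' design code) evaluates a quarter-wave stack:

```python
# Generic thin-film transfer-matrix sketch in free-space admittance units.
import numpy as np

def stack_reflectance(n_layers, d_layers, n_inc, n_sub, wavelengths):
    """Normal-incidence reflectance of a thin-film stack on a substrate."""
    R = []
    for wl in wavelengths:
        M = np.eye(2, dtype=complex)
        for n, d in zip(n_layers, d_layers):
            delta = 2.0 * np.pi * n * d / wl
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
        B, C = M @ np.array([1.0, n_sub])
        Y = C / B                                  # equivalent admittance of stack + substrate
        r = (n_inc - Y) / (n_inc + Y)
        R.append(abs(r) ** 2)
    return np.array(R)

# Example: 8-period (HL) quarter-wave stack centered at 550 nm on glass (assumed indices).
wl0, nH, nL = 550.0, 2.35, 1.46
layers_n = [nH, nL] * 8
layers_d = [wl0 / (4 * n) for n in layers_n]
wl = np.linspace(400, 700, 7)
print(np.round(stack_reflectance(layers_n, layers_d, 1.0, 1.52, wl), 3))
```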
Abstract:
Research centers, conferences, courses, outreach projects, books, and journals focused on the intersection between Neuroscience and Education are multiplying worldwide. These initiatives share the belief that neuroscientific findings can contribute to improving the educational process. This would occur in two ways: by providing a better understanding of how people learn and thereby helping to create more effective educational policies and practices; and by contributing to the understanding of learning difficulties or disorders, so as to support the development of more effective approaches and treatments for such problems. Neuroeducation, a discipline at the interface between the neuroscientific and educational fields, shares this belief. Created between the end of the twentieth century and the beginning of the twenty-first, this discipline, sometimes called Mind, Brain, and Education or the Science of Learning, aims both to build knowledge about learning and to develop pedagogical practices. In the present work we seek to understand specifically how this approximation between the neuroscientific and educational fields takes place in Brazil. To this end, we analyzed various materials (articles, books, magazines, videos) produced on the subject by neuroscientists, educators, and also by neuroeducators, a new professional category proposed by the Instituto de Pesquisas em Neuroeducação. We found that neuroeducation has been taking shape in the country as an extremely multifaceted field, permeated by diverse discourses and practices as well as by multiple versions of the human brain.
Abstract:
The growth of broadband services over mobile communication networks has driven demand for ever faster, higher-quality data. The mobile network technology called LTE (Long Term Evolution), or fourth generation (4G), emerged to meet this demand for wireless access to services such as Internet access, online gaming, VoIP, and video conferencing. LTE is part of the 3GPP Release 8 and 9 specifications, operating over an all-IP network and providing transmission rates above 100 Mbps (DL) and 50 Mbps (UL), low latency (10 ms), and compatibility with previous generations of mobile networks, 2G (GSM/EDGE) and 3G (UMTS/HSPA). The TCP protocol, developed to operate over wired networks, performs poorly over wireless channels such as cellular mobile networks, mainly because of selective fading, shadowing, and the high error rates of the air interface. Since all losses are interpreted as being caused by congestion, protocol performance is poor. The goal of this dissertation is to evaluate, through simulation, the performance of several TCP variants under interference on the channels between the mobile terminal (UE, User Equipment) and a remote server. For this purpose, the NS3 software (Network Simulator version 3) and the TCP Westwood Plus, New Reno, Reno, and Tahoe variants were used. The test results show that TCP Westwood Plus outperforms the others. TCP New Reno and Reno performed very similarly because the interference model used has a uniform distribution, so the probability of losing consecutive bits within the same transmission window is low. TCP Tahoe, as expected, showed the worst performance of all, since it lacks the fast recovery mechanism and its congestion window always returns to one segment after a timeout. It was also observed that delay has a greater impact on TCP performance than the bandwidth of the access and backbone links, since in the tested scenario the bottleneck was the air interface. Simulations with errors on the air interface, introduced with the NS3 fading script, showed that the RLC AM (acknowledged) mode performs better for file-transfer applications in noisy environments than the unacknowledged RLC UM mode.
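The behavioral difference that penalizes TCP Tahoe (no fast recovery, window reset to one segment) can be seen in a toy congestion-window trace. The sketch below uses textbook slow-start and congestion-avoidance rules with an invented loss pattern; it is not the ns-3 model used in the dissertation:

```python
# Toy congestion-window evolution, in segments per round trip.
def evolve(variant, losses, rounds=40, ssthresh=32):
    cwnd, history = 1, []
    for t in range(rounds):
        if t in losses:
            if variant == "tahoe":
                ssthresh, cwnd = max(cwnd // 2, 2), 1      # timeout: restart slow start
            else:                                           # reno-style fast recovery
                ssthresh = max(cwnd // 2, 2)
                cwnd = ssthresh
        elif cwnd < ssthresh:
            cwnd *= 2                                       # slow start
        else:
            cwnd += 1                                       # congestion avoidance
        history.append(cwnd)
    return history

losses = {12, 25}                                           # assumed loss rounds
print("tahoe:", evolve("tahoe", losses))
print("reno :", evolve("reno", losses))
```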
Abstract:
Despite the fundamental role of taxation within a democratic state governed by the rule of law (the fiscal state), in which the duty to pay taxes is considered a fundamental duty, positions persist in Brazilian doctrine and case law that attach an odious character to taxes. This aversion to taxation stems from an ultraliberal ideology that cannot be justified in light of the system of rights and guarantees laid out in our Constitution. This widespread ideological stance wrongly influences the interpretation and application of numerous tax institutions and rules, as occurs with the non-pecuniary administrative sanctions (restrictions of rights) used to punish non-compliance with a principal tax obligation, known as political, moral, or indirect sanctions. This dissertation critically analyzes how national doctrine and the historical and current case law of our higher courts have ruled on the constitutionality of these sanctions, in order to point out the theoretical inconsistency of the still prevailing understanding, whether in light of the theory of sanctions or in light of neoconstitutionalism, showing that it is legally unjustifiable to consider a tax sanction unconstitutional outright merely because it is non-pecuniary, since that judgment requires a case-by-case proportionality analysis attentive to the principles and circumstances involved. In addition to important rulings on the subject and the positions of renowned authors, ADIs No. 5,135 and 5,161 are analyzed separately; they deal, respectively, with the controversial issues of the protest of Active Debt Certificates (Certidões de Dívida Ativa) and the prohibition on the distribution of profits and bonuses by companies with outstanding debts to the Federal Union.
Abstract:
Compared with an ordinary fixed-length adaptive filter, a variable-tap-length adaptive filter is more efficient (lower computational complexity, lower power consumption, and higher output SNR) because of its tap-length learning algorithm, which dynamically adapts the tap-length toward the optimal tap-length that best balances the complexity and the performance of the adaptive filter. Among existing tap-length algorithms, the LMS-style variable tap-length algorithm (also called the fractional tap-length, or FT, algorithm) proposed by Y. Gong has the best performance, with the fastest convergence rate and the best stability. However, in some cases its performance deteriorates dramatically. To solve this problem, we first analyze the FT algorithm and point out some of its defects. Second, we propose a new FT algorithm called the 'VSLMS' (Variable Step-size LMS) style tap-length learning algorithm, which not only uses the concept of FT but also introduces a new concept of adaptive convergence slope. With this improvement, the new FT algorithm achieves an even faster convergence rate and better stability. Finally, we present computer simulations to verify the improvement.
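A minimal sketch of a fractional tap-length LMS filter, in the spirit of the FT algorithm discussed above, is given below; the step sizes, leakage, and test system are illustrative assumptions, and the proposed VSLMS-style modification is not implemented here:

```python
# Sketch of fractional tap-length (FT) LMS: weights adapt by ordinary LMS while a
# fractional length l_f grows when the last Δ taps reduce the error and leaks
# (by alpha) when they do not. Parameters and the unknown system are invented.
import numpy as np

rng = np.random.default_rng(1)
h = rng.normal(size=24)                          # unknown system with 24 taps (assumed)
x = rng.normal(size=20000)
d = np.convolve(x, h)[:x.size] + 0.01 * rng.normal(size=x.size)

mu, alpha, gamma, delta_taps = 0.005, 0.01, 0.1, 4
L, l_f, Lmax = 8, 8.0, 64
w = np.zeros(Lmax)

for n in range(Lmax, x.size):
    u = x[n - Lmax + 1:n + 1][::-1]              # u[k] = x[n-k]
    e_full = d[n] - w[:L] @ u[:L]                # error with all L active taps
    e_trunc = d[n] - w[:L - delta_taps] @ u[:L - delta_taps]   # error without the last Δ taps
    w[:L] += mu * e_full * u[:L]                 # ordinary LMS update on the active taps
    # fractional tap-length update: grow when the last Δ taps help, otherwise leak
    l_f = (l_f - alpha) - gamma * (e_full**2 - e_trunc**2)
    l_f = min(max(l_f, delta_taps + 1), Lmax)
    if abs(l_f - L) >= 1.0:
        L = int(round(l_f))

print(f"tap-length settled at {L} (true order 24; FT typically settles a few taps above)")
```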
Abstract:
The so-called one-dimensional (1D) nanostructures or wire-like nanoentities, such as nanowires (NW), nanotubes (NT), and nanobelts (NB), have attracted much interest in the scientific community because of their remarkable mechanical, electrical, and thermal properties and their potential applications in a wide variety of devices. The mechanical failure of 1D nanostructures can lead to the malfunction or even failure of the entire device, and 1D nanostructures may also have size-dependent properties. Therefore, an accurate measurement of their mechanical properties is of
Abstract:
To investigate temporal changes in water quality, the role of dinoflagellate cysts preserved in sediments was examined in Yokohama Port in Tokyo Bay, Japan. Two cores were collected; sedimentation rates were determined, and the ages of both cores were dated back to approximately the year 1900 or slightly earlier on the basis of 210Pb and 137Cs concentrations. The temporal change in dinoflagellate cyst assemblages in the two cores reflects eutrophication in Yokohama Port in the 1960s. An abrupt increase in cysts of Gyrodinium instriatum strongly suggests that a red tide was caused by this species around 1985. Dinoflagellate cyst assemblages in surface sediments appear to be good biomarkers of changes in the water quality of enclosed seas.
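For context, the standard constant-initial-concentration 210Pb calculation that underlies such core chronologies is sketched below with synthetic activities (not the Yokohama Port data):

```python
# CIC 210Pb sketch: unsupported activity decays as A(z) = A0·exp(-λ t), so the
# age at depth z is t = ln(A0/A(z))/λ and a constant sedimentation rate appears
# as a straight line of ln A versus depth. All numbers are assumed.
import numpy as np

LAMBDA = np.log(2) / 22.3                        # 210Pb decay constant, 1/yr

depth = np.array([0., 5., 10., 15., 20.])        # cm (assumed core)
activity = np.array([120., 80., 52., 35., 23.])  # unsupported 210Pb, Bq/kg (assumed)

ages = np.log(activity[0] / activity) / LAMBDA
slope, _ = np.polyfit(depth, np.log(activity), 1)
sed_rate = -LAMBDA / slope                       # cm per year

for z, t in zip(depth, ages):
    print(f"{z:4.0f} cm  ->  {t:5.1f} yr before core collection")
print(f"sedimentation rate ≈ {sed_rate:.2f} cm/yr")
```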
Abstract:
Simulation calculations for the oxygen-atom vacancy in the high-temperature superconductor TlBa2Ca(n-1)Cu(n)O(2n+2.5) (n = 1) have been performed by means of the tight-binding approximation based on the EHMO method. The results indicate that the effect of the oxygen-atom vacancy on the charge distributions at the Tl, Ba, Cu, and O sites is appreciably different, and that there may exist two kinds of Cu cation with different net charges (approximately +3.0 or approximately +1.0) due to the oxygen-atom vacancy in the lattice. The electric field gradient at the site of the oxygen-atom vacancy has been calculated. The position of the oxygen-atom vacancy that favours the high-temperature superconductivity of TlBa2Ca(n-1)Cu(n)O(2n+2.5) (n = 1) is discussed.
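As a much cruder point of comparison (not the EHMO tight-binding calculation reported here), an electric field gradient at a vacancy site can be estimated from a classical point-charge lattice sum; the sketch below uses invented charges and positions:

```python
# Classical point-charge EFG: V_ij = Σ_k q_k (3 x_i x_j - δ_ij r²) / r^5.
import numpy as np

# (charge in |e|, position in Å) of a few neighbouring ions around a vacancy at the origin
ions = [(+2.0, ( 1.9, 0.0, 0.0)), (+2.0, (-1.9, 0.0, 0.0)),
        (+3.0, ( 0.0, 1.9, 0.0)), (+2.0, ( 0.0, -1.9, 0.0)),
        (-2.0, ( 0.0, 0.0, 2.0)), (-2.0, ( 0.0, 0.0, -2.0))]

V = np.zeros((3, 3))
for q, pos in ions:
    r = np.asarray(pos, dtype=float)
    rn = np.linalg.norm(r)
    V += q * (3.0 * np.outer(r, r) - rn**2 * np.eye(3)) / rn**5

vals = np.linalg.eigvalsh(V)
vals = vals[np.argsort(np.abs(vals))]            # order so |Vxx| <= |Vyy| <= |Vzz|
Vxx, Vyy, Vzz = vals
eta = abs((Vxx - Vyy) / Vzz)                     # asymmetry parameter
print("EFG principal values (e/Å^3):", np.round(vals, 3), " eta ≈", round(eta, 2))
```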
Abstract:
Ordos Basin is a typical cratonic petroliferous basin with 40 oil-gas-bearing bed sets. It is characterized by stable multicycle sedimentation, gentle formations, and few structures. The reservoir beds in the Upper Paleozoic and Mesozoic are mainly of low density and low permeability, with strong lateral variation and strong vertical heterogeneity. The well-known Loess Plateau in the southern area and the Maowusu Desert, Kubuqi Desert, and Ordos Grasslands in the northern area cover the basin, so seismic data acquisition in this area is very difficult, and the data often suffer from inadequate precision, strong interference, a low signal-to-noise ratio, and low resolution. Because of the complicated surface and subsurface conditions, it is very difficult to distinguish thin beds and to study continental-facies high-resolution lithologic sequence stratigraphy from routine seismic profiles. Therefore, a method with clear physical significance, based on advanced mathematical-physics theory and algorithms, is needed to improve the precision of detection of the thin sand-peat interbed configurations of continental facies. The Generalized S Transform (GST) provides a new phase-space analysis method for seismic data. Compared with the wavelet transform, both have very good localization characteristics; however, being directly related to the Fourier spectrum, the GST has clearer physical significance. Moreover, the GST adopts a technique that best approximates the seismic wavelet, transforms the seismic data into the time-scale domain, and breaks through the fixed-wavelet limitation of the S transform, so the GST has broad adaptability. Tracing the development of ideas and theories from the wavelet transform and the S transform to the GST, we studied how to improve the precision of thin-stratum detection with the GST. Noise strongly affects sequence detection in the GST, especially for low signal-to-noise data. We studied the distribution of colored noise in the GST domain and proposed a technique to distinguish signal from noise there. Two types of noise were considered, white noise and red noise, with the noise satisfying a statistical autoregressive model; for both models the GST-domain signal-noise detection technique gives good results. This showed that the technique can be applied to real seismic data and can effectively suppress the influence of noise on seismic sequence detection. On seismic profiles after GST processing, zones of concentrated high-amplitude energy and schollen-, strip-, and lens-shaped dead or disordered zones may carry specific geologic meanings, depending on the geologic background. Using seismic sequence detection profiles together with other seismic interpretation technologies, we can depict the palaeo-geomorphology in detail, effectively estimate sand-body extent, distinguish sedimentary facies, determine target areas, and directly guide oil-gas exploration. In lateral reservoir prediction in the XF oilfield of the Ordos Basin, the study of the palaeo-geomorphology of the Triassic System and the partitioning of the internal sequences of the stratum group played a very important role in estimating sand-body extent. According to the high-resolution seismic profiles after GST processing, we pointed out that the C8 Member of the Yanchang Formation in the DZ area and the C8 Member in the BM area are the same deposit.
This provided the foundation for establishing 430 million tons of predicted reserves and jointly building 3 million tons of off-take potential. In the key-problem study for the SLG gas field, based on the high-resolution seismic sequence profiles, we determined that the depositional direction of the H8 member is approximately N-S or NNE-SSW. Using the seismic sequence profiles combined with layer-leveled profiles, we can interpret the shape of the entrenched stream; a sunken lenticular body indicates a high-energy stream channel with stronger hydrodynamic energy. In this way we outlined three high-energy stream channels and determined target areas for exploitation. Finding high-energy braided rivers by high-resolution sequence processing is the key technology in the SLG area. In the ZZ area, we used GST processing to study the distribution of the main reservoir bed, S23, a thin shallow-delta sand bed. From the seismic sequence profiles we found that the thick schollen-shaped sand beds are only locally distributed, and most of them are distributary-channel sand and distributary-bar deposits. We then determined that the S23 sand depositional direction is NW-SE in the west, N-S in the center, and NE-SW in the east. The high-resolution seismic sequence interpretation profiles have been tested against 14 wells; 2 wells mismatch, for a coincidence rate of 85.7%. Based on the profiles we proposed 3 prediction wells; one well (Yu54) has been completed and the other two are still being drilled. The completed well is consistent with the forecast. The paper demonstrates that the GST is an effective technology for obtaining high-resolution seismic sequence profiles, subdividing depositional microfacies, confirming the strike direction of sandstones, and determining the distribution range of oil-gas-bearing sandstones, and that it is a key technique for the exploration of lithologic oil-gas pools in complicated areas.
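The starting point of the GST, the Stockwell S transform, can be written compactly in the frequency domain; the sketch below is a textbook implementation applied to a synthetic two-tone trace, not the authors' GST code:

```python
# Basic Stockwell S transform via the FFT: S(τ, f) is the inverse FFT over the
# shift variable α of H(α + f) weighted by a Gaussian whose width scales as 1/f.
import numpy as np

def s_transform(x, fs):
    """S(τ, f) of a real trace x (sampled at fs Hz) for positive frequencies."""
    N = x.size
    X = np.fft.fft(x)
    freqs = np.fft.fftfreq(N, d=1.0 / fs)          # also used as the shift variable α
    S = np.zeros((N // 2, N), dtype=complex)
    for k in range(1, N // 2):                     # skip f = 0, where the window is undefined
        f = freqs[k]
        gauss = np.exp(-2.0 * np.pi**2 * freqs**2 / f**2)
        S[k] = np.fft.ifft(np.roll(X, -k) * gauss)
    return S

fs = 500.0
t = np.arange(0, 1, 1 / fs)
trace = np.sin(2 * np.pi * 30 * t) * (t < 0.5) + np.sin(2 * np.pi * 80 * t) * (t >= 0.5)
S = s_transform(trace, fs)
row = np.abs(S[:, 100]).argmax()                   # dominant frequency row at t = 0.2 s
print(f"dominant frequency at t ≈ 0.2 s: {row * fs / trace.size:.0f} Hz (expect ≈ 30 Hz)")
```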
Abstract:
This article describes neural network models for adaptive control of arm movement trajectories during visually guided reaching and, more generally, a framework for unsupervised real-time error-based learning. The models clarify how a child, or untrained robot, can learn to reach for objects that it sees. Piaget has provided basic insights with his concept of a circular reaction: As an infant makes internally generated movements of its hand, the eyes automatically follow this motion. A transformation is learned between the visual representation of hand position and the motor representation of hand position. Learning of this transformation eventually enables the child to accurately reach for visually detected targets. Grossberg and Kuperstein have shown how the eye movement system can use visual error signals to correct movement parameters via cerebellar learning. Here it is shown how endogenously generated arm movements lead to adaptive tuning of arm control parameters. These movements also activate the target position representations that are used to learn the visuo-motor transformation that controls visually guided reaching. The AVITE model presented here is an adaptive neural circuit based on the Vector Integration to Endpoint (VITE) model for arm and speech trajectory generation of Bullock and Grossberg. In the VITE model, a Target Position Command (TPC) represents the location of the desired target. The Present Position Command (PPC) encodes the present hand-arm configuration. The Difference Vector (DV) population continuously computes the difference between the PPC and the TPC. A speed-controlling GO signal multiplies DV output. The PPC integrates the (DV)·(GO) product and generates an outflow command to the arm. Integration at the PPC continues at a rate dependent on GO signal size until the DV reaches zero, at which time the PPC equals the TPC. The AVITE model explains how self-consistent TPC and PPC coordinates are autonomously generated and learned. Learning of AVITE parameters is regulated by activation of a self-regulating Endogenous Random Generator (ERG) of training vectors. Each vector is integrated at the PPC, giving rise to a movement command. The generation of each vector induces a complementary postural phase during which ERG output stops and learning occurs. Then a new vector is generated and the cycle is repeated. This cyclic, biphasic behavior is controlled by a specialized gated dipole circuit. ERG output autonomously stops in such a way that, across trials, a broad sample of workspace target positions is generated. When the ERG shuts off, a modulator gate opens, copying the PPC into the TPC. Learning of a transformation from TPC to PPC occurs using the DV as an error signal that is zeroed due to learning. This learning scheme is called a Vector Associative Map, or VAM. The VAM model is a general-purpose device for autonomous real-time error-based learning and performance of associative maps. The DV stage serves the dual function of reading out new TPCs during performance and reading in new adaptive weights during learning, without a disruption of real-time operation. VAMs thus provide an on-line unsupervised alternative to the off-line properties of supervised error-correction learning algorithms. VAMs and VAM cascades for learning motor-to-motor and spatial-to-motor maps are described.
VAM models and Adaptive Resonance Theory (ART) models exhibit complementary matching, learning, and performance properties that together provide a foundation for designing a total sensory-cognitive and cognitive-motor autonomous system.
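The VITE kinematics at the core of the AVITE model can be summarized in a few lines of numerical integration; the GO-signal form, gains, and target below are assumed for illustration, and the learning (VAM) stage is not included:

```python
# Toy integration of VITE dynamics: DV tracks TPC - PPC, PPC integrates (DV)·(GO),
# and the movement stops when DV reaches zero, i.e. PPC = TPC.
import numpy as np

dt, T = 0.001, 2.0                         # integration step and duration (s)
TPC = np.array([0.6, -0.2])                # Target Position Command (arbitrary workspace units)
PPC = np.zeros(2)                          # Present Position Command
DV = np.zeros(2)                           # Difference Vector population activity

for i in range(int(T / dt)):
    t = i * dt
    GO = 4.0 * t / (0.3 + t)               # slowly rising, speed-scaling GO signal (assumed form)
    DV += dt * 30.0 * ((TPC - PPC) - DV)   # DV continuously tracks TPC - PPC
    PPC += dt * GO * DV                    # PPC integrates the (DV)·(GO) outflow command

print("final PPC:", np.round(PPC, 3), " target TPC:", TPC)   # PPC -> TPC as DV -> 0
```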