917 results for Computational Dialectometry


Relevance:

10.00%

Publisher:

Abstract:

Previous studies on tidal water table dynamics in unconfined coastal aquifers have focused on the inland propagation of oceanic tides in the cross-shore direction based on the assumption of a straight coastline. Here, two-dimensional analytical solutions are derived to study the effects of rhythmic coastlines on tidal water table fluctuations. The computational results demonstrate that the alongshore variations of the coastline can affect the water table behavior significantly, especially in areas near the centers of the headland and embayment. With the coastline shape effects ignored, traditional analytical solutions may lead to large errors in predicting coastal water table fluctuations or in estimating the aquifer's properties based on these signals. The conditions under which the coastline shape needs to be considered are derived from the new analytical solution.
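The straight-coastline baseline that this work generalizes can be sketched as the classic one-dimensional damped-wave solution for tidal propagation in an unconfined aquifer; the snippet below is an illustrative sketch only (parameter values assumed, not taken from the paper, and the two-dimensional alongshore effects are not reproduced):

```python
import math

def tidal_head(x, t, A=1.0, omega=1.405e-4, S=0.2, T=0.01):
    """Classic 1D solution for tidal water table fluctuations with a
    straight coastline. x: inland distance [m], t: time [s], A: tidal
    amplitude [m], omega: tidal frequency [rad/s], S: storativity,
    T: transmissivity [m^2/s]. Values here are illustrative."""
    k = math.sqrt(omega * S / (2.0 * T))  # combined decay rate and wave number [1/m]
    return A * math.exp(-k * x) * math.cos(omega * t - k * x)
```

The amplitude decays exponentially inland while the phase lags linearly, which is why mis-specifying the coastline geometry distorts both the inferred amplitude decay and the aquifer properties estimated from it.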


Numerical modeling of the eddy currents induced in the human body by the pulsed field gradients in MRI presents a difficult computational problem. It requires an efficient and accurate computational method for high spatial resolution analyses with a relatively low input frequency. In this article, a new technique is described which allows the finite difference time domain (FDTD) method to be efficiently applied over a very large frequency range, including low frequencies. This is not the case in conventional FDTD-based methods. A method of implementing streamline gradients in FDTD is presented, as well as comparative analyses which show that the correct source injection in the FDTD simulation plays a crucial role in obtaining accurate solutions. In particular, making use of the derivative of the input source waveform is shown to provide distinct benefits in accuracy over direct source injection. In the method, no alterations to the properties of either the source or the transmission media are required. The method is essentially frequency independent and the source injection method has been verified against examples with analytical solutions. Results are presented showing the spatial distribution of gradient-induced electric fields and eddy currents in a complete body model.
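As a rough illustration of where source injection enters an FDTD update, here is a minimal one-dimensional sketch in normalized units. It shows only a conventional soft source at a single cell; the article's method would inject the time derivative of the waveform instead and integrate the response. All values here are assumed:

```python
import math

def fdtd_1d(steps, n=200, src=50, courant=0.5, f=0.05):
    """Minimal 1D free-space FDTD with a soft source (normalized units).
    ez and hy live on a staggered grid; the untouched end cells act as
    perfect electric conductor boundaries."""
    ez = [0.0] * n   # electric field
    hy = [0.0] * n   # magnetic field, offset half a cell
    for t in range(steps):
        for i in range(1, n):
            ez[i] += courant * (hy[i - 1] - hy[i])
        ez[src] += math.sin(2.0 * math.pi * f * t)   # soft source injection point
        for i in range(n - 1):
            hy[i] += courant * (ez[i] - ez[i + 1])
    return ez
```

Swapping the injected waveform for its derivative, as the abstract describes, changes only the single injection line, which is what makes the approach attractive at low frequencies.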


In this paper we refer to the gene-to-phenotype modeling challenge as the GP problem. Integrating information across levels of organization within a genotype-environment system is a major challenge in computational biology. However, resolving the GP problem is a fundamental requirement if we are to understand and predict phenotypes given knowledge of the genome and model dynamic properties of biological systems. Organisms are consequences of this integration, and it is a major property of biological systems that underlies the responses we observe. We discuss the E(NK) model as a framework for investigation of the GP problem and the prediction of system properties at different levels of organization. We apply this quantitative framework to an investigation of the processes involved in genetic improvement of plants for agriculture. In our analysis, N genes determine the genetic variation for a set of traits that are responsible for plant adaptation to E environment-types within a target population of environments. The N genes can interact in epistatic NK gene-networks through the way that they influence plant growth and development processes within a dynamic crop growth model. We use a sorghum crop growth model, available within the APSIM agricultural production systems simulation model, to integrate the gene-environment interactions that occur during growth and development and to predict genotype-to-phenotype relationships for a given E(NK) model. Directional selection is then applied to the population of genotypes, based on their predicted phenotypes, to simulate the dynamic aspects of genetic improvement by a plant-breeding program. The outcomes of the simulated breeding are evaluated across cycles of selection in terms of the changes in allele frequencies for the N genes and the genotypic and phenotypic values of the populations of genotypes.
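A toy sketch of how an NK-style epistatic gene network can map alleles to a trait value. The environment dimension of the E(NK) model and the dynamic crop growth model are omitted, and the data structures are hypothetical:

```python
def enk_value(genotype, networks, effect):
    """Toy NK-style epistatic mapping from alleles to a trait value.
    networks[i] lists gene i's epistatic neighbours; effect[i] maps the
    tuple of alleles at (i, *neighbours) to that gene's trait
    contribution, so each gene's effect depends on K other genes."""
    total = 0.0
    for i, neighbours in enumerate(networks):
        local = tuple(genotype[j] for j in (i, *neighbours))
        total += effect[i][local]
    return total
```

Directional selection in the simulated breeding program then amounts to repeatedly evaluating such genotype-to-phenotype values and shifting allele frequencies toward higher-scoring genotypes.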


Motivation: A major issue in cell biology today is how distinct intracellular regions of the cell, like the Golgi Apparatus, maintain their unique composition of proteins and lipids. The cell differentially separates Golgi resident proteins from proteins that move through the organelle to other subcellular destinations. We set out to determine if we could distinguish these two types of transmembrane proteins using computational approaches. Results: A new method has been developed to predict Golgi membrane proteins based on their transmembrane domains. To establish the prediction procedure, we took the hydrophobicity values and frequencies of different residues within the transmembrane domains into consideration. A simple linear discriminant function was developed with a small number of parameters derived from a dataset of Type II transmembrane proteins of known localization. This can discriminate between proteins destined for the Golgi apparatus or other locations (post-Golgi) with success rates of 89.3% and 85.2%, respectively, on our redundancy-reduced data sets.
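A minimal sketch of such a linear discriminant on transmembrane-domain features. The Kyte-Doolittle hydrophobicity scale used here is standard, but the weights and bias are illustrative placeholders, not the published parameters:

```python
# Kyte-Doolittle hydrophobicity scale (standard values)
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
      'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
      'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
      'Y': -1.3, 'V': 4.2}

def tmd_features(tmd):
    """Mean hydrophobicity and length of a transmembrane domain sequence."""
    return sum(KD[a] for a in tmd) / len(tmd), len(tmd)

def classify(tmd, w_h=0.2, w_len=0.5, bias=-11.5):
    """Linear discriminant: longer, more hydrophobic domains score toward
    'post-Golgi' (e.g. plasma membrane), shorter ones toward 'Golgi'.
    Weights and bias are illustrative placeholders only."""
    h, n = tmd_features(tmd)
    return 'post-Golgi' if w_h * h + w_len * n + bias > 0 else 'Golgi'
```

A real discriminant would be fitted on the labelled Type II dataset; the point of the sketch is only that a handful of linear coefficients on simple domain features already define the classifier.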


A decision theory framework can be a powerful technique to derive optimal management decisions for endangered species. We built a spatially realistic stochastic metapopulation model for the Mount Lofty Ranges Southern Emu-wren (Stipiturus malachurus intermedius), a critically endangered Australian bird. Using discrete-time Markov chains to describe the dynamics of a metapopulation and stochastic dynamic programming (SDP) to find optimal solutions, we evaluated the following management decisions: enlarging existing patches, linking patches via corridors, and creating a new patch. This is the first application of SDP to optimal landscape reconstruction and one of the few times that landscape reconstruction dynamics have been integrated with population dynamics. SDP is a powerful tool that has advantages over standard Monte Carlo simulation methods because it can give the exact optimal strategy for every landscape configuration (combination of patch areas and presence of corridors) and pattern of metapopulation occupancy, as well as a trajectory of strategies. It is useful when a sequence of management actions can be performed over a given time horizon, as is the case for many endangered species recovery programs, where only fixed amounts of resources are available in each time step. However, it is generally limited by computational constraints to rather small networks of patches. The model shows that optimal metapopulation management decisions depend greatly on the current state of the metapopulation, and there is no strategy that is universally the best. The extinction probability over 30 yr for the optimal state-dependent management actions is 50-80% better than no management, whereas the best fixed state-independent sets of strategies are only 30% better than no management. This highlights the advantages of using a decision theory tool to investigate conservation strategies for metapopulations.
It is clear from these results that the sequence of management actions is critical, and this can only be effectively derived from stochastic dynamic programming. The model illustrates the underlying difficulty in determining simple rules of thumb for the sequence of management actions for a metapopulation. This use of a decision theory framework extends the capacity of population viability analysis (PVA) to manage threatened species.
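The backward-induction core of finite-horizon stochastic dynamic programming can be sketched in a few lines. The caller supplies states, actions, and transition probabilities; the Emu-wren metapopulation model itself is not reproduced here:

```python
def sdp(horizon, states, actions, P, terminal):
    """Finite-horizon stochastic dynamic programming by backward
    induction. P[a][s][s2] is the transition probability from state s
    to s2 under action a; `terminal` maps each state to its end reward
    (e.g. 1 if the metapopulation persists, 0 if extinct).
    Returns the optimal value function and a per-step policy."""
    V = dict(terminal)
    policy = []
    for _ in range(horizon):
        newV, rule = {}, {}
        for s in states:
            vals = {a: sum(P[a][s][s2] * V[s2] for s2 in states)
                    for a in actions}
            best = max(vals, key=vals.get)
            newV[s], rule[s] = vals[best], best
        V = newV
        policy.append(rule)
    policy.reverse()  # policy[t] is the decision rule at time step t
    return V, policy
```

Because the recursion stores a decision rule for every state at every step, the output is exactly the state-dependent trajectory of strategies the abstract contrasts with fixed state-independent strategies.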


We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is based on maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright (C) 2003 John Wiley & Sons, Ltd.
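To make the mixture structure concrete, here is a sketch that simulates one observation from such a model: a logistic model picks the failure type, and a proportional-hazards distribution for that type generates the failure time. Exponential baselines are used here for concreteness only (the semi-parametric method leaves them unspecified), and all parameter names are illustrative:

```python
import math
import random

def simulate_mixture(z, beta0, beta1, lam1, gamma1, lam2, gamma2, rng):
    """Draw one competing-risks observation (time, cause) from a
    two-component mixture: failure type 1 occurs with logistic
    probability p(z); the failure time then follows that type's
    proportional-hazards distribution with covariate effect gamma."""
    p1 = 1.0 / (1.0 + math.exp(-(beta0 + beta1 * z)))   # logistic mixing probability
    cause = 1 if rng.random() < p1 else 2
    base, gamma = (lam1, gamma1) if cause == 1 else (lam2, gamma2)
    t = rng.expovariate(base * math.exp(gamma * z))      # PH hazard: base * exp(gamma*z)
    return t, cause
```

The ECM estimation problem is the inverse of this generator: recover the logistic and regression coefficients jointly from observed (time, cause) pairs, possibly censored.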


Intervalley interference between degenerate conduction band minima has been shown to lead to oscillations in the exchange energy between neighboring phosphorus donor electron states in silicon [B. Koiller, X. Hu, and S. Das Sarma, Phys. Rev. Lett. 88, 027903 (2002); Phys. Rev. B 66, 115201 (2002)]. These same effects lead to an extreme sensitivity of the exchange energy on the relative orientation of the donor atoms, an issue of crucial importance in the construction of silicon-based spin quantum computers. In this article we calculate the donor electron exchange coupling as a function of donor position incorporating the full Bloch structure of the Kohn-Luttinger electron wave functions. It is found that due to the rapidly oscillating nature of the terms they produce, the periodic part of the Bloch functions can be safely ignored in the Heitler-London integrals as was done by Koiller, Hu, and Das Sarma, significantly reducing the complexity of calculations. We address issues of fabrication and calculate the expected exchange coupling between neighboring donors that have been implanted into the silicon substrate using a 15 keV ion beam in the so-called top down fabrication scheme for a Kane solid-state quantum computer. In addition, we calculate the exchange coupling as a function of the voltage bias on control gates used to manipulate the electron wave functions and implement quantum logic operations in the Kane proposal, and find that these gate biases can be used to both increase and decrease the magnitude of the exchange coupling between neighboring donor electrons. The zero-bias results reconfirm those previously obtained by Koiller, Hu, and Das Sarma.


Subcycling, or the use of different timesteps at different nodes, can be an effective way of improving the computational efficiency of explicit transient dynamic structural solutions. The method that has been most widely adopted uses a nodal partition, extending the central difference method, in which small timestep updates are performed by interpolating on the displacement at neighbouring large timestep nodes. This approach leads to narrow bands of unstable timesteps, or statistical stability. It can also be in error due to lack of momentum conservation on the timestep interface. The author has previously proposed energy conserving algorithms that avoid the first problem of statistical stability. However, these sacrifice accuracy to achieve stability. An approach to conserve momentum on an element interface by adding partial velocities is considered here. Applied to extend the central difference method, this approach is simple and has accuracy advantages. The method can be programmed by summing impulses of internal forces, evaluated using local element timesteps, in order to predict a velocity change at a node. However, it is still only statistically stable, so an adaptive timestep size is needed to monitor accuracy and to be adjusted if necessary. By replacing the central difference method with the explicit generalized alpha method, it is possible to gain stability by dissipating the high frequency response that leads to stability problems. However, coding the algorithm is less elegant, as the response depends on previous partial accelerations. Extension to implicit integration is shown to be impractical due to the neglect of remote effects of internal forces acting across a timestep interface. (C) 2002 Elsevier Science B.V. All rights reserved.
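For reference, the uniform-timestep central difference (leapfrog) scheme that subcycling extends can be sketched for a single degree of freedom; the nodal-partition and impulse-summation machinery of the paper is not reproduced here:

```python
def central_difference(m, k, x0, v0, dt, steps):
    """Explicit central-difference (leapfrog) integration of
    m*x'' + k*x = 0. Stable only for dt < 2/omega with
    omega = sqrt(k/m) - the per-element constraint that motivates
    subcycling (use a small dt only where stiff elements demand it)."""
    x = x0
    v_half = v0 + 0.5 * dt * (-k * x0 / m)   # half-step kick to stagger velocity
    xs = [x]
    for _ in range(steps):
        x += dt * v_half                      # drift: advance displacement
        v_half += dt * (-k * x / m)           # kick: advance mid-step velocity
        xs.append(x)
    return xs
```

In a subcycled mesh, each element would run this update at its own local timestep, and the interface treatment (displacement interpolation versus the paper's summed impulses) is precisely where the stability and momentum-conservation issues arise.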


We have developed a computational strategy to identify the set of soluble proteins secreted into the extracellular environment of a cell. Within the protein sequences predominantly derived from the RIKEN representative transcript and protein set, we identified 2033 unique soluble proteins that are potentially secreted from the cell. These proteins contain a signal peptide required for entry into the secretory pathway and lack any transmembrane domains or intracellular localization signals. This class of proteins, which we have termed the mouse secretome, included >500 novel proteins and 92 proteins
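The screening logic described can be sketched as a simple filter. The boolean annotations stand in for the output of real predictors (signal-peptide, transmembrane-domain, and localization-signal calls), and the field names are hypothetical:

```python
def candidate_secretome(proteins):
    """Keep proteins with a signal peptide (entry into the secretory
    pathway) and without transmembrane domains or intracellular
    localization signals. `proteins` is a list of dicts with boolean
    annotation fields - hypothetical stand-ins for predictor output."""
    return [p['id'] for p in proteins
            if p['signal_peptide']
            and not p['transmembrane']
            and not p['intracellular_signal']]
```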


In this paper we examine the effects of varying several experimental parameters in the Kane quantum computer architecture: A-gate voltage, the qubit depth below the silicon oxide barrier, and the back gate depth to explore how these variables affect the electron density of the donor electron. In particular, we calculate the resonance frequency of the donor nuclei as a function of these parameters. To do this we calculated the donor electron wave function variationally using an effective-mass Hamiltonian approach, using a basis of deformed hydrogenic orbitals. This approach was then extended to include the electric-field Hamiltonian and the silicon host geometry. We found that the phosphorus donor electron wave function was very sensitive to all the experimental variables studied in our work, and thus to optimize the operation of these devices it is necessary to control all parameters varied in this paper.


Recent work by Siegelmann has shown that the computational power of recurrent neural networks matches that of Turing Machines. One important implication is that complex language classes (infinite languages with embedded clauses) can be represented in neural networks. Proofs are based on a fractal encoding of states to simulate the memory and operations of stacks. In the present work, it is shown that similar stack-like dynamics can be learned in recurrent neural networks from simple sequence prediction tasks. Two main types of network solutions are found and described qualitatively as dynamical systems: damped oscillation and entangled spiraling around fixed points. The potential and limitations of each solution type are established in terms of generalization on two different context-free languages. Both solution types constitute novel stack implementations - generally in line with Siegelmann's theoretical work - which supply insights into how embedded structures of languages can be handled in analog hardware.
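The fractal stack encoding underlying Siegelmann-style constructions can be sketched directly: a binary stack is stored as a single number in [0, 1), with push and pop implemented as affine maps. This is a base-4 variant for illustration; details of the published encoding may differ:

```python
def push(s, bit):
    """Push a binary symbol onto the stack encoded as s in [0, 1):
    shrink the existing contents into the lower-order base-4 digits and
    place the new symbol in the top digit (1 encodes bit 0, 3 encodes bit 1)."""
    return s / 4.0 + (2 * bit + 1) / 4.0

def top(s):
    """Read the top symbol: values >= 1/2 carry a leading 3-digit (bit 1)."""
    return 1 if s >= 0.5 else 0

def pop(s):
    """Remove the top symbol, rescaling the remainder back up."""
    return 4.0 * s - (2 * top(s) + 1)
```

Because the whole stack lives in one bounded analog value, dynamics of this kind can in principle be realized by recurrent network state variables, which is what the learned damped-oscillation and spiraling solutions approximate.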


This paper describes the buckling phenomenon of a tubular truss with unsupported length through a full-scale test and presents a practical computational method for the design of such trusses allowing for the contribution of torsional stiffness against buckling, an effect that has not previously been considered by others. The current practice for the design of a planar truss has largely been based on the linear elastic approach, which cannot allow for the contribution of torsional stiffness and tension members in a structural system against buckling. This over-simplified analytical technique is unable to provide a realistic and economical design for a structure. In this paper the stability theory is applied to the second-order analysis and design of the structural form, with detailed allowance for instability and second-order effects in compliance with design code requirements. Finally, the paper demonstrates the application of the proposed method to the stability design of a commonly adopted truss system used to support glass panels, in which lateral bracing members are highly undesirable for economical and aesthetic reasons.
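For contrast with the second-order method advocated, the linear elastic starting point is the classic Euler critical load, with first-order moments amplified as the axial load grows. This sketch shows only that textbook baseline, not the paper's method, which additionally mobilizes torsional stiffness and tension members:

```python
import math

def euler_buckling_load(E, I, L, k=1.0):
    """Elastic critical load P_cr = pi^2 * E * I / (k*L)^2, where k is
    the effective-length factor. A linear design check stops here."""
    return math.pi ** 2 * E * I / (k * L) ** 2

def moment_amplification(P, P_cr):
    """Simplified second-order effect: first-order moments are magnified
    by roughly 1 / (1 - P/P_cr) as the axial load P approaches P_cr."""
    return 1.0 / (1.0 - P / P_cr)
```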


This work discusses the use of optical flow to generate the sensorial information a mobile robot needs to react to the presence of obstacles when navigating in a non-structured environment. A sensing system based on optical flow and time-to-collision calculation is proposed and tested here, which satisfies two important requirements. The first is that all computations are performed onboard the robot, in spite of the limited computational capability available. The second is that the algorithms for optical flow and time-to-collision calculation are fast enough to give the mobile robot the capability of reacting to any environmental change in real time. Results of real experiments are presented in which the proposed sensing system is used as the only source of sensorial data to guide a mobile robot around obstacles while it wanders, and the analysis of these results validates the proposed sensing system.
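The time-to-collision estimate at the heart of such a sensing system can be sketched from the radial component of the flow field. The focus of expansion is assumed at the image origin (pure forward translation), and the median is one simple robustness choice; the paper's actual formulation may differ:

```python
def time_to_collision(points, flows):
    """Median time-to-collision from radial optical flow. For pure
    forward translation toward a surface, each image point r with flow f
    satisfies ttc = |r|^2 / (r . f): squared image distance over the
    radial flow component."""
    estimates = []
    for (x, y), (u, v) in zip(points, flows):
        r2 = x * x + y * y
        radial = x * u + y * v          # dot product r . f
        if radial > 1e-9:               # keep only expanding (approaching) points
            estimates.append(r2 / radial)
    estimates.sort()
    return estimates[len(estimates) // 2]
```

Since the estimate uses only image-plane quantities, no depth reconstruction is needed, which is what makes it cheap enough to run onboard in real time.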


It is perhaps no exaggeration to say that there is near consensus among practitioners of thermoeconomics that exergy, rather than enthalpy alone, is the thermodynamic magnitude best suited to be combined with the concept of cost in thermoeconomic modeling, since it takes Second Law aspects into account and allows irreversibilities to be identified. However, thermoeconomic modeling often uses exergy disaggregated into its components (chemical, thermal, and mechanical), or even includes negentropy, a fictitious flow, thereby allowing the system to be disaggregated into its components (or subsystems) in order to refine and detail the modeling for local optimization, diagnosis, and the allocation of residues and dissipative equipment. Some authors also state that disaggregating physical exergy into its thermal and mechanical parts increases the precision of cost allocation results, despite increasing the complexity of the thermoeconomic model and, consequently, the computational cost involved. Recently, some authors have pointed out restrictions and possible inconsistencies in the use of negentropy and of this type of physical exergy disaggregation, proposing alternatives for the treatment of residues and dissipative equipment that still allow systems to be disaggregated into their components. These alternatives consist, essentially, of new proposals for disaggregating physical exergy in thermoeconomic modeling. This work therefore aims to evaluate the different physical exergy disaggregation methodologies for thermoeconomic modeling, considering aspects such as advantages, restrictions, inconsistencies, improvement in the precision of results, increase in complexity and computational effort, and the treatment of residues and dissipative equipment for the full disaggregation of the thermal system.
To this end, the different methodologies and levels of physical exergy disaggregation are applied to allocate costs to the final products (net power and useful heat) in different cogeneration plants, considering both ideal gas and real fluid as the working fluid. These plants include dissipative equipment (condenser or valve) or residues (exhaust gases from the heat recovery steam generator). One of the cogeneration plants, however, had to include neither dissipative equipment nor a recovery boiler, so that the effect of physical exergy disaggregation on the precision of cost allocation to the final products could be assessed in isolation.
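One widely used disaggregation of ideal-gas physical exergy into thermal and mechanical parts, of the kind the work evaluates, can be sketched as follows. Air properties are assumed, and this is one of several possible splits, not necessarily the one adopted in the thesis:

```python
import math

def physical_exergy_split(T, P, T0=298.15, P0=101.325, cp=1.004, R=0.287):
    """Ideal-gas specific physical exergy [kJ/kg] split into a thermal
    part (temperature change evaluated at reference pressure) and a
    mechanical part (pressure change evaluated at reference
    temperature). Temperatures in K, pressures in kPa; cp and R are
    values for air."""
    e_thermal = cp * (T - T0) - T0 * cp * math.log(T / T0)
    e_mechanical = R * T0 * math.log(P / P0)
    return e_thermal, e_mechanical, e_thermal + e_mechanical
```

Different splits assign the cross term between temperature and pressure effects differently, which is exactly why the choice of disaggregation changes the costs allocated to each component.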


This work investigated the problem of modeling the dispersion of odorant compounds in the presence of obstacles (cubic and complex-shaped) under neutral atmospheric stability. Numerical modeling based on the transport equations (CFD) was employed, as well as algebraic models based on the Gaussian plume (AERMOD, CALPUFF, and FPM). Data from wind tunnel and field experiments were used to validate the models' results and evaluate their performance. To include the effects of atmospheric turbulence on dispersion, two different subgrid models associated with Large Eddy Simulation (LES) were investigated (dynamic Smagorinsky and WALE), and the PRIME model was employed to include obstacle effects on dispersion in the Gaussian models. The use of PRIME with FPM is also proposed as an innovation. Overall, the results indicate that CFD/LES is a useful tool for investigating the dispersion and impact of odorant compounds in the presence of obstacles, and also for developing Gaussian models. The results also indicate that the proposed FPM model, with obstacle effects included via PRIME, is a very useful tool for odor dispersion modeling because of its simplicity and easy setup compared with more complex models such as CFD and even the regulatory models AERMOD and CALPUFF. The great advantage of FPM is the possibility of estimating the intermittency factor and the peak-to-mean (P/M) ratio, parameters useful for assessing odor impact. The results obtained in this work indicate that the determination of the dispersion parameters for the plume segments, as well as the long-time parameters near the source and the obstacle in the FPM model, can be improved, and CFD simulations can be used as a development tool for this purpose.
Keywords: odor control, dispersion, computational fluid dynamics, mathematical modeling, fluctuating Gaussian plume modeling, large eddy simulation (LES).
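The Gaussian plume kernel underlying AERMOD/CALPUFF-type models can be sketched as follows. Building-wake effects (PRIME) and plume fluctuation statistics are not included, and the dispersion coefficients are assumed to be evaluated already at the downwind distance of interest:

```python
import math

def gaussian_plume(y, z, Q, u, sigma_y, sigma_z, H):
    """Steady-state Gaussian plume concentration with ground reflection.
    y: crosswind offset [m], z: height [m], Q: emission rate [g/s],
    u: wind speed [m/s], H: effective source height [m]; sigma_y and
    sigma_z are dispersion coefficients at the chosen downwind distance.
    Returns concentration [g/m^3]."""
    lateral = math.exp(-y * y / (2.0 * sigma_y ** 2))
    vertical = (math.exp(-((z - H) ** 2) / (2.0 * sigma_z ** 2))
                + math.exp(-((z + H) ** 2) / (2.0 * sigma_z ** 2)))  # image source for ground reflection
    return Q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical
```

A fluctuating plume model such as FPM wraps a kernel like this in statistics for plume meander, which is what yields the intermittency factor and peak-to-mean ratio highlighted above.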