977 results for Appropriate Selection Processes Are Available For Choosing Hospitality Texts


Relevance:

100.00%

Publisher:

Abstract:

It has been only recently realized that sexual selection does not end at copulation but that post-copulatory processes are often important in determining the fitness of individuals. In this thesis, I experimentally studied both pre- and post-copulatory sexual selection in the least killifish, Heterandria formosa. I found that this species suffers from severe inbreeding depression in male reproductive behaviour, offspring viability and offspring maturation times. Neither sex showed pre-copulatory inbreeding avoidance, but when females mated with their brothers, fewer sperm were retrieved from their reproductive system than when females mated with unrelated males. Whether the difference in sperm numbers is due to a female or a male effect could not be resolved. Based on theory, females should be more eager to avoid inbreeding than males in this species, because females invest more in their offspring than males do. Inbreeding seems to be an important part of this species' biology, and the severe inbreeding depression has most likely selected for the evolution of the post-copulatory inbreeding avoidance mechanism that I found. In addition, I studied the effects of polyandry on female reproductive success. When females mated with more than one male, they were more likely to become pregnant. However, I also found a cost of polyandry: the offspring of females mated to four males took longer to reach sexual maturity than the offspring of monandrous females. This cost may be explained by parent-offspring conflict over maternal resource allocation. In another experiment, in which within-brood relatedness was manipulated, offspring sizes decreased over time when within-brood relatedness was low. This result is partly in accordance with the kinship theory of genomic imprinting. When relatedness decreases, offspring are expected to be less co-operative and demand fewer resources from their mother, which leads to impaired development. In the last chapter of my thesis, I show that, unlike males of other Poeciliidae species, H. formosa males do not prefer large females. I suggest that males view smaller females as more profitable mates because they are more likely to be virgin. In conclusion, I found both pre- and post-copulatory sexual selection to be important factors in determining reproductive success in H. formosa.

Relevance:

100.00%

Publisher:

Abstract:

This study focuses on self-employed industrial designers and how they develop new venture ideas. More specifically, this study strives to determine what design entrepreneurs do when they create new venture ideas, how venture ideas are nurtured into being, and how the processes of bringing such ideas to the market are organized in the given industrial context. In contemporary times, when concern for the creative class is peaking, the research and business communities need more insight of the kind that this study provides, namely how professionals may contribute to their own entrepreneurial processes and to other agents' business processes. On the one hand, the interviews underlying this study suggest that design entrepreneurs may act as reactive service providers who are appointed by producers or marketing parties to generate product-related ideas on their behalf. On the other hand, the interviews suggest that proactive behaviour aimed at generating their own venture ideas may force design entrepreneurs to take considerable responsibility for organizing their entrepreneurial processes. Another option is that they strive to bring venture ideas to the market in collaboration, or by passing them on to other agents' product development processes. Design entrepreneurs' venture ideas typically emerge from design-related starting points and observations. Product developers are mainly engaged with creating their own ideas, whereas service providers mainly contribute to the development of other agents' venture ideas. In contrast with design entrepreneurs, external actors commonly emphasize customer demand as their primary source of new venture ideas, as well as the development of these ideas in close interaction with available means of production and marketing. Consequently, design entrepreneurs need to address market demand, since without sales their venture ideas might as well be classified as art. If they want to experiment with creative ideas, there should be another source of income to support this typically uncertain and lengthy process. Currently, it appears that many good venture ideas and resources are wasted when venture ideas do not suit available production or business procedures. Sufficient communication between design entrepreneurs and other agents would assist all parties in developing production-efficient and distributable venture ideas. Overall, the findings suggest that design entrepreneurs are often involved simultaneously in several processes that aim at creating new product-related ventures. Consequently, design entrepreneurship is conceptualized in this study as a dual process: design entrepreneurs can be in charge of their own entrepreneurial processes while simultaneously operating as resources in other agents' business processes. The interconnection between activities and agents suggests that these kinds of processes tend to be both complex and multifaceted in nature.

Relevance:

100.00%

Publisher:

Abstract:

Receive antenna selection (AS) reduces the hardware complexity of multi-antenna receivers by dynamically connecting the instantaneously best antenna element to the available radio frequency (RF) chain. Due to hardware constraints, the channels at the various antenna elements have to be sounded sequentially to obtain the estimates required for selecting the "best" antenna and for coherently demodulating data. Consequently, the channel state information at different antennas is outdated by different amounts. We show that, for this reason, simply selecting the antenna with the highest estimated channel gain is not optimum. Rather, the channel estimates of different antennas should be weighted differently, depending on the training scheme. We derive closed-form expressions for the symbol error probability (SEP) of AS for MPSK and MQAM in time-varying Rayleigh fading channels for arbitrary selection weights, and validate them with simulations. We then derive an explicit formula for the optimal selection weights that minimize the SEP. We find that when selection weights are not used, the SEP need not improve as the number of antenna elements increases, in contrast to the ideal channel estimation case. However, the optimal selection weights remedy this situation and significantly improve performance.
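A minimal sketch of the weighted-selection rule described above, assuming complex per-antenna channel estimates h_est and illustrative weights w (the paper derives the SEP-optimal weights in closed form; the weight values below are placeholders, not those optimal weights):

```python
import numpy as np

def select_antenna(h_est, w):
    """Pick the antenna that maximizes the weighted estimated channel gain.

    h_est : complex channel estimates, one per antenna (obtained sequentially,
            so each estimate is outdated by a different amount)
    w     : real, non-negative selection weights (placeholders here; the paper
            gives an explicit formula for the SEP-minimizing weights)
    """
    h_est = np.asarray(h_est)
    w = np.asarray(w)
    metric = w * np.abs(h_est) ** 2   # weighted estimated channel gains
    return int(np.argmax(metric))

# Example: three antennas; older estimates receive smaller (assumed) weights
h_est = np.array([0.9 + 0.2j, 1.1 - 0.4j, 0.7 + 0.1j])
w = np.array([1.0, 0.8, 0.6])
print(select_antenna(h_est, w))
```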

Relevance:

100.00%

Publisher:

Abstract:

Steady two-dimensional and axisymmetric compressible nonsimilar laminar boundary-layer flows with non-uniform slot injection (or suction) and non-uniform wall enthalpy have been studied from the starting point of the streamwise co-ordinate to the exact point of separation. The effect of different free-stream Mach numbers has also been considered. The finite discontinuities arising at the leading and trailing edges of the slot for uniform slot injection (suction) or wall enthalpy are removed by choosing appropriate non-uniform slot injection (suction) or wall enthalpy. The difficulties arising at the starting point of the streamwise co-ordinate, at the edges of the slot and at the point of separation are overcome by applying a quasilinear implicit finite-difference scheme with an appropriately finer step size along the streamwise direction. It is observed that non-uniform slot injection moves the point of separation downstream, whereas non-uniform slot suction has the reverse effect. An increase in Mach number shifts the point of separation upstream owing to the adverse pressure gradient. An increase of the total enthalpy at the wall causes separation to occur earlier, while cooling delays it. The non-uniform total enthalpy at the wall (i.e., the cooling or heating of the wall in a slot) along the streamwise co-ordinate has very little effect on the skin friction and thus on the point of separation.
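One simple way to realize a non-uniform injection distribution that vanishes at both slot edges, and hence avoids the finite discontinuities mentioned above, is a half-sine profile; the specific functional form and symbols below are illustrative assumptions, not necessarily those used in the study:

$$ v_w(\xi) = \begin{cases} v_0 \,\sin\!\left(\pi\,\dfrac{\xi - \xi_1}{\xi_2 - \xi_1}\right), & \xi_1 \le \xi \le \xi_2,\\[4pt] 0, & \text{otherwise}, \end{cases} $$

where $\xi_1$ and $\xi_2$ mark the leading and trailing edges of the slot and $v_0$ sets the injection strength (a negative $v_0$ would represent suction).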

Relevance:

100.00%

Publisher:

Abstract:

Opportunistic relay selection in a multiple source-destination (MSD) cooperative system requires quickly allocating a suitable relay to each source-destination (SD) pair based on channel gains. Since channel knowledge is available only locally at a relay and not globally, efficient relay selection algorithms are needed. For an MSD system in which the SD pairs communicate in a time-orthogonal manner with the help of decode-and-forward relays, we propose three novel relay selection algorithms, namely, contention-free en masse assignment (CFEA), contention-based en masse assignment (CBEA), and a hybrid algorithm that combines the best features of CFEA and CBEA. En masse assignment exploits the fact that a relay can often aid not one but multiple SD pairs and can therefore be assigned to multiple SD pairs. This drastically reduces the average time required to allocate relays to the SD pairs compared to allocating them one by one. We show that the algorithms are much faster than other selection schemes proposed in the literature and yield significantly higher net system throughputs. Interestingly, CFEA is as effective as CBEA over a wider range of system parameters than in single SD pair systems.
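The key idea that a single relay may serve several SD pairs can be sketched as follows; the data layout (a gain matrix plus a decodability mask) and the selection metric are assumptions for illustration, not the paper's exact CFEA/CBEA protocol:

```python
import numpy as np

def en_masse_assign(relay_gains, decodable):
    """Assign relays to source-destination (SD) pairs 'en masse'.

    relay_gains : (num_relays, num_pairs) array of relay-to-destination gains
    decodable   : boolean array of the same shape; True if the relay decoded
                  that pair's source message
    A relay may appear in the assignment of several SD pairs at once.
    """
    gains = np.where(decodable, relay_gains, -np.inf)
    assignment = {}
    for pair in range(relay_gains.shape[1]):
        best = int(np.argmax(gains[:, pair]))
        if np.isfinite(gains[best, pair]):
            assignment[pair] = best   # the same relay may serve many pairs
    return assignment

rng = np.random.default_rng(0)
gains = rng.exponential(size=(4, 3))       # 4 relays, 3 SD pairs
decodable = rng.random((4, 3)) > 0.3
print(en_masse_assign(gains, decodable))
```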

Relevance:

100.00%

Publisher:

Abstract:

We consider the problem of characterizing the minimum average delay, or equivalently the minimum average queue length, of message symbols randomly arriving at the transmitter queue of a point-to-point link that dynamically selects an (n, k) block code from a given collection. The system is modeled by a discrete-time queue with an IID batch arrival process and batch service. We obtain a lower bound on the minimum average queue length, which is the optimal value of a linear program, using only the mean (λ) and variance (σ²) of the batch arrivals. For a finite collection of (n, k) codes, the minimum achievable average queue length is shown to be Θ(1/ε) as ε ↓ 0, where ε is the difference between the maximum code rate and λ. We obtain a sufficient condition for code rate selection policies to achieve this optimal growth rate. A simple family of policies that each use only one block code, as well as two other heuristic policies, are shown to be weakly optimal in the sense of achieving the 1/ε growth rate. An appropriate selection from the family of policies that each use only one block code is also shown to achieve the optimal coefficient σ²/2 of the 1/ε growth rate. We compare the performance of the heuristic policies with the minimum achievable average queue length and the lower bound numerically. For a countable collection of (n, k) codes, the optimal average queue length is shown to be Ω(1/ε). We illustrate the selectivity of the growth rate optimality criterion among policies for both finite and countable collections of (n, k) block codes.
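In the abstract's notation, the growth-rate results can be summarized as follows, with $\bar{Q}^*(\epsilon)$ denoting the minimum achievable average queue length and $R_{\max}$ the maximum code rate in the finite collection (this is a paraphrase of the stated results, not an additional claim):

$$ \bar{Q}^*(\epsilon) = \Theta\!\left(\frac{1}{\epsilon}\right) \ \text{as} \ \epsilon = R_{\max} - \lambda \downarrow 0, \qquad \bar{Q}^*(\epsilon) \sim \frac{\sigma^2}{2\epsilon}, $$

where the coefficient $\sigma^2/2$ is achieved by an appropriately chosen policy that uses only one block code.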

Relevance:

100.00%

Publisher:

Abstract:

Albacore and Atlantic bluefin tuna are two pelagic fishes. Atlantic bluefin tuna is included in the IUCN Red List of threatened species and albacore is considered near threatened, so conservation plans are needed. However, no genomic resources are available for either of them. In this study, to better understand their transcriptomes, we functionally annotated orthologous genes. In all, 159 SNPs distributed across 120 contigs of the muscle transcriptome were analyzed. Genes were predicted for 98 contigs (81.2%) using the bioinformatics tool BLAST. In addition, another bioinformatics tool, BLAST2GO, was used to assign GO terms to the genes: 41 sequences were assigned a biological process and 39 sequences a molecular function. The most frequent biological process was metabolism, and notably no cellular component was assigned to any of the sequences. The most abundant molecular function was binding, and very few catalytic activity terms were assigned. Of the initial 159 SNPs, 40 aligned with a sequence in the database after BLAST2GO was run and were polymorphic in Atlantic bluefin tuna but monomorphic in albacore. Of these 40 SNPs, 24 were located in an open reading frame, of which four were non-synonymous and 20 synonymous, and 16 were not located in a known open reading frame. This study provides information for better understanding the ecology and evolution of these species, which is important in order to establish a proper conservation plan and appropriate management.
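As a concrete illustration of how a SNP inside an open reading frame can be classified as synonymous or non-synonymous, the sketch below compares the translated codon before and after the substitution; the sequences and positions are made up, and Biopython is assumed to be available (this is a generic illustration, not the study's pipeline):

```python
from Bio.Seq import Seq

def classify_snp(orf_seq, snp_pos, alt_base):
    """Classify a SNP inside an ORF as synonymous or non-synonymous.

    orf_seq  : in-frame coding sequence (5'->3', starting at codon 1)
    snp_pos  : 0-based position of the SNP within orf_seq
    alt_base : alternative allele at that position
    """
    ref_aa = Seq(orf_seq).translate()
    alt_seq = orf_seq[:snp_pos] + alt_base + orf_seq[snp_pos + 1:]
    alt_aa = Seq(alt_seq).translate()
    codon = snp_pos // 3
    return "synonymous" if ref_aa[codon] == alt_aa[codon] else "non-synonymous"

# Hypothetical example: third-position change in the second codon
print(classify_snp("ATGGCTAAA", 5, "A"))   # GCT -> GCA, both alanine: synonymous
```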

Relevance:

100.00%

Publisher:

Abstract:

This thesis develops a study of the teaching of writing based on the analysis of student texts, with the Competencies and Skills set out in the PCN as the regulatory framework for lesson design. The aim is digital correction of compositions that (1) identifies the problems in the text produced by the writer, (2) provides instructions for correcting the problems pointed out, (3) proposes retextualization for a new round of correction, and (4) uses the identified deviations to guide grammar lessons. The main objective of the final work is to teach writing in the standard register through commented correction of texts produced by clients, who may or may not be students, since the work takes place over the Internet (at a distance) using MS Word tools. Grammar instruction focused on problems detected in the text is supported by semiotic guidance of a didactic-pedagogical nature. Tested in regular secondary school classes and in preparatory courses for selection processes, the project proved effective, since the writers (1) understood what they got wrong and why, (2) assimilated the linguistic mechanisms of correction/adequacy, and (3) practiced the substitution of structures, guided by principles of the Theory of Verbal Iconicity (SIMÕES, [1994], 2009). The subjects thus gradually appropriated the structural rules of the language, which made it possible to produce effective texts suited to the communication situation. In parallel, critical readers were formed. What this thesis seeks to demonstrate is the negotiation of texts, the digital correction technique, mediated by the Internet and therefore carried out at a distance (even when combined with face-to-face classes), and the form of the semiotic-grammatical instruction.

Relevance:

100.00%

Publisher:

Abstract:

Evaluating the mechanical properties of rock masses is the basis of rock engineering design and construction, and it has a great influence on the safety and cost of rock projects. This recognition is an inevitable consequence of new engineering activities in rock, including high-rise buildings, large bridges, complex underground installations, hydraulic projects, etc. During construction, many engineering accidents have occurred, causing great damage. According to the investigations, many failures are due to the choice of improper mechanical properties; the inability to assign proper properties has become one of the major problems for theoretical analysis and numerical simulation. Selecting the properties reasonably and effectively is therefore very significant for the planning, design and construction of rock engineering works. A multiple method based on site investigation, theoretical analysis, model tests, numerical tests and back analysis by artificial neural network is used to determine and optimize the mechanical properties for engineering design. The following outcomes are obtained.

(1) Mapping of the rock mass structure. Detailed geological investigation is the soul of fine structure description. Based on statistical windows, geological sketching and digital photography, a new method for in-situ mapping of the rock mass fine structure is developed. It has already been put into practice and received good comments at the Baihetan Hydropower Station.

(2) Theoretical analysis of rock mass containing intermittent joints. The shear strength mechanisms of joints and rock bridges are analyzed respectively, and the multiple modes of failure under different stress conditions are summarized and supplemented. Then, by introducing a deformation compatibility equation in the normal direction, the direct shear strength formulation and the compression shear strength formulation for coplanar intermittent joints, as well as the compression shear strength formulation for ladder-like intermittent joints, are deduced respectively. To apply the deduced formulations conveniently in real projects, a relationship between these formulations and the Mohr-Coulomb hypothesis is established.

(3) Model tests of rock mass containing intermittent joints. Model tests are adopted to study the mechanical contribution of joints to rock masses, and the failure modes of rock mass containing intermittent joints are summarized from them. Six typical failure modes are found in the tests, and brittle failure is the main failure mode. The evolution of shear stress, shear displacement, normal stress and normal displacement is monitored using a rigid servo test machine, and the deformation and failure behaviour during loading is analyzed. According to the model tests, the failure modes depend strongly on the joint distribution, connectivity and stress state. A comparative analysis of the complete stress-strain curves reveals different failure development stages in intact rock, across-jointed rock mass and intermittently jointed rock mass. There are four typical stages in the stress-strain curve of intact rock, namely the shear contraction stage, linear elastic stage, failure stage and residual strength stage. There are three typical stages in the across-jointed rock mass, namely the linear elastic stage, a transition zone and the sliding failure stage. Correspondingly, five typical stages are found in the intermittently jointed rock mass, namely the linear elastic stage, joint sliding, steady post-crack growth, joint coalescence failure, and residual strength. According to the strength analysis, the failure envelopes of intact rock and across-jointed rock mass are the upper and lower bounds, respectively. The strength of intermittently jointed rock mass can be evaluated by narrowing the band between these envelopes through geo-mechanical analysis.

(4) Numerical tests of rock mass. Two sets of methods, i.e. the distinct element method (DEM) based on in-situ geological mapping and realistic failure process analysis (RFPA) based on high-definition digital imaging, are developed and introduced. The operation process and analysis results are demonstrated in detail through research on rock mass parameters based on numerical tests at the Jinping First Stage Hydropower Station and the Baihetan Hydropower Station. The advantages and disadvantages of the two methods are compared and discussed, and their respective fields of application are identified.

(5) Intelligent evaluation based on artificial neural networks (ANN). The characteristics of both ANNs and rock mass parameter evaluation are discussed and summarized. According to the investigations, ANNs have a bright future in the field of rock mass parameter evaluation. The intelligent evaluation of mechanical parameters at the Jinping First Stage Hydropower Station is taken as an example to demonstrate the analysis process. Problems in five aspects, i.e. sample selection, network design, initial value selection, learning rate and expected error, are discussed in detail.
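To make the back-analysis idea in item (5) concrete, the sketch below trains a small neural network to map monitored responses (e.g., displacements) to mechanical parameters on synthetic data; the forward relation, parameter names (deformation modulus E, cohesion c) and network size are illustrative assumptions, not those of the cited projects:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

# Synthetic forward model: two monitored displacements produced by two assumed
# mechanical parameters. Both the relation and the noise level are made up.
n = 300
E = rng.uniform(5.0, 30.0, n)    # deformation modulus (GPa), assumed
c = rng.uniform(0.5, 2.5, n)     # cohesion (MPa), assumed
d1 = 120.0 / E + rng.normal(0, 0.05, n)                   # e.g. crown settlement (mm)
d2 = 80.0 / E - 3.0 * c + 10.0 + rng.normal(0, 0.05, n)   # e.g. wall convergence (mm)

X = np.column_stack([d1, d2])    # monitored responses (network inputs)
y = np.column_stack([E, c])      # parameters to back-calculate (network outputs)

# Back analysis: learn the inverse mapping from displacements to parameters
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(X[:250], y[:250])
print(net.predict(X[250:255]))   # estimated (E, c) for new monitoring data
print(y[250:255])                # true values of the synthetic cases
```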

Relevance:

100.00%

Publisher:

Abstract:

Computational Intelligence and Feature Selection provides readers with both the background and the fundamental ideas behind feature selection, with an emphasis on techniques based on rough and fuzzy sets, including their hybridizations. It introduces set theory, fuzzy set theory, rough set theory, and fuzzy-rough set theory, and illustrates the power and efficacy of the feature selection techniques described through the use of real-world applications and worked examples. Program files implementing the major algorithms covered, together with the necessary instructions and datasets, are available on the Web.
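To give a flavour of rough-set-based feature selection as covered in the book, the sketch below implements a greedy, dependency-based reduct search in the spirit of the well-known QuickReduct idea; the toy decision table and the simple implementation are illustrative assumptions, not taken from the book:

```python
from collections import defaultdict

def dependency(data, features, decision):
    """Rough-set dependency degree: |positive region| / |U|.

    An equivalence class (objects identical on `features`) lies in the positive
    region when all of its objects share the same decision value.
    """
    classes = defaultdict(list)
    for row in data:
        classes[tuple(row[f] for f in features)].append(row[decision])
    pos = sum(len(v) for v in classes.values() if len(set(v)) == 1)
    return pos / len(data)

def greedy_reduct(data, conditional, decision):
    """Keep adding the attribute that most increases the dependency degree
    until it matches that of the full conditional attribute set."""
    full = dependency(data, conditional, decision)
    reduct = []
    while dependency(data, reduct, decision) < full:
        best = max((a for a in conditional if a not in reduct),
                   key=lambda a: dependency(data, reduct + [a], decision))
        reduct.append(best)
    return reduct

# Toy decision table (attribute names and values are made up)
table = [
    {"outlook": "sunny",    "wind": "weak",   "humidity": "high",   "play": "no"},
    {"outlook": "sunny",    "wind": "strong", "humidity": "high",   "play": "no"},
    {"outlook": "rain",     "wind": "weak",   "humidity": "normal", "play": "yes"},
    {"outlook": "rain",     "wind": "strong", "humidity": "normal", "play": "no"},
    {"outlook": "overcast", "wind": "weak",   "humidity": "normal", "play": "yes"},
]
print(greedy_reduct(table, ["outlook", "wind", "humidity"], "play"))
```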

Relevance:

100.00%

Publisher:

Abstract:

The concept of focal therapy is rapidly evolving and gaining popularity from both physician and patient perspectives. We review the rationale, candidate selection, and results of the first clinical studies of focal cryoablation for selected patients with low-volume and low- to low-moderate-risk features of prostate cancer as an alternative to whole-gland treatment. In spite of an improved understanding of the tumor biology of early-stage disease, we currently have limited tools to select appropriate patients with low- to low-moderate-risk unifocal or unilateral prostate cancer who may be amenable to focal therapy. From a technical standpoint, a number of ablative treatment options for focal therapy are available, with cryoablation having the most clinical experience. Recently, several reports have been published from single- and multi-institutional studies that discuss focal therapy as a reasonable balance between cancer control and quality-of-life outcomes. Retrospective pathologic data from large prostatectomy series, however, do not clearly reveal valid and reproducible criteria to select appropriate candidates for focal cryoablation because of the complexity of tumorigenesis in early-stage disease. At this time, a more feasible option remains hemiablation of the prostate, with reasonable certainty about the absence of clinically significant cancer lesion(s) on the contralateral side of the prostate based on three-dimensional transperineal prostate biopsy mapping studies. Minimally invasive, parenchyma-preserving cryoablation can be considered a potentially feasible option in the treatment armamentarium of early-stage, localized prostate cancer in appropriately selected candidates. There is a need to further test this technique in randomized, multicenter clinical trials.

Relevance:

100.00%

Publisher:

Abstract:

We discuss a general approach to dynamic sparsity modeling in multivariate time series analysis. Time-varying parameters are linked to latent processes that are thresholded to induce zero values adaptively, providing natural mechanisms for dynamic variable inclusion/selection. We discuss Bayesian model specification, analysis and prediction in dynamic regressions, time-varying vector autoregressions, and multivariate volatility models using latent thresholding. Application to a topical macroeconomic time series problem illustrates some of the benefits of the approach in terms of statistical and economic interpretations as well as improved predictions. Supplementary materials for this article are available online.
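One common way to write a latent-thresholding mechanism of the kind described above is the following; the notation is a standard formulation of the idea, assumed here rather than quoted from the article:

$$ \beta_{jt} = b_{jt}\,\mathbf{1}\{|b_{jt}| \ge d_j\}, \qquad b_{jt} = \mu_j + \phi_j\,(b_{j,t-1} - \mu_j) + \eta_{jt}, \quad \eta_{jt} \sim N(0, \sigma_j^2), $$

so the time-varying coefficient $\beta_{jt}$ is shrunk exactly to zero whenever the latent process $b_{jt}$ falls below the threshold $d_j$, yielding dynamic variable inclusion/selection.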

Relevance:

100.00%

Publisher:

Abstract:

The growth and proliferation of invasive bacteria in engineered systems are an ongoing problem. While there is a variety of physical and chemical processes to remove and inactivate bacterial pathogens, there are many situations in which these tools are no longer effective or appropriate for the treatment of a microbial target. For example, certain strains of bacteria are becoming resistant to commonly used disinfectants, such as chlorine and UV. Additionally, the overuse of antibiotics has contributed to the spread of antibiotic resistance, and there is concern that wastewater treatment processes are contributing to the spread of antibiotic-resistant bacteria.

Due to the continually evolving nature of bacteria, it is difficult to develop methods for universal bacterial control in a wide range of engineered systems, as many of our treatment processes are static in nature. Still, invasive bacteria are present in many natural and engineered systems where the application of broad-acting disinfectants is impractical because their use may inhibit the original desired bioprocesses. Therefore, to better control the growth of treatment-resistant bacteria and to address limitations of the current disinfection processes, novel tools that are both specific and adaptable need to be developed and characterized.

In this dissertation, two possible biological disinfection processes were investigated for use in controlling invasive bacteria in engineered systems. First, antisense gene silencing, which is the specific use of oligonucleotides to silence gene expression, was investigated. This work was followed by the investigation of bacteriophages (phages), which are viruses that are specific to bacteria, in engineered systems.

For the antisense gene silencing work, a computational approach was used to quantify the number of off-targets and to determine the effects of off-targets in prokaryotic organisms. For Escherichia coli K-12 MG1655 and Mycobacterium tuberculosis H37Rv, the mean number of off-targets was found to be 15.0 ± 13.2 and 38.2 ± 61.4, respectively, which results in a reduction of greater than 90% of the effective oligonucleotide concentration. It was also demonstrated that there was high variability in the number of off-targets over the length of a gene, but that, on average, there was no general gene location that could be targeted to reduce off-targets. Therefore, this analysis needs to be performed for each gene in question. It was also demonstrated that the thermodynamic binding energy between the oligonucleotide and the mRNA accounted for 83% of the variation in the silencing efficiency, compared to the number of off-targets, which explained 43% of the variance in the silencing efficiency. This suggests that optimizing thermodynamic parameters must be prioritized over minimizing the number of off-targets. In conclusion for the antisense work, these results suggest that off-target hybrids can account for a greater than 90% reduction in the concentration of the silencing oligonucleotides, and that the effective concentration can be increased through the rational design of silencing targets by minimizing off-target hybrids.
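A minimal sketch of how off-target hybridization sites might be counted for a candidate antisense oligonucleotide is shown below; the matching criterion (near-exact complementarity with a small mismatch allowance) and the toy sequences are assumptions for illustration and do not reproduce the dissertation's thermodynamic scoring:

```python
def reverse_complement(seq):
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def count_off_targets(oligo, transcripts, target_id, max_mismatches=1):
    """Count sites in non-target transcripts that the antisense oligo could bind.

    oligo       : antisense oligonucleotide sequence (DNA alphabet)
    transcripts : dict of {transcript_id: mRNA sequence, written in DNA alphabet}
    target_id   : the intended target; its sites are not counted as off-targets
    """
    site = reverse_complement(oligo)   # mRNA region the oligo would hybridize to
    k = len(site)
    hits = 0
    for tid, mrna in transcripts.items():
        if tid == target_id:
            continue
        for i in range(len(mrna) - k + 1):
            mismatches = sum(a != b for a, b in zip(mrna[i:i + k], site))
            if mismatches <= max_mismatches:
                hits += 1
    return hits

# Toy example with made-up sequences
transcripts = {
    "geneA": "ATGGCTAAGGCTTAA",
    "geneB": "ATGGCTAAGCCTTAA",
}
print(count_off_targets("CTTAGCCAT", transcripts, target_id="geneA"))
```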

Regarding the work with phages, the disinfection rates of bacteria in the presence of phages were determined. The disinfection rates of E. coli K12 MG1655 in the presence of coliphage Ec2 ranged up to 2 h^-1 and were dependent on both the initial phage and bacterial concentrations. Increasing initial phage concentrations resulted in increasing disinfection rates, and generally, increasing initial bacterial concentrations resulted in increasing disinfection rates. However, disinfection rates were found to plateau at higher bacterial and phage concentrations. A multiple linear regression model was used to predict the disinfection rates as a function of the initial phage and bacterial concentrations, and this model was able to explain 93% of the variance in the disinfection rates. The disinfection rates were also modeled with a particle aggregation model. The results from these model simulations suggested that at lower phage and bacterial concentrations there are not enough collisions to support active disinfection, which therefore limits the conditions and systems where phage-based bacterial disinfection is possible. Additionally, the particle aggregation model overpredicted the disinfection rates at higher phage and bacterial concentrations of 10^8 PFU/mL and 10^8 CFU/mL, suggesting that other interactions were occurring at these higher concentrations. Overall, this work highlights the need for the development of alternative models to more accurately describe the dynamics of this system at a variety of phage and bacterial concentrations. Finally, the minimum required hydraulic residence time was calculated for a continuous stirred-tank reactor and a plug flow reactor (PFR) as a function of both the initial phage and bacterial concentrations, which suggested that phage treatment in a PFR is theoretically possible.
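The multiple linear regression described above can be sketched as follows, assuming (as a guess at a reasonable model form, not the dissertation's exact specification) that the disinfection rate is regressed on the log-transformed initial phage and bacterial concentrations; the observations below are hypothetical placeholders:

```python
import numpy as np

# Hypothetical observations: initial phage (PFU/mL), initial bacteria (CFU/mL),
# and the measured disinfection rate (1/h)
phage0 = np.array([1e6, 1e7, 1e8, 1e6, 1e7, 1e8])
bact0 = np.array([1e6, 1e6, 1e6, 1e8, 1e8, 1e8])
rate = np.array([0.3, 0.8, 1.4, 0.5, 1.1, 1.9])

# Design matrix: intercept + log10 of each initial concentration
X = np.column_stack([np.ones_like(rate), np.log10(phage0), np.log10(bact0)])
coef, *_ = np.linalg.lstsq(X, rate, rcond=None)
print("intercept, log10(phage0), log10(bact0) coefficients:", coef)

# Predicted disinfection rate for a new condition
new = np.array([1.0, np.log10(5e7), np.log10(1e7)])
print("predicted rate (1/h):", new @ coef)
```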

In addition to determining disinfection rates, the long-term bacterial growth inhibition potential was determined for a variety of phages with both Gram-negative and Gram-positive bacteria. It was determined that, on average, phages can be used to inhibit bacterial growth for up to 24 h, and that this effect was concentration dependent for various phages at specific time points. Additionally, it was found that a phage cocktail was no more effective at inhibiting bacterial growth over the long term than the best-performing phage in isolation.

Finally, for an industrial application, the use of phages to inhibit invasive Lactobacilli in ethanol fermentations was investigated. It was demonstrated that phage 8014-B2 can achieve a greater than 3-log inactivation of Lactobacillus plantarum during a 48 h fermentation. Additionally, it was shown that phages can be used to protect final product yields and maintain yeast viability. By modeling the fermentation system with differential equations, it was determined that there was a 10 h window at the beginning of the fermentation run during which the addition of phages could protect final product yields, and that after 20 h no additional benefit of phage addition was observed.
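A toy version of the kind of differential-equation fermentation model mentioned above is sketched below, assuming mass-action infection of Lactobacillus by phage and simple logistic-style growth; the model structure, variable names and parameter values are illustrative assumptions, not the dissertation's fitted model:

```python
from scipy.integrate import solve_ivp

def fermentation(t, y, mu_y=0.3, mu_l=0.4, k_inf=1e-9, burst=50.0):
    """Toy model: yeast Y, Lactobacillus L, phage P (per mL), ethanol E (g/L)."""
    Y, L, P, E = y
    dY = mu_y * Y * (1 - (Y + L) / 1e9)                  # yeast growth, shared capacity
    dL = mu_l * L * (1 - (Y + L) / 1e9) - k_inf * L * P  # growth minus phage infection
    dP = burst * k_inf * L * P                           # phage released by lysed cells
    dE = 2e-9 * Y                                        # ethanol produced by yeast
    return [dY, dL, dP, dE]

y0 = [1e7, 1e5, 1e6, 0.0]                                # phage added at t = 0
sol = solve_ivp(fermentation, (0.0, 48.0), y0, max_step=0.5)
Y48, L48, P48, E48 = sol.y[:, -1]
print(f"after 48 h: yeast {Y48:.2e}/mL, Lactobacillus {L48:.2e}/mL, ethanol {E48:.1f} g/L")
```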

In conclusion, this dissertation improved the current methods for designing antisense gene silencing targets for prokaryotic organisms, and characterized phages from an engineering perspective. First, the current design strategy for antisense targets in prokaryotic organisms was improved through the development of an algorithm that minimized the number of off-targets. For the phage work, a framework was developed to predict the disinfection rates in terms of the initial phage and bacterial concentrations. In addition, the long-term bacterial growth inhibition potential of multiple phages was determined for several bacteria. In regard to the phage application, phages were shown to protect both final product yields and yeast concentrations during fermentation. Taken together, this work suggests that the rational design of phage treatment is possible and further work is needed to expand on this foundation.

Relevance:

100.00%

Publisher:

Abstract:

Although many feature selection methods for classification have been developed, there is a need to identify genes in high-dimensional data with censored survival outcomes. Traditional methods for gene selection in classification problems have several drawbacks. First, the majority of gene selection approaches for classification are single-gene based. Second, many of the gene selection procedures are not embedded within the algorithm itself. The technique of random forests has been found to perform well in high-dimensional data settings with survival outcomes, and it has an embedded feature to identify variables of importance. Therefore, it is an ideal candidate for gene selection in high-dimensional data with survival outcomes. In this paper, we develop a novel method based on random forests to identify a set of prognostic genes. We compare our method with several machine learning methods and various node split criteria using several real data sets. Our method performed well in both simulations and real data analysis. Additionally, we have shown the advantages of our approach over single-gene-based approaches. Our method incorporates multivariate correlations in microarray data for survival outcomes. The described method allows us to better utilize the information available from microarray data with survival outcomes.
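A minimal sketch of using a random survival forest with a permutation-based importance measure to rank genes for censored outcomes is given below; it assumes the scikit-survival package is available and uses synthetic placeholder data, and it is a generic illustration of the approach rather than the paper's specific algorithm:

```python
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Synthetic expression matrix: 100 samples x 50 "genes"; gene 0 drives risk
X = rng.normal(size=(100, 50))
time = np.exp(2.0 - 0.8 * X[:, 0] + rng.normal(scale=0.3, size=100))
event = rng.random(100) < 0.7                      # roughly 30% censoring
y = Surv.from_arrays(event=event, time=time)

rsf = RandomSurvivalForest(n_estimators=200, min_samples_leaf=5, random_state=0)
rsf.fit(X, y)

# Rank genes by permutation importance (drop in concordance index when permuted)
imp = permutation_importance(rsf, X, y, n_repeats=5, random_state=0)
top = np.argsort(imp.importances_mean)[::-1][:5]
print("top candidate genes (column indices):", top)
```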

Relevance:

100.00%

Publisher:

Abstract:

Response inhibition is a key component of executive control, but its relation to other cognitive processes is not well understood. We recently documented the "inhibition-induced forgetting effect": no-go cues are remembered more poorly than go cues. We attributed this effect to central-resource competition, whereby response inhibition saps attention away from memory encoding. However, this proposal is difficult to test with behavioral means alone. We therefore used fMRI in humans to test two neural predictions of the "common resource hypothesis": (1) brain regions associated with response inhibition should exhibit greater resource demands during encoding of subsequently forgotten than remembered no-go cues; and (2) this higher inhibitory resource demand should leave memory encoding regions with fewer resources available during encoding of subsequently forgotten no-go cues. Participants categorized face stimuli by gender in a go/no-go task and, following a delay, performed a surprise recognition memory test for those faces. Replicating previous findings, memory was worse for no-go than for go stimuli. Crucially, forgetting of no-go cues was predicted by high inhibitory resource demand, as quantified by the trial-by-trial ratio of activity in neural "no-go" versus "go" networks. Moreover, this index of inhibitory demand exhibited an inverse trial-by-trial relationship with activity in brain regions responsible for the encoding of no-go cues into memory, notably the ventrolateral prefrontal cortex. This seesaw pattern between the neural resource demand of response inhibition and activity related to memory encoding directly supports the hypothesis that response inhibition temporarily saps attentional resources away from stimulus processing. SIGNIFICANCE STATEMENT: Recent behavioral experiments showed that inhibiting a motor response to a stimulus (a "no-go cue") impairs subsequent memory for that cue. Here, we used fMRI to test whether this "inhibition-induced forgetting effect" is caused by competition for neural resources between the processes of response inhibition and memory encoding. We found that trial-by-trial variations in neural inhibitory resource demand predicted subsequent forgetting of no-go cues and that higher inhibitory demand was furthermore associated with lower concurrent activation in brain regions responsible for successful memory encoding of no-go cues. Thus, motor inhibition and stimulus encoding appear to compete with each other: when more resources have to be devoted to inhibiting action, fewer are available for encoding sensory stimuli.