890 results for Time Trade Off


Relevance:

80.00%

Publisher:

Abstract:

Changes in patterns and magnitudes of integration may influence the ability of a species to respond to selection. Consequently, modularity has often been linked to the concept of evolvability, but their relationship has rarely been tested empirically. One possible explanation is the lack of analytical tools to compare patterns and magnitudes of integration among diverse groups in a way that explicitly relates these aspects to the quantitative genetics framework. We apply such a framework here, using the multivariate response-to-selection equation to simulate the evolutionary behavior of several mammalian orders in terms of their flexibility, evolvability and constraints in the skull. We interpreted these simulation results in light of the integration patterns and magnitudes of the same mammalian groups, described in a companion paper. We found that larger magnitudes of integration were associated with a blurring of the modules in the skull and with larger portions of the total variation explained by size, which in turn can exert a strong evolutionary constraint, thus decreasing evolutionary flexibility. Conversely, lower overall magnitudes of integration were associated with distinct modules in the skull, with a smaller fraction of the total variation associated with size and, consequently, with weaker constraints and more evolutionary flexibility. Flexibility and constraints are, therefore, two sides of the same coin, and we found them to be quite variable among mammals. Neither the overall magnitude of morphological integration, nor modularity itself, nor its consequences in terms of constraints and flexibility, was associated with the absolute size of the organisms; all were strongly associated with the proportion of the total variation in skull morphology captured by size. Therefore, the history of the mammalian skull is marked by a trade-off between modularity and evolvability.
Our data provide evidence that, despite the stasis in integration patterns, the plasticity in the magnitude of integration in the skull had important consequences for the evolutionary flexibility of the mammalian lineages.
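The multivariate response-to-selection equation mentioned above is the breeder's (Lande) equation, Δz̄ = Gβ, where G is the additive genetic covariance matrix and β the selection gradient; evolutionary flexibility can then be scored as the alignment between the response and the gradient. A minimal sketch (the G matrices and gradient below are hypothetical, chosen only to illustrate the integrated-vs-modular contrast):

```python
import math

def respond(G, beta):
    """Response to selection: delta_z = G @ beta (breeder's equation)."""
    return [sum(G[i][j] * beta[j] for j in range(len(beta)))
            for i in range(len(G))]

def flexibility(dz, beta):
    """Cosine of the angle between the response and the selection
    gradient: 1.0 means the population evolves exactly in the direction
    selection favors; values near 0 indicate strong constraint."""
    dot = sum(a * b for a, b in zip(dz, beta))
    norm = (math.sqrt(sum(a * a for a in dz))
            * math.sqrt(sum(b * b for b in beta)))
    return dot / norm

# Hypothetical additive genetic covariance matrices for two traits:
G_integrated = [[1.0, 0.9], [0.9, 1.0]]   # strong integration (size-dominated)
G_modular    = [[1.0, 0.1], [0.1, 1.0]]   # weak integration, distinct modules
beta = [1.0, 0.0]                          # selection on trait 1 only

f_int = flexibility(respond(G_integrated, beta), beta)
f_mod = flexibility(respond(G_modular, beta), beta)
# The weakly integrated G tracks the selection gradient more closely:
assert f_mod > f_int
```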

Relevance:

80.00%

Publisher:

Abstract:

Laboratory strains and natural isolates of Escherichia coli differ in their level of stress resistance due to strain variation in the level of the sigma factor sigma(S) (or RpoS), the transcriptional master controller of the general stress response. We found that the high level of RpoS in one laboratory strain (MC4100) was partially dependent on an elevated basal level of ppGpp, an alarmone responding to stress and starvation. The elevated ppGpp was caused by two mutations in spoT, a gene associated with ppGpp synthesis and degradation. The nature of the spoT allele influenced the level of ppGpp in both MC4100 and another commonly used K-12 strain, MG1655. Introduction of the spoT mutation into MG1655 also resulted in an increased level of RpoS, but the amount of RpoS was lower in MG1655 than in MC4100 with either the wild-type or mutant spoT allele. In both MC4100 and MG1655, high ppGpp concentration increased RpoS levels, which in turn reduced growth with poor carbon sources like acetate. The growth inhibition resulting from elevated ppGpp was relieved by rpoS mutations. The extent of the growth inhibition by ppGpp, as well as the magnitude of the relief by rpoS mutations, differed between MG1655 and MC4100. These results together suggest that spoT mutations represent one of several polymorphisms influencing the strain variation of RpoS levels. Stress resistance was higher in strains with the spoT mutation, which is consistent with the conclusion that microevolution affecting either or both ppGpp and RpoS can reset the balance between self-protection and nutritional capability, the SPANC balance, in individual strains of E. coli.

Relevance:

80.00%

Publisher:

Abstract:

Two fundamental processes usually arise in the production planning of many industries. The first consists of deciding how many final products of each type have to be produced in each period of a planning horizon, the well-known lot sizing problem. The other consists of cutting raw materials in stock in order to produce smaller parts used in the assembly of final products, the well-studied cutting stock problem. In this paper the decision variables of these two problems are made dependent on each other in order to obtain a globally optimal solution. Setups that are typically present in lot sizing problems are relaxed together with the integer frequencies of cutting patterns in the cutting problem. A large-scale linear optimization problem therefore arises, which is solved exactly by a column generation technique. It is worth noting that this new combined problem still takes into account the trade-off between storage costs (for final products and parts) and trim losses (in the cutting process). We present several sets of computational tests, analyzed over three different scenarios. These results show that, by combining the problems and using an exact method, it is possible to obtain significant gains when compared to the usual industrial practice, which solves them in sequence. (C) 2010 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
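Column generation for such problems alternates between a master LP over the cutting patterns found so far and a pricing subproblem that searches for a new pattern with negative reduced cost; on the cutting stock side that pricing step is an unbounded knapsack over the master's dual values. A sketch of the pricing step only (the widths, duals and roll width are illustrative, not data from the paper):

```python
def price_pattern(widths, duals, roll_width):
    """Pricing step of Gilmore-Gomory column generation for cutting
    stock: find the cutting pattern maximizing total dual value, an
    unbounded knapsack over the roll width.  A pattern with value > 1
    has negative reduced cost and enters the master LP."""
    best = [0.0] * (roll_width + 1)
    take = [None] * (roll_width + 1)   # last piece cut at this capacity
    for cap in range(1, roll_width + 1):
        best[cap], take[cap] = best[cap - 1], take[cap - 1]
        for i, w in enumerate(widths):
            if w <= cap and best[cap - w] + duals[i] > best[cap]:
                best[cap], take[cap] = best[cap - w] + duals[i], i
    pattern = [0] * len(widths)        # reconstruct piece counts
    cap = roll_width
    while cap > 0 and take[cap] is not None:
        pattern[take[cap]] += 1
        cap -= widths[take[cap]]
    return best[roll_width], pattern

# Illustrative data: two piece widths, duals from some master iteration,
# a stock roll of width 10:
val, pat = price_pattern([4, 3], [0.55, 0.4], 10)
assert val > 1.0        # negative reduced cost: add this pattern
assert pat == [1, 2]    # one piece of width 4, two of width 3
```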

Relevance:

80.00%

Publisher:

Abstract:

In this paper, we consider a classical problem of complete test generation for deterministic finite-state machines (FSMs) in a more general setting. The first generalization is that the number of states in implementation FSMs can even be smaller than that of the specification FSM. Previous work deals only with the case where the implementation FSMs are allowed to have the same number of states as the specification FSM. This generalization gives the test designer more options: when traditional methods trigger a test explosion for large specification machines, tests with a lower, but still guaranteed, fault coverage can be generated instead. The second generalization is that tests can be generated starting from a user-defined test suite, by incrementally extending it until the desired fault coverage is achieved. Solving the generalized test derivation problem, we formulate sufficient conditions for test suite completeness that are weaker than the existing ones and use them to elaborate an algorithm that can be used both for extending user-defined test suites to achieve the desired fault coverage and for test generation from scratch. We present experimental results indicating that the proposed algorithm allows a trade-off between the length and the fault coverage of test suites.
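The notion of fault coverage above rests on a simple check: a test suite detects a faulty implementation when some test produces an output word that differs from the specification's. A minimal sketch with Mealy machines encoded as dictionaries (the machines and suite are hypothetical, not from the paper):

```python
def run(fsm, start, seq):
    """Run an input sequence on a Mealy FSM given as
    {(state, input): (next_state, output)}; return the output word."""
    state, out = start, []
    for x in seq:
        state, y = fsm[(state, x)]
        out.append(y)
    return out

def detects(tests, spec, impl, start=0):
    """A test suite kills a faulty implementation if some test yields
    an output word different from the specification's."""
    return any(run(spec, start, t) != run(impl, start, t) for t in tests)

# Hypothetical 2-state specification over inputs {'a','b'} and a mutant
# implementation with one corrupted output:
spec = {(0, 'a'): (1, 0), (0, 'b'): (0, 1),
        (1, 'a'): (0, 1), (1, 'b'): (1, 0)}
mut = dict(spec)
mut[(1, 'b')] = (1, 1)                   # output fault in state 1

suite = [['a', 'b'], ['b', 'a']]
assert detects(suite, spec, mut)         # 'a' then 'b' reaches the fault
assert not detects([['b']], spec, mut)   # a shorter suite misses it
```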

Relevance:

80.00%

Publisher:

Abstract:

Model trees are a particular case of decision trees employed to solve regression problems. They have the advantage of presenting an interpretable output, which helps the end user gain confidence in the prediction and provides a basis for new insight about the data, confirming or rejecting hypotheses previously formed. Moreover, model trees present an acceptable level of predictive performance in comparison to most techniques used for solving regression problems. Since generating the optimal model tree is an NP-complete problem, traditional model tree induction algorithms use a greedy top-down divide-and-conquer strategy, which may not converge to the globally optimal solution. In this paper, we propose a novel algorithm based on the evolutionary algorithms paradigm as an alternative heuristic for generating model trees, in order to improve convergence to globally near-optimal solutions. We call our new approach evolutionary model tree induction (E-Motion). We test its predictive performance using public UCI data sets, and we compare the results to traditional greedy regression/model tree induction algorithms, as well as to other evolutionary approaches. Results show that our method presents a good trade-off between predictive performance and model comprehensibility, which may be crucial in many machine learning applications. (C) 2010 Elsevier Inc. All rights reserved.
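In evolutionary tree induction, the trade-off between predictive performance and comprehensibility is typically encoded in the fitness function that guides the search. The abstract does not give E-Motion's actual fitness, so the penalized form below is only an illustrative sketch with an assumed size weight `alpha`:

```python
import math

def fitness(sse, n_samples, n_nodes, alpha=0.05):
    """Hypothetical fitness for evolutionary model tree induction:
    root-mean-square error penalized by tree size, capturing the
    trade-off between predictive performance and comprehensibility.
    Lower is better; `alpha` weighs compactness against accuracy."""
    rmse = math.sqrt(sse / n_samples)
    return rmse + alpha * n_nodes

# A slightly less accurate but much smaller tree can win the trade-off:
big   = fitness(sse=90.0,  n_samples=100, n_nodes=41)
small = fitness(sse=110.0, n_samples=100, n_nodes=7)
assert small < big
```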

Relevance:

80.00%

Publisher:

Abstract:

In the work reported here we were able to control the photobleaching of poly[2-methoxy-5-(2'-ethyl-hexyloxy)-1,4-phenylene vinylene] (MEH-PPV), excited by two-photon absorption, using femtosecond pulse shaping. By applying a cosine-like spectral phase mask, we observe a threefold reduction in the photobleaching rate, while the fluorescence intensity decreases by 20%, in comparison to the values obtained with a Fourier-transform-limited pulse. These results demonstrate an interesting trade-off between photobleaching rate and nonlinear fluorescence intensity. The possible mechanism behind this process is discussed in terms of the pulse spectral profile and the absorbance band of MEH-PPV. (C) 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim
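A cosine-like spectral phase mask of the kind referred to above is commonly written as (a sketch in our own notation; $\alpha$, $\Gamma$ and $\varphi$ are the mask depth, modulation frequency and offset, none of which are specified in the abstract):

$$E_{\mathrm{shaped}}(\omega) = E(\omega)\, e^{\,i\,\alpha \cos(\Gamma\omega + \varphi)}$$

so that the Fourier-transform-limited pulse corresponds to $\alpha = 0$, and varying the mask parameters redistributes the two-photon excitation spectrum relative to the MEH-PPV absorbance band.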

Relevance:

80.00%

Publisher:

Abstract:

This paper proposes a novel way to combine different observation models in a particle filter framework. This so-called auto-adjustable observation model enhances particle filter accuracy when the tracked objects overlap, without imposing a large runtime penalty on the whole tracking system. The approach has been tested in two important real-world situations related to animal behavior: mice and larvae tracking. The proposal was compared to some state-of-the-art approaches, and the results show, on the datasets tested, that a good trade-off between accuracy and runtime can be achieved using an auto-adjustable observation model. (C) 2009 Elsevier B.V. All rights reserved.
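One way to read "auto-adjustable observation model" is an observation likelihood whose parameters switch when overlap is detected. The paper's actual models are not given in the abstract, so the 1-D bootstrap filter below, which merely inflates the observation noise during overlap instead of running a costlier model, is a hypothetical sketch of the idea:

```python
import math
import random

def gauss_lik(z, mean, sigma):
    """Gaussian observation likelihood."""
    return (math.exp(-(z - mean) ** 2 / (2 * sigma ** 2))
            / (sigma * math.sqrt(2 * math.pi)))

def pf_step(particles, z, sigma, motion_sd=0.5):
    """One predict/update/resample cycle of a bootstrap particle filter
    tracking a 1-D position from a noisy measurement z."""
    particles = [p + random.gauss(0, motion_sd) for p in particles]   # predict
    w = [gauss_lik(z, p, sigma) for p in particles]                   # update
    total = sum(w) or 1.0
    w = [x / total for x in w]
    return random.choices(particles, weights=w, k=len(particles))     # resample

def track(zs, overlapping, n=500):
    """Switch to a broader (less confident) observation model whenever
    the targets are flagged as overlapping."""
    random.seed(1)
    particles = [random.uniform(-1, 1) for _ in range(n)]
    estimates = []
    for z, ov in zip(zs, overlapping):
        sigma = 2.0 if ov else 0.5     # inflate observation noise on overlap
        particles = pf_step(particles, z, sigma)
        estimates.append(sum(particles) / len(particles))
    return estimates

est = track(zs=[0.0, 0.5, 1.0, 1.5],
            overlapping=[False, False, True, True])
assert est[-1] > est[0]   # estimates drift toward the rising measurements
```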

Relevance:

80.00%

Publisher:

Abstract:

The introduction of a new technology, High Speed Downlink Packet Access (HSDPA), in Release 5 of the 3GPP specifications raises the question of its performance capabilities. HSDPA is a promising technology offering theoretical rates of up to 14.4 Mbit/s. The main objective of this thesis is to discuss the system-level performance of HSDPA. The exploration focuses mainly on the Packet Scheduler, because it is the central entity of the HSDPA design. Due to its function, the Packet Scheduler has a direct impact on HSDPA system performance. It likewise determines end-user performance, and more specifically the relative performance between the users in a cell. The thesis analyzes several Packet Scheduling algorithms that can optimize the trade-off between system capacity and end-user performance for the traffic classes targeted in this thesis. The performance of the algorithms in the HSDPA system is evaluated through computer simulations carried out under realistic conditions, so that the results reflect the algorithms' efficiency as precisely as possible. The simulation of the HSDPA system and the algorithms is coded in C/C++.
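A representative member of the Packet Scheduling family studied in such work is the Proportional Fair scheduler, which sits between Round Robin (fair but capacity-blind) and Max C/I (capacity-maximizing but starvation-prone). The sketch below is a generic textbook version in Python, not the thesis's C/C++ simulator; the per-TTI rates and averaging horizon are illustrative:

```python
def proportional_fair(rates_per_tti, horizon_tc=100.0):
    """Proportional Fair scheduling: each TTI, serve the user that
    maximizes instantaneous feasible rate over average served
    throughput, then update the averages with an exponential filter."""
    n = len(rates_per_tti[0])
    avg = [1e-6] * n                   # average served throughput per user
    served = [0] * n                   # TTIs granted to each user
    for rates in rates_per_tti:        # rates[i]: feasible rate of user i
        pick = max(range(n), key=lambda i: rates[i] / avg[i])
        served[pick] += 1
        for i in range(n):             # exponential moving average update
            r = rates[i] if i == pick else 0.0
            avg[i] += (r - avg[i]) / horizon_tc
    return served

# Two users with static channels: user 0 always has the better rate, yet
# PF still serves user 1 (Max C/I would starve it).  With static channels
# PF converges to near-equal time shares; its capacity gain comes from
# riding multi-user fading peaks, absent in this toy input.
served = proportional_fair([[2.0, 1.0]] * 200)
assert served[1] > 0
assert abs(served[0] - served[1]) <= 20
```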

Relevance:

80.00%

Publisher:

Abstract:

Objective: To compare results from various tapping tests with diary responses in advanced PD. Background: A home-environment test battery for assessing patient state in advanced PD, consisting of diary assessments and motor tests, was constructed for a hand computer with touch screen and mobile communication. The diary questions: 1. walking; 2. time in off, on and dyskinetic states; 3. off at worst; 4. dyskinetic at worst; 5. cramps; and 6. satisfied with function, relate to the recent past. Question 7, self-assessment, allows seven steps from -3 (very off) to +3 (very dyskinetic) and relates to right now. Tapping tests outline: 8. alternately tapping two fields (un-cued) with the right hand; 9. same as 8 but using the left hand; 10. tapping an active field (out of two) following a system-generated rhythm (increasing speed) with the dominant hand; 11. tapping an active field (out of four) that randomly changes location when tapped, using the dominant hand. Methods: 65 patients (currently on Duodopa, or candidates for this treatment) entered diary responses and performed tapping tests four times per day during one to six periods of seven days in length. In total there were 224 test periods and 6039 test occasions. Speed for tapping test 10 was discarded, and tests 8 and 9 were combined by taking means. Descriptive statistics were used to present the variation of the test variables in relation to self-assessment (question 7). Pearson correlation coefficients between speed and accuracy (percent correct) in tapping tests and diary responses were calculated. Results: Mean compliance (percentage of completed test occasions per test period) was 83% and the median was 93%. There were large differences in both mean tapping speed and accuracy between the different self-assessed states. Correlations between diary responses and tapping results were small (-0.2 to 0.3, negative values for off-time and dyskinetic-time, which had opposite scale directions).
Correlations between tapping results were all positive (0.1 to 0.6). Conclusions: The diary responses and tapping results provided different information. The low correlations can partly be explained by the fact that the questions related to the past, and by random variability, which could be reduced by taking means over test periods. Both tapping speed and accuracy reflect the motor function of the patient to a large extent.
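For reference, the Pearson correlation coefficient used in the analysis above can be computed as follows (the paired tapping-speed and self-assessment values are invented for illustration, not study data):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between paired observations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical values: tapping speed vs. self-assessed state (-3..+3)
speed = [120, 150, 180, 160, 140]
state = [-2, -1, 0, 1, 2]
r = pearson(speed, state)
assert -1.0 <= r <= 1.0   # a weak positive correlation in this toy data
```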

Relevance:

80.00%

Publisher:

Abstract:

As competition for applicants, legislative focus on graduation rates, and questions about the lifetime value of a degree intensify, many institutions are blurring boundaries between academic advising and co-curricular and career advising to promote student success and differentiate brand. This report examines how leaders break the trade-off between high-touch service and budget realities, identifying breakthrough practices, as well as the models and technologies required to deliver them in a cost-effective manner.

Relevance:

80.00%

Publisher:

Abstract:

The subject of this thesis is the demand for natural gas in a new, expanding market. The general objective is to develop a methodology for forecasting natural gas demand, establishing a method that integrates the several demand-forecasting alternatives developed. First, a literature review covers the following topics: (i) energy demand in Brazil and worldwide; (ii) natural gas demand in Brazil and worldwide; (iii) natural gas supply in Rio Grande do Sul; (iv) energy demand forecasting models; (v) qualitative research and focus groups. The alternatives for forecasting natural gas demand are then developed: (i) based on historical data from Rio Grande do Sul: from the past behavior of the state's energy demand, future demand is extrapolated and a percentage share of natural gas in this market is estimated; (ii) based on forecasting equations grounded in socio-economic data: taking as a basis the population size, GDP, the number of vehicles in the state's fleet, and the respective growth rates of each of these variables, the potential consumption of natural gas is estimated; (iii) based on historical data from other countries: taking data from countries where the natural gas market is already consolidated, an analogy is drawn to the Brazilian case, particularly the state of Rio Grande do Sul, placing this market along the growth and maturation curve of the consolidated market; (iv) based on the opinion of potential customers: through focus groups, we seek to understand the variables that influence the decisions of energy consumers, as well as the trade-offs made when choosing among different energy sources, using stated-preference techniques; (v) based on expert opinion: through focus groups with
professionals from the energy sector, economists, engineers and public administrators, the expected demand profile for natural gas is sought. The individual alternatives are applied to forecasting natural gas demand in the state of Rio Grande do Sul, verifying the need for adaptations or further development of the individual approaches. At this point the construction of the integrating method begins, starting from the benefits and shortcomings presented by each individual alternative. A proposal is then elaborated to integrate the results of the several approaches. It consists of building a method for forecasting natural gas demand that reconciles the qualitative and quantitative results generated by the individual approaches. The method starts from different inputs, i.e., the output data generated by each individual approach, and arrives at a single output optimized with respect to the initial condition. The final phase is the application of the proposed method to forecasting natural gas demand in the state of Rio Grande do Sul, using the database generated for the state's particular case.
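The integration step described above, taking the outputs of the individual approaches as inputs and producing a single optimized output, can be caricatured as a weighted combination of forecasts. The inverse-error weighting below is a common textbook scheme and an assumption on our part, not the thesis's actual method; the figures are invented:

```python
def combine_forecasts(forecasts, errors):
    """Combine the outputs of several forecasting approaches into a
    single figure, weighting each approach inversely to its estimated
    error (illustrative scheme; weights and error measures assumed)."""
    weights = [1.0 / e for e in errors]
    total = sum(weights)
    return sum(w * f for w, f in zip(weights, forecasts)) / total

# Hypothetical demand forecasts (million m3/day) from four approaches:
forecasts = [4.0, 5.0, 4.6, 6.0]
errors    = [0.5, 1.0, 0.5, 2.0]   # lower = historically more reliable
combined = combine_forecasts(forecasts, errors)
assert min(forecasts) <= combined <= max(forecasts)
```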

Relevance:

80.00%

Publisher:

Abstract:

We model the trade-off between the balance and the strength of incentives implicit in the choice between hierarchical and matrix organizational structures. We show that managerial biases determine which structure is optimal: hierarchical forms are preferred when biases are low, while matrix structures are preferred when biases are high. Moreover, the results show that there is always a level of bias for which matrix design can achieve the expected profit obtained by shareholders if they could directly control the firm. We also show that the main trade-off, i.e., hierarchical versus matrix structure, is preserved under asymmetric levels of bias among managers and when low-level workers perceive activities with complementary efforts.

Relevance:

80.00%

Publisher:

Abstract:

This work seeks to explore, through empirical tests, which of the two main theories of firms' optimal capital structure choice, the Static Trade-off Theory (STT) or the Pecking Order Theory (POT), better explains the financing decisions of Brazilian companies. Additionally, the effect of information asymmetry and of stock market performance and liquidity on these decisions was studied. Econometric methods were applied to data from publicly traded Brazilian companies over the period 1995 to 2005, testing two models representative of the Static Trade-off Theory (STT) and the Pecking Order Theory (POT). First the broad group of companies was tested, and then tests were performed on subgroups, controlling for the effects of stock market performance and liquidity, the liquidity of the borrowing companies' shares, and information asymmetry. The results obtained indicate that the Pecking Order Theory, in its semi-strong form, is the best explanatory theory for the capital structure choice of Brazilian companies, in which internal cash generation together with interest-bearing and operating debt is the company's priority source of funds, with some, albeit low, use of share issuance. The empirical studies for the control subgroups suggest that market liquidity and the liquidity of companies' shares are factors influencing companies' propensity to issue shares, as is information asymmetry. Stock market performance, based on the data analyzed, appears to have little influence on fundraising via share issuance; no distinction was made in this study between public and private issuances.

Relevance:

80.00%

Publisher:

Abstract:

We construct and simulate a model to study the welfare and macroeconomic impact of government actions when its productive role is taken into account. The trade-off between public investment and public consumption is also investigated, since public consumption is introduced as a public good that directly affects individuals' well-being. Our results replicate econometric evidence showing that part of the observed slowdown of U.S. productivity growth can be explained by the reduction of investment in infrastructure, which also implied a sizable welfare loss to the population. Depending on the methodology used, we found a welfare cost ranging from 4.2% to 1.16% of GNP. The impact of fiscal policy can be qualitatively and quantitatively distinct depending on whether we assume a higher or smaller output elasticity to infrastructure. If it is high enough, increases in tax rates may stimulate accumulation and production, which is the opposite prediction of standard neoclassical models.

Relevance:

80.00%

Publisher:

Abstract:

In this paper we analyze the optimality of allowing firms to observe signals of workers' characteristics in an optimal taxation framework. We show that it is always optimal to prohibit signals that disclose information about differences in the intrinsic productivities of workers, such as mandatory health exams and IQ tests. On the other hand, it is never optimal to forbid signals that reveal information about the comparative advantages of workers, such as their specialization and profession. When signals are mixed (they disclose both types of information), there is a trade-off between efficiency and equity. It is optimal to prohibit signals with sufficiently low comparative-advantage content.