128 results for yields
Abstract:
The species × location interaction was of great importance in explaining the behaviour of the genetic material. The study presented here shows, for the first time, the performance under field conditions of the new species tritordeum, compared to wheat and triticale, across a wide range of Mediterranean countries (Spain, Lebanon and Tunisia). The results revealed that, despite the diversity of environmental conditions, the main differences in yield were due to genotypes, especially to differences between species. The multi-local study with different growth conditions provided important information about the effect of water availability on yield. In the lowest-yielding environments (rainfed Tunisia), tritordeum and triticale yields were equivalent. However, under better growth conditions (Spain), tritordeum yield was lower than that of wheat and triticale. Interestingly, when water limitation extended over the pre-anthesis period, differences in yield between tritordeum and wheat or triticale were larger than when water stress occurred during anthesis. These variations are explained by the fact that kernel weight was found to be the limiting factor for yield determination in tritordeum, and a delay in the anthesis date may have been the cause of the low kernel weight and low yield under Mediterranean drought conditions. Such differences in yield between tritordeum and wheat or triticale could also be explained by the fact that tritordeum is a relatively new species and far fewer resources have been devoted to its improvement compared to wheat and triticale. Our results suggest that breeding efforts should be directed towards an earlier anthesis date and a longer grain-filling period. Tritordeum proved to have potential to be grown as a new crop in drought environments, since its performance was quite close to that of wheat and triticale. In addition, it has qualitative added value that may improve farmers' income per unit of land.
Study of social behaviours of robotic swarms with application to the cleaning of unstructured spaces
Abstract:
Swarm intelligence is a branch of artificial intelligence that has been gaining considerable momentum in recent times, especially in the field of robotics. In this project we study the social behaviour that emerges from the interactions between a given number of autonomous robots applied to the cleaning of large surfaces. Once a scenario and a robot that fit the requirements of the project have been chosen, we run a series of simulations based on different search policies, which allow us to evaluate the behaviour of the robots for given initial conditions of robot distribution and zones to be cleaned. From the results obtained we are able to determine which configuration produces the best results.
Abstract:
Graph pebbling is a network model for studying whether or not a given supply of discrete pebbles can satisfy a given demand via pebbling moves. A pebbling move across an edge of a graph takes two pebbles from one endpoint and places one pebble at the other endpoint; the other pebble is lost in transit as a toll. It has been shown that deciding whether a supply can meet a demand on a graph is NP-complete. The pebbling number of a graph is the smallest t such that every supply of t pebbles can satisfy every demand of one pebble. Deciding if the pebbling number is at most k is Π₂^P-complete. In this paper we develop a tool, called the Weight Function Lemma, for computing upper bounds and sometimes exact values for pebbling numbers with the assistance of linear optimization. With this tool we are able to calculate the pebbling numbers of much larger graphs than was possible with previous algorithms, and much more quickly as well. We also obtain results for many families of graphs, in many cases by hand, with much simpler and remarkably shorter proofs than given in previously existing arguments (certificates typically of size at most the number of vertices times the maximum degree), especially for highly symmetric graphs. Here we apply the Weight Function Lemma to several specific graphs, including the Petersen, Lemke, 4th weak Bruhat, Lemke squared, and two random graphs, as well as to a number of infinite families of graphs, such as trees, cycles, graph powers of cycles, cubes, and some generalized Petersen and Coxeter graphs. This partly answers a question of Pachter et al. by computing the pebbling exponent of cycles to within an asymptotically small range. It is conceivable that this method yields an approximation algorithm for graph pebbling.
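To make the move definition concrete, here is a minimal brute-force sketch (illustrative only; it is not the Weight Function Lemma or the linear-optimization approach described in the abstract) that checks whether a given pebble distribution on a small graph can place a pebble on a target vertex.

```python
# Illustrative sketch: brute-force reachability under pebbling moves.
# A pebbling move removes 2 pebbles from a vertex and adds 1 to a neighbour
# (the second pebble is the "toll"). Feasible only for small graphs/supplies.

def reachable(graph, dist, target, _seen=None):
    """Return True if some sequence of pebbling moves places a pebble on target."""
    if dist.get(target, 0) >= 1:
        return True
    if _seen is None:
        _seen = set()
    key = tuple(sorted(dist.items()))
    if key in _seen:          # already explored this distribution
        return False
    _seen.add(key)
    for v, p in list(dist.items()):
        if p >= 2:
            for u in graph[v]:
                new = dict(dist)
                new[v] = p - 2
                new[u] = new.get(u, 0) + 1
                if reachable(graph, new, target, _seen):
                    return True
    return False

# Example: on the path a-b-c, 4 pebbles on a reach c, but 3 do not.
path = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(reachable(path, {"a": 4}, "c"))  # True
print(reachable(path, {"a": 3}, "c"))  # False
```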
Abstract:
This article provides a theoretical and empirical analysis of a firm's optimal R&D strategy choice. In this paper a firm's R&D strategy is assumed to be endogenous and allowed to depend on both internal firm characteristics and external factors. Firms choose between two strategies: either they engage in R&D, or they abstain from own R&D and imitate the outcomes of innovators. In the theoretical model this yields three types of equilibria, in which either all firms innovate, some firms innovate and others imitate, or no firm innovates. Firms' equilibrium strategies crucially depend on external factors. We find that the efficiency of intellectual property rights protection positively affects firms' incentives to engage in R&D, while competitive pressure has a negative effect. In addition, smaller firms are found to be more likely to become imitators when the product is homogeneous and the level of spillovers is high. These results are supported by empirical evidence for German firms from manufacturing and services sectors. Regarding social welfare, our results indicate that strengthening intellectual property protection can have an ambiguous effect. In markets characterized by a high rate of innovation, a reduction of intellectual property rights protection can discourage innovative performance substantially. However, a reduction of patent protection can also increase social welfare because it may induce imitation. This indicates that policy issues such as the optimal length and breadth of patent protection cannot be resolved without taking into account specific market and firm characteristics. Journal of Economic Literature Classification Numbers: C35, D43, L13, L22, O31. Keywords: Innovation; imitation; spillovers; product differentiation; market competition; intellectual property rights protection.
Abstract:
Let A be a simple, separable C*-algebra of stable rank one. We prove that the Cuntz semigroup of C(T, A) is determined by its Murray-von Neumann semigroup of projections and a certain semigroup of lower semicontinuous functions (with values in the Cuntz semigroup of A). This result has two consequences. First, specializing to the case that A is simple, finite, separable and Z-stable, this yields a description of the Cuntz semigroup of C(T, A) in terms of the Elliott invariant of A. Second, suitably interpreted, it shows that the Elliott functor and the functor defined by the Cuntz semigroup of the tensor product with the algebra of continuous functions on the circle are naturally equivalent.
Abstract:
Tropical cyclones are affected by a large number of climatic factors, which translates into complex patterns of occurrence. The variability of annual metrics of tropical-cyclone activity has been intensively studied, in particular since the sudden activation of the North Atlantic in the mid-1990s. We first provide a brief overview of previous work by diverse authors on these annual metrics for the North Atlantic basin, where the natural variability of the phenomenon, the existence of trends, the drawbacks of the records, and the influence of global warming have been the subject of interesting debates. Next, we present an alternative approach that does not focus on seasonal features but on the characteristics of single events [Corral et al., Nature Phys. 6, 693 (2010)]. It is argued that the individual-storm power dissipation index (PDI) constitutes a natural way to describe each event and, further, that the statistics of the PDI yield a robust law for the occurrence of tropical cyclones in the form of a power law. In this context, methods for fitting these distributions are discussed. As an important extension of this work we introduce a distribution function that models the whole range of the PDI density (excluding incompleteness effects at the smallest values): the gamma distribution, consisting of a power law with an exponential decay at the tail. The characteristic scale of this decay, represented by the cutoff parameter, provides very valuable information on the finite size of the basin, via the largest PDI values that the basin can sustain. We use the gamma fit to evaluate the influence of sea surface temperature (SST) on the occurrence of extreme PDI values, for which we find an increase of around 50% in the values of these basin-wide events for an average SST difference of 0.49 °C. Similar findings are observed for the effects of the positive phase of the Atlantic Multidecadal Oscillation and of the number of hurricanes in a season on the PDI distribution. In the case of the El Niño–Southern Oscillation (ENSO), positive and negative values of the multivariate ENSO index do not have a significant effect on the PDI distribution; however, when only extreme values of the index are used, it is found that the presence of El Niño decreases the PDI of the most extreme hurricanes.
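As a hedged illustration of the fitting step described above (the synthetic values and parameter choices below are assumptions, not the hurricane records analysed in the paper), a gamma distribution, i.e. a power law with an exponential cutoff, can be fitted with standard tools:

```python
# Illustrative sketch: fit a gamma distribution f(x) ~ x**(a-1) * exp(-x/scale)
# to PDI-like values; `scale` plays the role of the cutoff parameter.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic "PDI" sample: shape a < 1 gives a decaying power-law body.
sample = stats.gamma.rvs(a=0.6, scale=3e10, size=2000, random_state=rng)

a_hat, loc_hat, scale_hat = stats.gamma.fit(sample, floc=0)  # fix location at 0
print(f"power-law exponent ~ {1 - a_hat:.2f}, cutoff ~ {scale_hat:.2e}")
```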
Abstract:
In most psychological tests and questionnaires, a test score is obtained by taking the sum of the item scores. In virtually all cases where the test or questionnaire contains multidimensional forced-choice items, this traditional scoring method is also applied. We argue that the summation of scores obtained with multidimensional forced-choice items produces uninterpretable test scores. Therefore, we propose three alternative scoring methods: a weak and a strict rank preserving scoring method, which both allow an ordinal interpretation of test scores; and a ratio preserving scoring method, which allows a proportional interpretation of test scores. Each proposed scoring method yields an index for each respondent indicating the degree to which the response pattern is inconsistent. Analysis of real data showed that, with respect to rank preservation, the weak and strict rank preserving methods resulted in lower inconsistency indices than the traditional scoring method; with respect to ratio preservation, the ratio preserving scoring method resulted in lower inconsistency indices than the traditional scoring method.
Abstract:
The main instrument used in psychological measurement is the self-report questionnaire. One of its major drawbacks, however, is its susceptibility to response biases. A known strategy to control these biases has been the use of so-called ipsative items. Ipsative items are items that require the respondent to make between-scale comparisons within each item. The selected option determines to which scale the weight of the answer is attributed. Consequently, in questionnaires consisting only of ipsative items every respondent is allotted an equal amount, i.e. the total score, that each can distribute differently over the scales. Therefore this type of response format yields data that can be considered compositional from its inception. Methodologically oriented psychologists have heavily criticized this type of item format, since the resulting data are also marked by the associated unfavourable statistical properties. Nevertheless, clinicians have kept using these questionnaires to their satisfaction. This investigation therefore aims to evaluate both positions and addresses the similarities and differences between the two data collection methods. The ultimate objective is to formulate a guideline on when to use which type of item format. The comparison is based on data obtained with both an ipsative and a normative version of three psychological questionnaires, which were administered to 502 first-year students in psychology according to a balanced within-subjects design. Previous research only compared the direct ipsative scale scores with the derived ipsative scale scores. The use of compositional data analysis techniques also enables one to compare derived normative score ratios with direct normative score ratios. The addition of the second comparison not only offers the advantage of a better-balanced research strategy; in principle it also allows for parametric testing in the evaluation.
Abstract:
The synthesis of three bidentate, hemilabile phosphine ligands recently developed in the research group (TPOdiphos, DPPrPOdiphos and SODPdiphos) has been scaled up and optimized. The ligand substitution reaction on Mo(CO)6 and W(CO)6 has been studied, and the corresponding complexes fac-[MTPOdiphos(CO)3], fac-[MDPPrPOdiphos(CO)3], and fac-[MSODPdiphos(CO)3] (M = Mo, W) have been isolated in good yields and characterized by NMR, IR and HRMS. In the case of fac-[MoTPOdiphos(CO)3] the XRD crystal structure was solved. The complexes were found to be octahedral, neutral molecules, with the metal in the zero oxidation state and the ligand adopting a facial P,P,O-coordination. The hard ligand atom (oxygen) is expected to provide special features in future applications of these novel ligands.
Abstract:
A time-delayed second-order approximation for the front speed in reaction-dispersion systems was obtained by Fort and Méndez [Phys. Rev. Lett. 82, 867 (1999)]. Here we show that taking proper care of the effect of the time delay on the reactive process yields a different evolution equation and, therefore, an alternate equation for the front speed. We apply the new equation to the Neolithic transition. For this application the new equation yields speeds about 10% slower than the previous one.
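For orientation, here is the classical Fisher front speed together with the second-order time-delayed approximation attributed to Fort and Méndez; the delayed form is quoted from memory as an assumption, and the corrected equation derived in this work is not reproduced here.

```latex
% Classical Fisher-KPP front speed for growth rate a and diffusivity D,
% and the second-order time-delayed approximation with delay (generation)
% time T -- the latter given here as a recollection, not from the paper:
\[
  v_{\mathrm{Fisher}} = 2\sqrt{aD},
  \qquad
  v \simeq \frac{2\sqrt{aD}}{1 + \tfrac{aT}{2}} .
\]
% The delay slows the front relative to v_Fisher; the abstract reports that a
% corrected treatment of the reactive term slows it further, by about 10%.
```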
Abstract:
A procedure based on quantum molecular similarity measures (QMSM) has been used to compare electron densities obtained from conventional ab initio and density functional methodologies at their respective optimized geometries. This method has been applied to a series of small molecules with experimentally known properties and molecular bonds of diverse degrees of ionicity and covalency. Results show that in most cases the electron densities obtained from density functional methodologies are of a quality similar to that of post-Hartree-Fock generalized densities. For molecules where the Hartree-Fock methodology yields erroneous results, the density functional methodology is shown to usually yield more accurate densities than those provided by second-order Møller-Plesset perturbation theory.
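For reference, the standard overlap-type quantum molecular similarity measure and the normalized Carbó index are given below as the usual textbook definitions; the paper may employ other positive-definite operators.

```latex
% Overlap-type QMSM between densities rho_A and rho_B, and the Carbo index:
\[
  Z_{AB} = \int \rho_A(\mathbf{r})\,\rho_B(\mathbf{r})\,d\mathbf{r},
  \qquad
  C_{AB} = \frac{Z_{AB}}{\sqrt{Z_{AA}\,Z_{BB}}}, \quad 0 < C_{AB} \le 1 .
\]
```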
Abstract:
To obtain a state-of-the-art benchmark potential energy surface (PES) for the archetypal oxidative addition of the methane C-H bond to the palladium atom, we have explored this PES using a hierarchical series of ab initio methods (Hartree-Fock, second-order Møller-Plesset perturbation theory, fourth-order Møller-Plesset perturbation theory with single, double and quadruple excitations, coupled cluster theory with single and double excitations (CCSD), and with triple excitations treated perturbatively [CCSD(T)]) and hybrid density functional theory using the B3LYP functional, in combination with a hierarchical series of ten Gaussian-type basis sets, up to g polarization. Relativistic effects are taken into account either through a relativistic effective core potential for palladium or through a full four-component all-electron approach. Counterpoise corrected relative energies of stationary points are converged to within 0.1-0.2 kcal/mol as a function of the basis-set size. Our best estimate of kinetic and thermodynamic parameters is -8.1 (-8.3) kcal/mol for the formation of the reactant complex, 5.8 (3.1) kcal/mol for the activation energy relative to the separate reactants, and 0.8 (-1.2) kcal/mol for the reaction energy (zero-point vibrational energy-corrected values in parentheses). This agrees well with available experimental data. Our work highlights the importance of sufficient higher angular momentum polarization functions, f and g, for correctly describing metal-d-electron correlation and, thus, for obtaining reliable relative energies. We show that standard basis sets, such as LANL2DZ+1f for palladium, are not sufficiently polarized for this purpose and lead to erroneous CCSD(T) results. B3LYP is associated with smaller basis set superposition errors and shows faster convergence with basis-set size but yields relative energies (in particular, a reaction barrier) that are ca. 3.5 kcal/mol higher than the corresponding CCSD(T) values.
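The phrase "counterpoise corrected" refers to the standard Boys–Bernardi scheme; a sketch for the reactant complex is shown below, with the fragment notation chosen here purely for illustration.

```latex
% Boys-Bernardi counterpoise correction: each fragment is recomputed in the
% full dimer basis (ghost functions on the partner), and the interaction
% energy of the Pd...CH4 reactant complex is
\[
  \Delta E^{\mathrm{CP}}_{\mathrm{int}}
  = E_{\mathrm{Pd\cdots CH_4}}^{\,\mathrm{dimer\ basis}}
  - E_{\mathrm{Pd}}^{\,\mathrm{dimer\ basis}}
  - E_{\mathrm{CH_4}}^{\,\mathrm{dimer\ basis}} ,
\]
% which removes the basis set superposition error present when each fragment
% is computed only in its own basis.
```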
Abstract:
Intuitively, music has both predictable and unpredictable components. In this work we assess this qualitative statement in a quantitative way using common time series models fitted to state-of-the-art music descriptors. These descriptors cover different musical facets and are extracted from a large collection of real audio recordings comprising a variety of musical genres. Our findings show that music descriptor time series exhibit a certain predictability not only for short time intervals, but also for mid-term and relatively long intervals. This fact is observed independently of the descriptor, musical facet and time series model we consider. Moreover, we show that our findings are not only of theoretical relevance but can also have practical impact. To this end we demonstrate that music predictability at relatively long time intervals can be exploited in a real-world application, namely the automatic identification of cover songs (i.e. different renditions or versions of the same musical piece). Importantly, this prediction strategy yields a parameter-free approach for cover song identification that is substantially faster, allows for reduced computational storage and still maintains highly competitive accuracies when compared to state-of-the-art systems.
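As a hedged illustration of fitting a common time series model to a descriptor sequence (the descriptor, data and model order below are assumptions, not those of the paper):

```python
# Illustrative sketch: fit an autoregressive model to a hypothetical music
# descriptor time series and compare its forecast error to a naive baseline.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(1)
# Hypothetical per-frame descriptor (e.g. a loudness-like curve) with structure.
t = np.arange(2000)
series = np.sin(2 * np.pi * t / 40) + 0.3 * rng.standard_normal(t.size)

train, test = series[:1500], series[1500:]
model = AutoReg(train, lags=20).fit()
pred = model.predict(start=len(train), end=len(series) - 1)  # out-of-sample forecast

mse = np.mean((pred - test) ** 2)
baseline = np.var(test)  # error of always predicting the mean
print(f"AR(20) MSE: {mse:.3f}  vs. mean-prediction baseline: {baseline:.3f}")
```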
Abstract:
The aim of this paper is to examine the pros and cons of book and fair value accounting from the perspective of the theory of banking. We consider the implications of the two accounting methods in an overlapping generations environment. As observed by Allen and Gale (1997), in an overlapping generations model banks have a role as intergenerational connectors, as they allow for intertemporal smoothing. Our main result is that when dividends depend on profits, book value ex ante dominates fair value, as it provides better intertemporal smoothing. This is in contrast with the standard view, which states that fair value yields a better allocation as it reflects the real opportunity cost of assets. Banking regulation plays an important role by providing the right incentives for banks to smooth intertemporal consumption, whereas market discipline improves intratemporal efficiency.
Abstract:
We develop a model of an industry with many heterogeneous firms that face both financing constraints and irreversibility constraints. The financing constraint implies that firms cannot borrow unless the debt is secured by collateral; the irreversibility constraint implies that they can only sell their fixed capital by selling their business. We use this model to examine the cyclical behavior of aggregate fixed investment, variable capital investment, and output in the presence of persistent idiosyncratic and aggregate shocks. Our model yields three main results. First, the effect of the irreversibility constraint on fixed capital investment is reinforced by the financing constraint. Second, the effect of the financing constraint on variable capital investment is reinforced by the irreversibility constraint. Finally, the interaction between the two constraints is key to explaining why input inventories and material deliveries of US manufacturing firms are so volatile and procyclical, and also why they are highly asymmetrical over the business cycle.