875 results for Value-based selling
Abstract:
Agile development methodologies have risen in popularity in industry in recent years due to the speed and reliability of the processes they propose. The DevOps philosophy, and specifically the methodologies derived from it such as Continuous Delivery and Continuous Deployment, push for fully automated management of the application lifecycle, from the source code to the software running in production environments. Automation here is a means to produce repeatable, reliable and fast processes. However, not all parts of the Continuous methodologies are completely automated. In particular, the management of runtime parameter configuration is a problem whose impact on the deployment process has grown with the elasticity and scalability provided by cloud computing technologies. Most deployment tools nowadays can automate the deployment of runtime parameter configuration, but they offer no support for setting those parameters or validating the files they deploy, mainly because the wide range of configuration options, and the fact that the value of many parameters is chosen according to user preference, seem to imply that any solution to the problem must be tailored to a specific application rather than being general. To address this problem, I propose a configuration model that can be inferred from existing configuration instances and that can reflect user preferences in order to ease the configuration process. The configuration model can be used as the basis of an interactive configuration process capable of guiding a human operator through the configuration of an application for deployment in a specific environment, or of automatically detecting configuration changes and producing a valid runtime parameter configuration that takes those changes into account. Additionally, the configuration model should be managed like any other software artefact and incorporated into current management practices. I therefore also propose a service management model that includes runtime configuration information and that is able to describe and manage current architectural practices such as microservices architectures.
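The idea of inferring a configuration model from existing instances can be illustrated with a deliberately simple sketch. This is not the thesis implementation; all names and the model structure are hypothetical assumptions, recording only each parameter's observed types and values and using them to validate a new configuration.

```python
# Illustrative sketch only (not the thesis method): infer a crude
# configuration model from existing configuration instances, then
# validate a new configuration against it.

def infer_model(instances):
    """Collect, per parameter, the set of observed value types and values."""
    model = {}
    for cfg in instances:
        for key, value in cfg.items():
            entry = model.setdefault(key, {"types": set(), "values": set()})
            entry["types"].add(type(value).__name__)
            entry["values"].add(value)
    return model

def validate(model, cfg):
    """Flag unknown parameters and type mismatches against the model."""
    issues = []
    for key, value in cfg.items():
        if key not in model:
            issues.append(f"unknown parameter: {key}")
        elif type(value).__name__ not in model[key]["types"]:
            issues.append(f"unexpected type for {key}: {type(value).__name__}")
    return issues

instances = [
    {"port": 8080, "workers": 4, "debug": False},
    {"port": 9090, "workers": 8, "debug": True},
]
model = infer_model(instances)
print(validate(model, {"port": "eighty", "wrokers": 2}))
```

A real model would also have to encode user preferences and cross-parameter constraints; this sketch only shows how per-parameter structure can be learned from examples rather than written by hand.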
Abstract:
This thesis is developed within the framework of satellite communications, in the innovative field of small satellites, also known as nanosatellites (<10 kg) or CubeSats, so called because of their cubic shape. These nanosatellites are characterized by their low cost, as they use commercial off-the-shelf (COTS) components, and by their small size and mass, such as 1U CubeSats (10 cm x 10 cm x 10 cm) with a mass of approximately 1 kg. This thesis builds on an initiative proposed by its author to put into orbit the first Peruvian satellite, Chasqui I, which was successfully launched into orbit from the International Space Station in 2014. The experience of this research work led me to propose a constellation of small satellites, named Waposat, to provide a global water quality sensor monitoring service, the scenario used in this thesis. In this scenario, and given the limited capabilities of nanosatellites in both power and data rate, I propose to investigate a new communications architecture that optimally addresses the problems posed by nanosatellites in LEO orbit, whose communications are disruptive in nature, with emphasis on the link and application layers. This thesis presents and evaluates a new communications architecture to provide service to terrestrial sensor networks using a space Delay/Disruption Tolerant Networking (DTN) based solution. In addition, I propose a new multiple access protocol based on an extension of unslotted ALOHA that takes into account the priority of gateway traffic, which we call ALOHA multiple access with gateway priority (ALOHAGP), with an adaptive contention mechanism. It uses satellite feedback to implement congestion control and to dynamically adapt the effective channel throughput in an optimal way. We assume a finite sensor population model and a saturated traffic condition in which every sensor always has frames to transmit. Network performance was evaluated in terms of effective throughput, delay and system fairness. In addition, a DTN convergence layer (ALOHAGP-CL) has been defined as a subset of the standard TCP-CL (Transmission Control Protocol Convergence Layer). This thesis shows that ALOHAGP/CL adequately supports the proposed DTN scenario, especially when reactive fragmentation is used. Finally, this thesis investigates the optimal transfer of DTN messages (bundles) using proactive fragmentation strategies to serve a ground sensor network over a nanosatellite communications link that uses the multiple access mechanism with downlink traffic priority (ALOHAGP). The effective throughput has been optimized by adapting the protocol parameters as a function of the current number of active sensors, as reported by the satellite. Also, there is currently no method for advertising or negotiating the maximum bundle size that a DTN bundle agent can accept for storage and delivery in satellite communications, so bundles that are too large are dropped and bundles that are too small are inefficient. We have characterized this kind of scenario, obtaining a probability distribution of frame arrivals at the nanosatellite as well as a probability distribution of the nanosatellite's visibility time, which together yield an optimal proactive fragmentation of DTN bundles. We have found that the effective throughput (goodput) of proactive fragmentation reaches a value slightly lower than that of the reactive fragmentation approach. This contribution makes it possible to use proactive fragmentation optimally, with all its advantages, such as supporting the DTN security model and simplicity of implementation on equipment with severe CPU and memory limitations. The implementation of these contributions was initially contemplated as part of the payload of the QBito nanosatellite, which belongs to the constellation of 50 nanosatellites envisaged under the QB50 project.
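As background for the contention adaptation described above, the sketch below computes the throughput of classical pure (unslotted) ALOHA, S = G·e^(-2G), the baseline that ALOHAGP extends. It is illustrative only, not the thesis protocol: it merely shows why a protocol benefits from steering the offered load G toward the optimum of 0.5, where S peaks at 1/(2e) ≈ 0.184.

```python
import math

# Classical pure (unslotted) ALOHA throughput: S = G * exp(-2G), where G is
# the offered load in frames per frame time. ALOHAGP adapts its contention
# parameters from satellite feedback; this toy only locates the optimum the
# adaptation would aim for.

def pure_aloha_throughput(G):
    return G * math.exp(-2 * G)

# Scan offered loads; the maximum sits at G = 0.5 with S = 1/(2e) ~ 0.184.
best_G = max((g / 1000 for g in range(1, 2001)), key=pure_aloha_throughput)
print(round(best_G, 2), round(pure_aloha_throughput(best_G), 3))
```

In a feedback-driven scheme, the satellite's estimate of the number of active sensors would be used to set each sensor's transmission probability so that the aggregate load stays near this optimum.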
Abstract:
A method based on the iterative application of high-order singular value decomposition is derived for the reconstruction of missing data in multidimensional databases. The method is inspired by a seminal gappy reconstruction method for two-dimensional databases invented by Everson and Sirovich (1995), improved by Beckers and Rixen (2003) and, independently, by Venturi and Karniadakis (2004). In addition, the method is adapted to treat both the noise characteristic of experimental databases and structured databases whose information does not fill a perfect hyperrectangle. The method is calibrated and illustrated using a three-dimensional toy database obtained by discretizing a transcendental function. Its performance is then studied in detail, together with that of its variants, on three three-dimensional aerodynamic databases containing the pressure distribution over a wing: one generated by a semi-analytical method, with the intention of studying different spatial discretizations, and two resulting from computational fluid dynamics (CFD) models. The method is finally applied to an experimental database of more than three dimensions, consisting of force measurements for a Prandtl box-wing configuration obtained in a wind-tunnel test campaign that explored a wide space of geometric parameters, producing a database in which the information is sparse.
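The iterative gappy reconstruction idea can be sketched in its simplest two-dimensional, rank-1 form, which the HOSVD-based method generalizes to higher dimensions and ranks. This is an illustrative toy under stated assumptions (exactly rank-1 data, a single gap), not the thesis algorithm: gaps get an initial guess, a low-rank approximation is computed, and the gap entries are replaced by the approximation until convergence.

```python
# Toy 2D version of iterative gappy reconstruction (rank-1 only).
# Not the thesis method; purely illustrative.

def rank1_approx(M, iters=100):
    """Best rank-1 approximation u*v^T via alternating least squares."""
    n, m = len(M), len(M[0])
    v = [1.0] * m
    for _ in range(iters):
        u = [sum(M[i][j] * v[j] for j in range(m)) / sum(x * x for x in v)
             for i in range(n)]
        v = [sum(M[i][j] * u[i] for i in range(n)) / sum(x * x for x in u)
             for j in range(m)]
    return [[u[i] * v[j] for j in range(m)] for i in range(n)]

def gappy_reconstruct(M, gaps, sweeps=200):
    """Iteratively refill the gap positions from the low-rank approximation."""
    M = [row[:] for row in M]
    for i, j in gaps:  # initial guess: mean of the known entries in the row
        known = [M[i][k] for k in range(len(M[0])) if (i, k) not in gaps]
        M[i][j] = sum(known) / len(known)
    for _ in range(sweeps):
        A = rank1_approx(M)
        for i, j in gaps:
            M[i][j] = A[i][j]
    return M

# Rank-1 toy database with one entry lost: exact recovery is expected.
M = [[r * c for c in (4, 5, 6)] for r in (1, 2, 3)]
M[1][2] = 0.0  # pretend this value (truly 12) was lost
R = gappy_reconstruct(M, gaps=[(1, 2)])
print(round(R[1][2], 3))
```

The multidimensional method replaces the rank-1 SVD step with a truncated HOSVD of the tensor, but the fill-approximate-refill loop is the same.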
Abstract:
A method is presented to filter errors out of multidimensional databases. The method does not require any a priori information about the nature of the errors. In particular, the errors need not be small, random, or of zero mean; they are only required to be uncorrelated with the clean information contained in the database. The method is based on an improved extension of the seminal iterative gappy reconstruction method (able to reconstruct missing information at known positions in a database) due to Everson and Sirovich (1995). The improved gappy reconstruction method is turned into a two-step error filtering method: it first (a) identifies the error locations in the database and then (b) reconstructs the information at those locations by treating the associated data as gappy data. The resulting method filters out O(1) errors efficiently, whether they are random or systematic, and whether they are concentrated in one region of the database or spread across it. The performance of the method is first illustrated using a two-dimensional toy database resulting from discretizing a transcendental function, and then tested on two CFD-calculated, three-dimensional aerodynamic databases containing the pressure coefficient on the surface of a wing for varying values of the angle of attack. A more general performance analysis of the method is also presented, with the intention of quantifying, first, how much randomness the method admits while still performing correctly and, second, the size of the errors it can detect. Lastly, some improvements of the method are proposed and verified.
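Step (a) of the two-step filter above can be sketched as residual thresholding: entries whose deviation from a low-rank reference is O(1) are flagged as error locations, and step (b) would then treat them as gaps. This is illustrative only; for simplicity the sketch compares against a known clean reference, whereas the actual method must build its reference from the corrupted data itself.

```python
# Sketch of step (a): flag database entries whose residual against a
# low-rank (here: known clean, rank-1) reference exceeds a tolerance.
# Illustrative simplification, not the thesis method.

def flag_errors(M, approx, tol=1.0):
    """Return positions where |M - approx| exceeds tol."""
    return [(i, j)
            for i in range(len(M))
            for j in range(len(M[0]))
            if abs(M[i][j] - approx[i][j]) > tol]

clean = [[r * c for c in (4, 5, 6)] for r in (1, 2, 3)]
noisy = [row[:] for row in clean]
noisy[0][1] += 7.0  # inject one O(1) systematic error
print(flag_errors(noisy, clean))  # -> [(0, 1)]
```

Once flagged, the positions would be handed to a gappy reconstruction routine, completing step (b).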
Abstract:
Nucleic acid sequence-based amplification (NASBA) has proved to be an ultrasensitive method for HIV-1 diagnosis in plasma, even in the primary HIV infection stage. This technique was combined with fluorescence correlation spectroscopy (FCS), which enables online detection of the HIV-1 RNA molecules amplified by NASBA. A fluorescently labeled DNA probe at nanomolar concentration was introduced into the NASBA reaction mixture, where it hybridized to a distinct sequence of the amplified RNA molecule. The specific hybridization and extension of this probe during the amplification reaction, resulting in an increase of its diffusion time, was monitored online by FCS. As a consequence, once a critical concentration of 0.1–1 nM (the threshold for unaided FCS detection) had been reached, the number of amplified RNA molecules in the further course of the reaction could be determined. Evaluation of the hybridization/extension kinetics allowed an estimation of the initial HIV-1 RNA concentration present at the beginning of amplification. The initial HIV-1 RNA number enables discrimination between positive and false-positive samples (caused, for instance, by carryover contamination); this ability to discriminate is an essential requirement for all diagnostic methods using amplification systems (PCR as well as NASBA). Quantitation of HIV-1 RNA in plasma by combining NASBA with FCS may also be useful in assessing the efficacy of anti-HIV agents, especially in the early infection stage, when standard ELISA antibody tests often give negative results.
Abstract:
In human beings of both sexes, dehydroepiandrosterone sulfate (DHEAS) circulating in blood is mostly an adrenally secreted steroid whose serum concentration (in the micromolar range and 30–50% higher in men than in women) decreases with age, toward ≈20–10% of its value in young adults during the 8th and 9th decades. The mechanism of action of DHEA and DHEAS is poorly known and may include partial transformation into sex steroids, increase of bioavailable insulin-like growth factor I, and effects on neurotransmitter receptors. Whether there is a cause-to-effect relationship between the decreasing levels of DHEAS with age and physiological and pathological manifestations of aging is still undecided, but this is of obvious theoretical and practical interest in view of the easy restoration by DHEA administration. Here we report on 622 subjects over 65 years of age, studied for the 4 years since DHEAS baseline values had been obtained, in the frame of the PAQUID program, analyzing the functional, psychological, and mental status of a community-based population in the south-west of France. We confirm the continuing decrease of DHEAS serum concentration with age, more in men than in women, even if men retain higher levels. Significantly lower values of baseline DHEAS were recorded in women in cases of functional limitation (Instrumental Activities of Daily Living), confinement, dyspnea, depressive symptomatology, poor subjective perception of health and life satisfaction, and usage of various medications. In men, there was a trend for the same correlations, even though not statistically significant in most categories. No differences in DHEAS levels were found in cases of incident dementia in the following 4 years. In men (but not in women), lower DHEAS was significantly associated with increased short-term mortality at 2 and 4 years after baseline measurement. 
These results, statistically established by taking into account corrections for age, sex, and health indicators, suggest the need for further careful trials of the administration of replacement doses of DHEA in aging humans. Indeed, the first noted results of such “treatment” are consistent with correlations observed here between functional and psychological status and endogenous steroid serum concentrations.
Abstract:
Carotenoid pigments in plants fulfill indispensable functions in photosynthesis. Carotenoids that accumulate as secondary metabolites in chromoplasts provide distinct coloration to flowers and fruits. In this work we investigated the genetic mechanisms that regulate accumulation of carotenoids as secondary metabolites during ripening of tomato fruits. We analyzed two mutations that affect fruit pigmentation in tomato (Lycopersicon esculentum): Beta (B), a single dominant gene that increases β-carotene in the fruit, and old-gold (og), a recessive mutation that abolishes β-carotene and increases lycopene. Using a map-based cloning approach we cloned the genes B and og. Molecular analysis revealed that B encodes a novel type of lycopene β-cyclase, an enzyme that converts lycopene to β-carotene. The amino acid sequence of B is similar to capsanthin-capsorubin synthase, an enzyme that produces red xanthophylls in fruits of pepper (Capsicum annuum). Our results prove that β-carotene is synthesized de novo during tomato fruit development by the B lycopene cyclase. In wild-type tomatoes B is expressed at low levels during the breaker stage of ripening, whereas in the Beta mutant its transcription is dramatically increased. Null mutations in the gene B are responsible for the phenotype in og, indicating that og is an allele of B. These results confirm that developmentally regulated transcription is the major mechanism that governs lycopene accumulation in ripening fruits. The cloned B genes can be used in various genetic manipulations toward altering pigmentation and enhancing nutritional value of plant foods.
Abstract:
Objective: To determine how patients with lung cancer value the trade-off between the survival benefit of chemotherapy and its toxicities.
Abstract:
The reactivation of telomerase activity in most cancer cells supports the concept that telomerase is a relevant target in oncology, and telomerase inhibitors have been proposed as new potential anticancer agents. The telomeric G-rich single-stranded DNA can adopt in vitro an intramolecular quadruplex structure, which has been shown to inhibit telomerase activity. We used a fluorescence assay to identify molecules that stabilize G-quadruplexes. Intramolecular folding of an oligonucleotide with four repeats of the human telomeric sequence into a G-quadruplex structure led to fluorescence excitation energy transfer between a donor (fluorescein) and an acceptor (tetramethylrhodamine) covalently attached to the 5′ and 3′ ends of the oligonucleotide, respectively. The melting of the G-quadruplex was monitored in the presence of putative G-quadruplex-binding molecules by measuring the fluorescence emission of the donor. A series of compounds (pentacyclic crescent-shaped dibenzophenanthroline derivatives) was shown to increase the melting temperature of the G-quadruplex by 2–20°C at 1 μM dye concentration. This increase in Tm value was well correlated with an increase in the efficiency of telomerase inhibition in vitro. The best telomerase inhibitor showed an IC50 value of 28 nM in a standard telomerase repeat amplification protocol assay. Fluorescence energy transfer can thus be used to reveal the formation of four-stranded DNA structures, and its stabilization by quadruplex-binding agents, in an effort to discover new potent telomerase inhibitors.
Abstract:
As the number of protein folds is quite limited, a mode of analysis that will be increasingly common in the future, especially with the advent of structural genomics, is to survey and re-survey the finite parts list of folds from an expanding number of perspectives. We have developed a new resource, called PartsList, that lets one dynamically perform these comparative fold surveys. It is available on the web at http://bioinfo.mbb.yale.edu/partslist and http://www.partslist.org. The system is based on the existing fold classifications and functions as a form of companion annotation for them, providing ‘global views’ of many already completed fold surveys. The central idea in the system is that of comparison through ranking; PartsList will rank the approximately 420 folds based on more than 180 attributes. These include: (i) occurrence in a number of completely sequenced genomes (e.g. it will show the most common folds in the worm versus yeast); (ii) occurrence in the structure databank (e.g. most common folds in the PDB); (iii) both absolute and relative gene expression information (e.g. most changing folds in expression over the cell cycle); (iv) protein–protein interactions, based on experimental data in yeast and comprehensive PDB surveys (e.g. most interacting fold); (v) sensitivity to inserted transposons; (vi) the number of functions associated with the fold (e.g. most multi-functional folds); (vii) amino acid composition (e.g. most Cys-rich folds); (viii) protein motions (e.g. most mobile folds); and (ix) the level of similarity based on a comprehensive set of structural alignments (e.g. most structurally variable folds). The integration of whole-genome expression and protein–protein interaction data with structural information is a particularly novel feature of our system. 
We provide three ways of visualizing the rankings: a profiler emphasizing the progression of high and low ranks across many pre-selected attributes, a dynamic comparer for custom comparisons and a numerical rankings correlator. These allow one to directly compare very different attributes of a fold (e.g. expression level, genome occurrence and maximum motion) in the uniform numerical format of ranks. This uniform framework, in turn, highlights the way that the frequency of many of the attributes falls off with approximate power-law behavior (i.e. according to V^-b, for attribute value V and constant exponent b), with a few folds having large values and most having small values.
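The power-law falloff mentioned above, frequency proportional to V^-b, can be illustrated with a short sketch that estimates the exponent b by least squares in log-log space. The data here are synthetic (an exact V^-2 falloff), chosen only to show the fitting step; it is not PartsList data.

```python
import math

# Estimate the exponent b in "frequency ~ V^(-b)" from (value, frequency)
# pairs: in log-log space the relation is linear with slope -b.

def fit_exponent(values, freqs):
    xs = [math.log(v) for v in values]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope  # b is the negated log-log slope

values = [1, 2, 4, 8, 16]
freqs = [v ** -2.0 for v in values]  # synthetic exact V^-2 falloff
print(round(fit_exponent(values, freqs), 6))
```

On real attribute distributions the points scatter around the line, and the fitted slope gives the approximate exponent of the falloff.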
Abstract:
Leukotriene A4 (LTA4) hydrolase [(7E,9E,11Z,14Z)-(5S,6S)-5,6-epoxyicosa-7,9,11,14-tetraenoate hydrolase; EC 3.3.2.6] is a bifunctional zinc metalloenzyme that catalyzes the final step in the biosynthesis of the potent chemotactic agent leukotriene B4 (LTB4). LTA4 hydrolase/aminopeptidase is suicide inactivated during catalysis via an apparently mechanism-based irreversible binding of LTA4 to the protein in a 1:1 stoichiometry. Previously, we have identified a henicosapeptide, encompassing residues Leu-365 to Lys-385 in human LTA4 hydrolase, which contains a site involved in the covalent binding of LTA4 to the native enzyme. To investigate the role of Tyr-378, a potential candidate for this binding site, we exchanged Tyr for Phe or Gln in two separate mutants. In addition, each of two adjacent and potentially reactive residues, Ser-379 and Ser-380, was exchanged for Ala. The mutated enzymes were expressed as (His)6-tagged fusion proteins in Escherichia coli, purified to apparent homogeneity, and characterized. Enzyme activity determinations and differential peptide mapping, before and after repeated exposure to LTA4, revealed that the wild-type enzyme and the [S379A] and [S380A] mutants of LTA4 hydrolase were equally susceptible to suicide inactivation, whereas the mutants in position 378 were no longer inactivated or covalently modified by LTA4. Furthermore, in [Y378F]LTA4 hydrolase, the value of kcat for epoxide hydrolysis was increased 2.5-fold over that of the wild-type enzyme. Thus, by a single-point mutation in LTA4 hydrolase, catalysis and covalent modification/inactivation have been dissociated, yielding an enzyme with increased turnover and resistance to mechanism-based inactivation.
Abstract:
Grazed pastures are the backbone of the Brazilian livestock industry and grasses of the genus Brachiaria (syn. Urochloa) are some of most used tropical forages in the country. Although the dependence on the forage resource is high, grazing management is often empirical and based on broad and non-specific guidelines. Mulato II brachiariagrass (Convert HD 364, Dow AgroSciences, São Paulo, Brazil) (B. brizantha × B. ruziziensis × B. decumbens), a new Brachiaria hybrid, was released as an option for a broad range of environmental conditions. There is no scientific information on specific management practices for Mulato II under continuous stocking in Brazil. The objectives of this research were to describe and explain variations in carbon assimilation, herbage accumulation (HA), plant-part accumulation, nutritive value, and grazing efficiency (GE) of Mulato II brachiariagrass as affected by canopy height and growth rate, the latter imposed by N fertilization rate, under continuous stocking. An experiment was carried out in Piracicaba, SP, Brazil, during two summer grazing seasons. The experimental design was a randomized complete block, with a 3 x 2 factorial arrangement, corresponding to three steady-state canopy heights (10, 25 and 40 cm) maintained by mimicked continuous stocking and two growth rates (imposed as 50 and 250 kg N ha-1 yr-1), with three replications. There were no height × N rate interactions for most of the responses studied. The HA of Mulato II increased linearly (8640 to 13400 kg DM ha-1 yr-1), the in vitro digestible organic matter (IVDOM) decreased linearly (652 to 586 g kg-1), and the GE decreased (65 to 44%) as canopy height increased. Thus, although GE and IVDOM were greatest at 10 cm height, HA was 36% less for the 10- than for the 40-cm height. The leaf carbon assimilation was greater for the shortest canopy (10 cm), but canopy assimilation was less than in taller canopies, likely a result of less leaf area index (LAI). 
The reductions in HA, plant-part accumulation, and LAI were not associated with other signs of stand deterioration. Leaf was the main plant part accumulated, at a rate that increased from 70 to 100 kg DM ha-1 d-1 as canopy height increased from 10 to 40 cm. Mulato II was less productive (7940 vs. 13380 kg ha-1 yr-1) and had lower IVDOM (581 vs. 652 g kg-1) at the lower N rate. The increase in N rate affected plant growth, increasing carbon assimilation, LAI, rates of plant-part accumulation (leaf, stem, and dead material), and HA. The results indicate that the greater rate of dead material accumulation at the higher N rate is a consequence of an overall increase in the accumulation rates of all plant parts. Taller canopies (25 or 40 cm) are advantageous for herbage accumulation of Mulato II, but nutritive value and GE were greater at 25 cm, suggesting that maintaining a ∼25-cm canopy height is optimal for continuously stocked Mulato II.
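The 36% figure quoted above can be reproduced directly from the herbage accumulation values reported in the abstract; a minimal check of that arithmetic:

```python
# Relative-difference check using the HA values reported in the abstract
# (kg DM ha-1 yr-1 at the 10- and 40-cm canopy heights).
ha_10cm, ha_40cm = 8640, 13400

# "HA was 36% less for the 10- than for the 40-cm height":
reduction = (ha_40cm - ha_10cm) / ha_40cm * 100
print(round(reduction))  # → 36
```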
Abstract:
The main objective of this paper is twofold: on the one hand, to analyse the impact that the announcement of a new hotel opening has on the performance of its chain by carrying out an event study; on the other hand, to compare the results of two different approaches to this method: a parametric specification based on autoregressive conditional heteroskedasticity models to estimate the market model, and a nonparametric approach employing Theil's nonparametric regression technique, which leads to the so-called complete nonparametric approach to event studies. The results of the empirical application are noteworthy: on average, the reaction to such news releases is highly positive, with both approaches reaching the same level of significance. However, a word of caution is warranted when one is interested not only in detecting whether the market reacts, but also in obtaining an exhaustive calculation of the abnormal returns in order to examine their determining factors.
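The core of any event study is the computation of abnormal returns against a fitted market model. The sketch below illustrates that generic step only; the simulated returns, window lengths, and OLS estimation are assumptions for illustration, not the paper's ARCH-based or Theil-based specifications.

```python
import numpy as np

# Hypothetical market-model event study: fit R_i = alpha + beta * R_m + eps
# over an estimation window, then measure abnormal returns in the event window.
rng = np.random.default_rng(0)

# Simulated daily returns (assumed parameters, for illustration only).
r_market = rng.normal(0.0005, 0.01, 250)                       # estimation window
r_stock = 0.0002 + 1.2 * r_market + rng.normal(0, 0.005, 250)  # chain's stock

# OLS fit of the market model (polyfit returns [slope, intercept]).
beta, alpha = np.polyfit(r_market, r_stock, 1)

# Event window: actual return minus model-predicted return.
r_market_event = rng.normal(0.0005, 0.01, 5)
r_stock_event = 0.01 + 0.0002 + 1.2 * r_market_event  # simulated positive reaction
abnormal = r_stock_event - (alpha + beta * r_market_event)
car = abnormal.sum()  # cumulative abnormal return over the event window
print(round(car, 3))
```

A positive cumulative abnormal return of this kind is what "the reaction to such news releases is highly positive" refers to in aggregate across announcements.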
Abstract:
Poster presented in the 11th Mediterranean Congress of Chemical Engineering, Barcelona, October 21-24, 2008.
Abstract:
A microwave-based thermal nebulizer (MWTN) has been employed for the first time as an on-line preconcentration device in inductively coupled plasma atomic emission spectrometry (ICP-AES). By appropriate selection of the experimental conditions, the MWTN can be operated either as a conventional thermal nebulizer or as an on-line analyte preconcentration and nebulization device. Thus, when operating at microwave power values above 100 W with highly concentrated alcohol solutions, the amount of energy per unit mass of liquid solvent (EMR) is high enough to completely evaporate the solvent inside the system and, as a consequence, the analyte is deposited (and thus preconcentrated) on the inner walls of the MWTN capillary. When the EMR is reduced to an appropriate value (e.g., by reducing the microwave power at a constant sample uptake rate), the retained analyte is swept along by the liquid-gas stream and an analyte-enriched aerosol is generated and then introduced into the plasma cell. Emission signals obtained with the MWTN operating in preconcentration-nebulization mode improved with increasing preconcentration time and sample uptake rate, as well as with decreasing nozzle inner diameter. When running with pure ethanol solution under its optimum experimental conditions, the MWTN in preconcentration-nebulization mode afforded limits of detection up to one order of magnitude lower than those obtained when operating the MWTN exclusively as a nebulizer. To validate the method, the multi-element analysis (i.e., Al, Ca, Cd, Cr, Cu, Fe, K, Mg, Mn, Na, Pb and Zn) of different commercial spirit samples by ICP-AES was performed. Analyte recoveries for all the elements studied ranged between 93% and 107%, and the dynamic linear range covered up to 4 orders of magnitude (i.e., from 0.1 to 1000 μg L−1). In these analyses, both MWTN operating modes afforded similar results. Nevertheless, the preconcentration-nebulization mode permits the determination of a larger number of analytes thanks to its greater detection capability.
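Validation by spike recovery, as reported above, reduces to a simple calculation. This is a generic sketch of that calculation; the concentration values used are hypothetical, since the abstract only reports that recoveries fell between 93% and 107%.

```python
# Generic spike-recovery calculation (all concentrations hypothetical;
# units assumed to be ug/L as in the abstract's working range).
def recovery_pct(measured_spiked, measured_unspiked, spike_added):
    """Analyte recovery as a percentage of the spiked amount."""
    return (measured_spiked - measured_unspiked) / spike_added * 100

# Example: sample contains 12 ug/L Cu, spiked with 50 ug/L, 60.5 ug/L measured.
r = recovery_pct(60.5, 12.0, 50.0)
print(round(r, 1))  # → 97.0 (within the 93-107% range reported)
```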