Abstract:
All crop models, whether site-specific or global-gridded and regardless of crop, simulate daily crop transpiration and soil evaporation over the crop life cycle, yielding seasonal crop water use. Modelers use several methods for predicting daily potential evapotranspiration (ET), including FAO-56, Penman-Monteith, Priestley-Taylor, Hargreaves, full energy balance, and transpiration water efficiency. They use extinction equations to partition energy between soil evaporation and transpiration as a function of leaf area index. Most models simulate the soil water balance and the soil-root water supply for transpiration; they limit transpiration when water uptake is insufficient and thereafter reduce dry matter production. Comparisons among multiple crop and global-gridded models in the Agricultural Model Intercomparison and Improvement Project (AgMIP) show surprisingly large differences in simulated ET and crop water use under the same climatic conditions. Model intercomparisons alone are not enough to establish which approaches are correct; there is an urgent need to test these models against field-observed data on ET and crop water use. It is important to test the various ET modules/equations within a single model platform in which other aspects, such as soil water balance and rooting, are held constant, to avoid compensation by other parts of the models. The CSM-CROPGRO model in DSSAT already has ET equations for Priestley-Taylor, Penman-FAO-24, Penman-Monteith-FAO-56, and an hourly energy balance approach. In this work, we added transpiration-efficiency modules to the DSSAT and AgMaize models and tested the various ET equations against available data on ET, soil water balance, and season-long crop water use for soybean, fababean, maize, and other crops where runoff and deep percolation were known or zero. The different ET modules produced considerable differences in predicted ET, growth, and yield.
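For reference, and as general background rather than a description of how any particular model implements them, the FAO-56 Penman-Monteith and Priestley-Taylor formulations named above are commonly written as:

ET_0 = \frac{0.408\,\Delta\,(R_n - G) + \gamma\,\frac{900}{T + 273}\,u_2\,(e_s - e_a)}{\Delta + \gamma\,(1 + 0.34\,u_2)}

\lambda E = \alpha\,\frac{\Delta}{\Delta + \gamma}\,(R_n - G), \qquad \alpha \approx 1.26

where ET_0 is reference evapotranspiration (mm day^-1), \Delta the slope of the saturation vapour pressure curve, R_n net radiation, G soil heat flux, \gamma the psychrometric constant, T mean daily air temperature, u_2 wind speed at 2 m, e_s - e_a the vapour pressure deficit, and \lambda the latent heat of vaporization.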
Abstract:
The web today supports products developed both by professional developers and by end users with more limited knowledge. Although a difference in quality can be expected between the two, both kinds of solution are accepted and incorporated into web applications. In Web 2.0, this behaviour is seen in the development of web components. The goal of this work is to build a persistence model that, supported by a server side and a client side, collects quality metrics for the components while users interact with them. From these metrics it becomes possible to improve the quality of the components. The metrics are collected through PicBit, an application developed so that users can interconnect different components without restrictions and, after interacting with them, express their degree of satisfaction, which is recorded for the quality evaluation. A set of intrinsic metrics is also defined for each component; these do not depend on the user and serve as a reference point for the evaluation. Once both the intrinsic and the user-derived metrics are available, they are correlated in order to analyse the deviations between them and determine the component's own quality. The conclusion that can be drawn from the work is that when users can run usability tests freely, without restrictions, the chance of obtaining favourable results is higher, because those results show how an end user will actually use the application. This way of working is helped by the number of tools available today for monitoring the user flow through the service.
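A minimal sketch of the correlation step described above, comparing an intrinsic component metric with user-reported satisfaction; the metric chosen, the data, and the variable names are illustrative and are not taken from PicBit's actual code:

# Correlate an intrinsic component metric with user satisfaction scores.
from scipy.stats import pearsonr

# One value per component: an intrinsic metric (e.g. load time in ms,
# lower is better) and the mean user satisfaction collected via PicBit.
intrinsic_load_ms = [120, 340, 90, 510, 260]
user_satisfaction = [4.5, 3.1, 4.8, 2.2, 3.6]

r, p_value = pearsonr(intrinsic_load_ms, user_satisfaction)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
# A strong negative r here would mean the intrinsic metric agrees with
# user-perceived quality; large deviations flag components whose
# intrinsic measurements do not predict how users actually rate them.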
Abstract:
La web vive un proceso de cambio constante, basado en una interacción mayor del usuario. A partir de la actual corriente de paradigmas y tecnologías asociadas a la web 2.0, han surgido una serie de estándares de gran utilidad, que cubre la necesidad de los desarrollos actuales de la red. Entre estos se incluyen los componentes web, etiquetas HTML definidas por el usuario que cubren una función concreta dentro de una página. Existe una necesidad de medir la calidad de dichos desarrollos, para discernir si el concepto de componente web supone un cambio revolucionario en el desarrollo de la web 2.0. Para ello, es necesario realizar una explotación de componentes web, considerada como la medición de calidad basada en métricas y definición de un modelo de interconexión de componentes. La plataforma PicBit surge como respuesta a estas cuestiones. Consiste en una plataforma social de construcción de perfiles basada en estos elementos. Desde la perspectiva del usuario final se trata de una herramienta para crear perfiles y comunidades sociales, mientras que desde una perspectiva académica, la plataforma consiste en un entorno de pruebas o sandbox de componentes web. Para ello, será necesario implementar el extremo servidor de dicha plataforma, enfocado a la labor de explotación, por medio de la definición de una interfaz REST de operaciones y un sistema para la recolección de eventos de usuario en la plataforma. Gracias a esta plataforma se podrán discernir qué parámetros influyen positivamente en la experiencia de uso de un componente, así como descubrir el futuro potencial de este tipo de desarrollos.---ABSTRACT---The web evolves into a more interactive platform. From the actual version of the web, named as web 2.0, many paradigms and standards have arisen. One of those standards is web components, a set of concepts to define new HTML tags that covers a specific function inside a web page. It is necessary to measure the quality of this kind of software development, and the aim behind this approach is to determine if this new set of concepts would survive in the actual web paradigm. To achieve this, it is described a model to analyse components, in the terms of quality measure and interconnection model description. PicBit consists of a social platform to use web components. From the point of view of the final user, this platform is a tool to create social profiles using components, whereas from the point of view of technicians, it consists of a sandbox of web components. Thanks to this platform, we will be able to discover those parameters that have a positive effect in the user experience and to discover the potential of this new set of standards into the web 2.0.
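A minimal sketch of a user-event collection endpoint of the kind the abstract describes for the server side; the framework (Flask), the route name, and the payload fields are assumptions for illustration only, not PicBit's actual API:

# Collect user-interaction events posted by the client side.
from flask import Flask, request, jsonify

app = Flask(__name__)
events = []  # in-memory store; a real deployment would persist these

@app.route("/api/events", methods=["POST"])
def collect_event():
    # Hypothetical payload: which component fired the event,
    # what the user did, and when.
    event = request.get_json(force=True)
    events.append({
        "component": event.get("component"),
        "action": event.get("action"),
        "timestamp": event.get("timestamp"),
    })
    return jsonify({"stored": len(events)}), 201

if __name__ == "__main__":
    app.run()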
Abstract:
A panel-method free-wake model for analysing rotor flapping is presented. The aerodynamic model consists of a panel method, which accounts for the three-dimensional rotor geometry, and a free-wake model, which determines the wake shape. The main features of the model are the division of the wake into a near-wake sheet and a far wake represented by a single tip vortex, and the modification of the panel-method formulation to accommodate this particular wake description. The blades are considered rigid with a flap degree of freedom. The solution is obtained with a relaxation method that enforces periodic boundary conditions. Finally, several validations of the code against helicopter and wind turbine experimental data are performed, showing good agreement.
Abstract:
We have identified a novel β amyloid precursor protein (βAPP) mutation (V715M-βAPP770) that cosegregates with early-onset Alzheimer’s disease (AD) in a pedigree. Unlike other familial AD-linked βAPP mutations reported to date, overexpression of V715M-βAPP in human HEK293 cells and murine neurons reduces total Aβ production and increases the recovery of the physiologically secreted product, APPα. V715M-βAPP significantly reduces Aβ40 secretion without affecting Aβ42 production in HEK293 cells. However, a marked increase in N-terminally truncated Aβ ending at position 42 (x-42Aβ) is observed, whereas its counterpart x-40Aβ is not affected. These results suggest that, in some cases, familial AD may be associated with a reduction in the overall production of Aβ but may be caused by increased production of truncated forms of Aβ ending at the 42 position.
Abstract:
Inteins are protein-splicing elements, most of which contain conserved sequence blocks that define a family of homing endonucleases. Like group I introns that encode such endonucleases, inteins are mobile genetic elements. Recent crystallography and computer modeling studies suggest that inteins consist of two structural domains that correspond to the endonuclease and the protein-splicing elements. To determine whether the bipartite structure of inteins is mirrored by the functional independence of the protein-splicing domain, the entire endonuclease component was deleted from the Mycobacterium tuberculosis recA intein. Guided by computer modeling studies, and taking advantage of genetic systems designed to monitor intein function, the 440-aa Mtu recA intein was reduced to a functional mini-intein of 137 aa. The accuracy of splicing of several mini-inteins was verified. This work not only substantiates structure predictions for intein function but also supports the hypothesis that, like group I introns, mobile inteins arose by an endonuclease gene invading a sequence encoding a small, functional splicing element.
Abstract:
Variability in population growth rate is thought to have negative consequences for organism fitness. Theory for matrix population models predicts that the variance in population growth rate should be the sum of the variance in each matrix entry times the squared sensitivity term for that matrix entry. I analyzed the stage-specific demography of 30 field populations from 17 published studies for patterns between the variance of a demographic term and its contribution to population growth. There were no instances in which a matrix entry both was highly variable and had a large effect on population growth rate; instead, correlations between estimates of temporal variance in a term and its contribution to population growth (sensitivity or elasticity) were overwhelmingly negative. In addition, survivorship or growth sensitivities or elasticities always exceeded those of fecundity, implying that the former two terms always contributed more to population growth rate. These results suggest that variable life-history stages tend to contribute relatively little to population growth rates because natural selection may alter life histories to minimize stages with both high sensitivity and high variation.
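Written out, and ignoring covariances among matrix entries, the first-order approximation referred to above is:

V(\lambda) \approx \sum_{i,j} \left( \frac{\partial \lambda}{\partial a_{ij}} \right)^{2} V(a_{ij})

where the a_{ij} are the projection-matrix entries (survival, growth, fecundity terms) and \partial\lambda/\partial a_{ij} are the corresponding sensitivities.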
Abstract:
This research proposes a methodology to improve the individual prediction values produced by an existing regression model without changing either its parameters or its architecture. In other words, we are interested in achieving more accurate results by adjusting the calculated regression prediction values, without modifying or rebuilding the original regression model. Our proposal is to adjust the regression prediction values using individual reliability estimates that indicate whether a single regression prediction is likely to produce an error considered critical by the user of the regression. The proposed method was tested in three sets of experiments using three different types of data. The first set of experiments used synthetically produced data, the second used cross-sectional data from the public UCI Machine Learning Repository, and the third used time series data from ISO-NE (the Independent System Operator in New England). The experiments with synthetic data were performed to verify how the method behaves in controlled situations; here, the method produced its best improvements on the cleaner, artificially generated datasets, with performance degrading progressively as more random noise was added. The experiments with real data from UCI and ISO-NE were carried out to investigate the applicability of the methodology in the real world. The proposed method was able to improve regression prediction values in about 95% of the experiments with real data.
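A sketch of the general idea described above: keep the base regression model untouched and adjust its outputs using a per-prediction reliability estimate. The choice of models, threshold, and correction rule below is illustrative only and is not the procedure used in the study:

# Adjust an existing regressor's predictions using an error-estimating model.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.3, size=500)

# 1) The existing regression model (left unchanged).
base = LinearRegression().fit(X[:300], y[:300])

# 2) A secondary model trained on held-out data to estimate the base
#    model's error: this plays the role of the "reliability estimate".
residuals = y[300:400] - base.predict(X[300:400])
reliability = RandomForestRegressor(random_state=0).fit(X[300:400], residuals)

# 3) Adjust only those predictions whose estimated error exceeds a
#    user-defined "critical" threshold (hypothetical value here).
X_new = X[400:]
pred = base.predict(X_new)
est_err = reliability.predict(X_new)
critical = 0.5
adjusted = np.where(np.abs(est_err) > critical, pred + est_err, pred)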
Abstract:
Using an international, multi-model suite of historical forecasts from the World Climate Research Programme (WCRP) Climate-system Historical Forecast Project (CHFP), we compare the seasonal prediction skill in boreal wintertime between models that resolve the stratosphere and its dynamics ('high-top') and models that do not ('low-top'). We evaluate hindcasts that are initialized in November, and examine the model biases in the stratosphere and how they relate to boreal wintertime (December-March) seasonal forecast skill. We are unable to detect more skill in the high-top ensemble-mean than the low-top ensemble-mean in forecasting the wintertime North Atlantic Oscillation, but model performance varies widely. Increasing the ensemble size clearly increases the skill for a given model. We then examine two major processes involving stratosphere-troposphere interactions (the El Niño/Southern Oscillation (ENSO) and the Quasi-Biennial Oscillation (QBO)) and how they relate to predictive skill on intraseasonal to seasonal time-scales, particularly over the North Atlantic and Eurasia regions. High-top models tend to have a more realistic stratospheric response to El Niño and the QBO compared to low-top models. Enhanced conditional wintertime skill over high latitudes and the North Atlantic region during winters with El Niño conditions suggests a possible role for a stratospheric pathway.
Abstract:
GPLSI Compendium App is a mobile application that will act as a system for managing and disseminating digital content, in which the user can produce summaries of texts from different websites, such as articles or news items, and then share those summaries on social networks or by e-mail. The user will be provided with several summarization methods, together with their descriptions and a short tutorial on how to use the application.
Abstract:
Book of yearly predictions about the sultan, his family, ministers, grand mufti, etc. Records for years 1199-1227 AH [1785-1812 AD]. Years run from nevrūz to nevrūz (beginning of spring). Predictions concern political affairs and the state of health of various individuals. Predictions about weather conditions and eclipses also included. Separate section at end of each year's entry breaks down predictions into months. Manuscript apparently the author's working copy and probably the sole copy.