953 results for Models and Principles
Abstract:
The North Atlantic spring bloom is one of the main events that lead to carbon export to the deep ocean and drive oceanic uptake of CO2 from the atmosphere. Here we use a suite of physical, bio-optical and chemical measurements made during the 2008 spring bloom to optimize and compare three different models of biological carbon export. The observations are from a Lagrangian float that operated south of Iceland from early April to late June, and were calibrated with ship-based measurements. The simplest model is representative of typical NPZD models used for the North Atlantic, while the most complex model explicitly includes diatoms and the formation of fast sinking diatom aggregates and cysts under silicate limitation. We carried out a variational optimization and error analysis for the biological parameters of all three models, and compared their ability to replicate the observations. The observations were sufficient to constrain most phytoplankton-related model parameters to accuracies of better than 15%. However, the lack of zooplankton observations leads to large uncertainties in model parameters for grazing. The simulated vertical carbon flux at 100 m depth is similar between models and agrees well with available observations, but at 600 m the simulated flux is larger by a factor of 2.5 to 4.5 for the model with diatom aggregation. While none of the models can be formally rejected based on their misfit with the available observations, the model that includes export by diatom aggregation has a statistically significantly better fit to the observations and more accurately represents the mechanisms and timing of carbon export based on observations not included in the optimization. Thus models that accurately simulate the upper 100 m do not necessarily accurately simulate export to deeper depths.
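For readers unfamiliar with the model class mentioned above, a generic NPZD (nutrient-phytoplankton-zooplankton-detritus) skeleton is sketched below in LaTeX; the symbols are illustrative placeholders and do not correspond to the specific formulations or optimized parameter values of the three models in this study.

\[
\begin{aligned}
\frac{dP}{dt} &= \mu(N,I)\,P - g(P)\,Z - m_P P,\\
\frac{dZ}{dt} &= \gamma\,g(P)\,Z - m_Z Z,\\
\frac{dD}{dt} &= m_P P + (1-\gamma)\,g(P)\,Z + m_Z Z - r D - w_s \frac{\partial D}{\partial z},\\
\frac{dN}{dt} &= -\mu(N,I)\,P + r D,
\end{aligned}
\]

where \mu(N,I) is the nutrient- and light-dependent growth rate, g(P) the grazing function, \gamma the assimilation efficiency, m_P and m_Z mortality rates, r the remineralization rate, and w_s the detrital sinking speed, the term that ultimately controls simulated export flux.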
Abstract:
High-resolution, ground-based and independent observations, including co-located wind radiometer and lidar stations and infrasound instruments, are used to evaluate the accuracy of general circulation models and data-constrained assimilation systems in the middle atmosphere at northern hemisphere midlatitudes. Systematic comparisons between the observations, the European Centre for Medium-Range Weather Forecasts (ECMWF) operational analyses, including the recent Integrated Forecast System cycles 38r1 and 38r2, NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA) reanalyses, and the free-running Max Planck Institute–Earth System Model–Low Resolution (MPI-ESM-LR) climate model are carried out in both the temporal and spectral domains. We find that ECMWF and MERRA are broadly consistent with lidar and wind radiometer measurements up to ~40 km. For both temperature and horizontal wind components, deviations increase with altitude as the assimilated observations become sparser. Between 40 and 60 km altitude, the standard deviation of the mean difference exceeds 5 K for temperature and 20 m/s for the zonal wind. The largest deviations are observed in winter, when the variability from large-scale planetary waves dominates. Between the lidar data and MPI-ESM-LR there is overall agreement in spectral amplitude down to periods of 15–20 days; at shorter time scales, the model underestimates the variability by ~10 dB. Infrasound observations indicate generally good agreement with ECMWF wind and temperature products. As such, this study demonstrates the potential of the infrastructure of the Atmospheric Dynamics Research Infrastructure in Europe project, which integrates various measurements and provides a quantitative understanding of stratosphere-troposphere dynamical coupling for numerical weather prediction applications.
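To make the comparison metric concrete, the sketch below computes the altitude-resolved mean difference (bias) and the standard deviation of the difference between co-located model and lidar temperature profiles. It is a minimal illustration with synthetic data, not the processing chain used in the study.

import numpy as np

def profile_stats(model_T, lidar_T):
    """model_T, lidar_T: arrays of shape (n_profiles, n_altitudes), in kelvin."""
    diff = model_T - lidar_T                 # pairwise differences at co-located times
    bias = np.nanmean(diff, axis=0)          # mean difference per altitude bin
    sd = np.nanstd(diff, axis=0, ddof=1)     # spread of the difference per altitude bin
    return bias, sd

# Example with synthetic data: 100 co-located profiles on a 0-80 km grid
alt_km = np.arange(0.0, 81.0, 1.0)
model_T = 250.0 + 5.0 * np.random.randn(100, alt_km.size)
lidar_T = 250.0 + 5.0 * np.random.randn(100, alt_km.size)
bias, sd = profile_stats(model_T, lidar_T)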
Abstract:
The ultimate goals of periodontal therapy remain the complete regeneration of those periodontal tissues lost to the destructive inflammatory-immune response, or to trauma, with tissues that possess the same structure and function, and the re-establishment of a sustainable health-promoting biofilm from one characterized by dysbiosis. This volume of Periodontology 2000 discusses the multiple facets of a transition from therapeutic empiricism during the late 1960s, toward regenerative therapies, which is founded on a clearer understanding of the biophysiology of normal structure and function. This introductory article provides an overview on the requirements of appropriate in vitro laboratory models (e.g. cell culture), of preclinical (i.e. animal) models and of human studies for periodontal wound and bone repair. Laboratory studies may provide valuable fundamental insights into basic mechanisms involved in wound repair and regeneration but also suffer from a unidimensional and simplistic approach that does not account for the complexities of the in vivo situation, in which multiple cell types and interactions all contribute to definitive outcomes. Therefore, such laboratory studies require validatory research, employing preclinical models specifically designed to demonstrate proof-of-concept efficacy, preliminary safety and adaptation to human disease scenarios. Small animal models provide the most economic and logistically feasible preliminary approaches but the outcomes do not necessarily translate to larger animal or human models. The advantages and limitations of all periodontal-regeneration models need to be carefully considered when planning investigations to ensure that the optimal design is adopted to answer the specific research question posed. Future challenges lie in the areas of stem cell research, scaffold designs, cell delivery and choice of growth factors, along with research to ensure appropriate gingival coverage in order to prevent gingival recession during the healing phase.
Abstract:
Vestibular cognition has recently gained attention. Despite numerous experimental and clinical demonstrations, it is not yet clear what vestibular cognition really is. For future research in vestibular cognition, adopting a computational approach will make it easier to explore the underlying mechanisms. Indeed, most modeling approaches in vestibular science include a top-down or a priori component. We review recent Bayesian optimal observer models, and discuss in detail the conceptual value of prior assumptions, likelihood and posterior estimates for research in vestibular cognition. We then consider forward models in vestibular processing, which are required in order to distinguish between sensory input that is induced by active self-motion, and sensory input that is due to passive self-motion. We suggest that forward models are used not only in the service of estimating sensory states but they can also be drawn upon in an offline mode (e.g., spatial perspective transformations), in which interaction with sensory input is not desired. A computational approach to vestibular cognition will help to discover connections across studies, and it will provide a more coherent framework for investigating vestibular cognition.
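For readers new to the Bayesian optimal observer framework reviewed above, the canonical Gaussian case shows how prior, likelihood, and posterior combine; this is a textbook sketch, not a model taken from any particular study cited in the review.

\[
p(s \mid x) \propto p(x \mid s)\,p(s), \qquad
p(s) = \mathcal{N}(\mu_p, \sigma_p^2), \quad
p(x \mid s) = \mathcal{N}(s, \sigma_l^2)
\;\Rightarrow\;
\hat{s}_{\mathrm{MAP}} = \frac{x/\sigma_l^2 + \mu_p/\sigma_p^2}{1/\sigma_l^2 + 1/\sigma_p^2}.
\]

The estimate is a reliability-weighted average of the sensory measurement x and the prior mean \mu_p; when the vestibular signal is noisy (large \sigma_l), the prior dominates, which is how such models account for systematic perceptual biases.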
Abstract:
Despite the strong increase in observational data on extrasolar planets, the processes that led to the formation of these planets are still not well understood. However, thanks to the high number of extrasolar planets that have been discovered, it is now possible to look at the planets as a population that puts statistical constraints on theoretical formation models. A method that uses these constraints is planetary population synthesis, where synthetic planetary populations are generated and compared to the actual population. The key element of the population synthesis method is a global model of planet formation and evolution. These models directly predict observable planetary properties based on properties of the natal protoplanetary disc, linking two important classes of astrophysical objects. To do so, global models build on the simplified results of many specialized models that address one specific physical mechanism. We thoroughly review the physics of the sub-models included in global formation models. The sub-models can be classified as models describing the protoplanetary disc (of gas and solids), those that describe one (proto)planet (its solid core, gaseous envelope and atmosphere), and finally those that describe the interactions (orbital migration and N-body interaction). We compare the approaches taken in different global models, discuss the links between specialized and global models, and identify physical processes that require improved descriptions in future work. We then briefly address important results of planetary population synthesis, such as the planetary mass function or the mass-radius relationship. With these statistical results, the global effects of physical mechanisms occurring during planet formation and evolution become apparent, and specialized models describing them can be put to the observational test. Owing to their nature as meta models, global models depend on the results of specialized models, and therefore on the development of the field of planet formation theory as a whole. Because there are important uncertainties in this theory, it is likely that the global models will undergo significant modifications in the future. Despite these limitations, global models can already yield many testable predictions. With future global models addressing the geophysical characteristics of the synthetic planets, it should eventually become possible to make predictions about the habitability of planets based on their formation and evolution.
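The population synthesis workflow summarized above can be sketched schematically as follows. All function names and distributions here are toy stand-ins for the specialized sub-models reviewed in the text, not an actual formation model.

import numpy as np

def draw_disc():
    # Toy Monte Carlo draw standing in for observed protoplanetary disc property distributions
    return {"mass_msun": 10 ** np.random.normal(-2.0, 0.5),
            "metallicity": np.random.normal(0.0, 0.2),
            "lifetime_myr": np.random.uniform(1.0, 10.0)}

def run_global_model(disc):
    # Placeholder for a global formation/evolution model; returns the planets of one system
    n = np.random.poisson(2)
    return [{"mass_mearth": 10 ** np.random.uniform(-0.5, 3.5),
             "a_au": 10 ** np.random.uniform(-1.5, 1.5)} for _ in range(n)]

def observable(planet):
    # Crude detection bias: keep only close-in or very massive planets
    return planet["a_au"] < 1.0 or planet["mass_mearth"] > 100.0

def synthesize_population(n_systems):
    planets = []
    for _ in range(n_systems):
        planets.extend(p for p in run_global_model(draw_disc()) if observable(p))
    return planets

# The synthetic mass or mass-radius distribution is then compared statistically
# (e.g. with a two-sample KS test) against the observed exoplanet population.
population = synthesize_population(10000)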
Abstract:
The selection of a model to guide the understanding and resolution of community problems is an important issue relating to the foundation of public health practice: assessment, policy development, and assurance. Many assessment models produce a diagnosis of community weaknesses, but fail to promote planning and interventions. Rapid Participatory Appraisal (RPA) is a participatory action research model which regards assessment as the first step in the problem-solving process, and claims to achieve assessment and policy development within limited resources of time and money. Literature documenting the fulfillment of these claims, and thereby supporting the utility of the model, is relatively sparse and difficult to obtain. Very few articles discuss the changes resulting from RPA assessments in urban areas, and those that do describe studies conducted outside the U.S.A. This study examines the utility of the RPA model and its underlying theories: systems theory, grounded theory, and principles of participatory change, as illustrated by the case study of a community assessment conducted for the Texas Diabetes Institute (TDI), San Antonio, Texas, and its subsequent outcomes. Diabetes has a high prevalence and is a major issue in San Antonio. Faculty and students conducted the assessment through informal collaboration between two nursing and public health assessment courses, providing practical student experiences. The study area was large, and the flexibility of the model was tested by its use in contiguous sub-regions, reanalyzing aggregated results for the study area. Official TDI reports, and a mail survey of agency employees, described the policy development resulting from community diagnoses revealed by the assessment. The RPA model met the criteria for utility from the perspectives of merit, worth, efficiency, and effectiveness. The RPA model best met the agencies' criteria (merit), met the data needs of TDI in this particular situation (worth), provided valid results within budget, time, and personnel constraints (efficiency), and stimulated policy development by TDI (effectiveness). The RPA model appears to have utility for community assessment, diagnosis, and policy development in circumstances similar to the TDI diabetes study.
Abstract:
The usage of intensity modulated radiotherapy (IMRT) treatments necessitates a significant amount of patient-specific quality assurance (QA). This research has investigated the precision and accuracy of Kodak EDR2 film measurements for IMRT verifications, the use of comparisons between 2D dose calculations and measurements to improve treatment plan beam models, and the dosimetric impact of delivery errors. New measurement techniques and software were developed and used clinically at M. D. Anderson Cancer Center. The software implemented two new dose comparison parameters, the 2D normalized agreement test (NAT) and the scalar NAT index. A single-film calibration technique using multileaf collimator (MLC) delivery was developed. EDR2 film's optical density response was found to be sensitive to several factors: radiation time, length of time between exposure and processing, and phantom material. Precision of EDR2 film measurements was found to be better than 1%. For IMRT verification, EDR2 film measurements agreed with ion chamber results to 2%/2mm accuracy for single-beam fluence map verifications and to 5%/2mm for transverse plane measurements of complete plan dose distributions. The same system was used to quantitatively optimize the radiation field offset and MLC transmission beam modeling parameters for Varian MLCs. While scalar dose comparison metrics can work well for optimization purposes, the influence of external parameters on the dose discrepancies must be minimized. The ability of 2D verifications to detect delivery errors was tested with simulated data. The dosimetric characteristics of delivery errors were compared to patient-specific clinical IMRT verifications. For the clinical verifications, the NAT index and percent of pixels failing the gamma index were exponentially distributed and dependent upon the measurement phantom but not the treatment site. Delivery errors affecting all beams in the treatment plan were flagged by the NAT index, although delivery errors impacting only one beam could not be differentiated from routine clinical verification discrepancies. Clinical use of this system will flag outliers, allow physicists to examine their causes, and perhaps improve the level of agreement between radiation dose distribution measurements and calculations. The principles used to design and evaluate this system are extensible to future multidimensional dose measurements and comparisons.
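The gamma index referred to above is not defined in the abstract; for reference, its conventional form combines a dose-difference and a distance-to-agreement criterion as shown below. The NAT index is specific to this work and is not reproduced here.

\[
\Gamma(\mathbf{r}_m, \mathbf{r}_c) = \sqrt{\frac{\lVert \mathbf{r}_c - \mathbf{r}_m \rVert^2}{\Delta d_M^2} + \frac{\left[D_c(\mathbf{r}_c) - D_m(\mathbf{r}_m)\right]^2}{\Delta D_M^2}}, \qquad
\gamma(\mathbf{r}_m) = \min_{\mathbf{r}_c} \Gamma(\mathbf{r}_m, \mathbf{r}_c),
\]

where \Delta d_M and \Delta D_M are the distance-to-agreement and dose-difference tolerances (e.g. 2 mm and 2%, as in the single-beam verifications above), D_m is the measured and D_c the calculated dose, and a measurement point passes when \gamma \le 1.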
Abstract:
Complex diseases, such as cancer, are caused by various genetic and environmental factors, and their interactions. Joint analysis of these factors and their interactions would increase the power to detect risk factors but is statistically challenging. Bayesian generalized linear models using Student-t prior distributions on the coefficients are a novel method for simultaneously analyzing genetic factors, environmental factors, and interactions. I performed simulation studies using three different disease models and demonstrated that the variable selection performance of Bayesian generalized linear models is comparable to that of Bayesian stochastic search variable selection, an improved method for variable selection when compared to standard methods. I further evaluated the variable selection performance of Bayesian generalized linear models using different numbers of candidate covariates and different sample sizes, and provided a guideline for the sample size required to achieve high power of variable selection using Bayesian generalized linear models, considering different numbers of candidate covariates. Polymorphisms in folate metabolism genes and nutritional factors have been previously associated with lung cancer risk. In this study, I simultaneously analyzed 115 tag SNPs in folate metabolism genes, 14 nutritional factors, and all possible genetic-nutritional interactions from 1239 lung cancer cases and 1692 controls using Bayesian generalized linear models stratified by never, former, and current smoking status. SNPs in MTRR were significantly associated with lung cancer risk across never, former, and current smokers. In never smokers, three SNPs in TYMS and three gene-nutrient interactions, including an interaction between SHMT1 and vitamin B12, an interaction between MTRR and total fat intake, and an interaction between MTR and alcohol use, were also identified as associated with lung cancer risk. These lung cancer risk factors are worthy of further investigation.
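A minimal sketch of the model class described above, for a binary disease outcome y_i regressed on genetic, environmental, and interaction covariates x_{ij}, is given below; the symbols are generic and do not reproduce the exact parameterization used in the dissertation.

\[
\operatorname{logit}\,\Pr(y_i = 1) = \beta_0 + \sum_j x_{ij}\,\beta_j, \qquad \beta_j \sim t_{\nu}(0, s^2).
\]

The heavy-tailed Student-t prior shrinks the many small coefficients (most candidate interactions) toward zero while still allowing a few large effects to escape shrinkage, which is what makes the simultaneous analysis of main effects and interactions feasible.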
Abstract:
My dissertation focuses on developing methods for the detection of gene-gene/environment interactions and imprinting effects for human complex diseases and quantitative traits. It includes three sections: (1) generalizing the Natural and Orthogonal Interaction (NOIA) model, a coding technique originally developed for gene-gene (GxG) interactions, to reduced models; (2) developing a novel statistical approach that allows for modeling gene-environment (GxE) interactions influencing disease risk; and (3) developing a statistical approach for modeling genetic variants displaying parent-of-origin effects (POEs), such as imprinting. In the past decade, genetic researchers have identified a large number of causal variants for human genetic diseases and traits by single-locus analysis, and interaction has now become a major topic in the effort to uncover the complex network between multiple genes or environmental exposures contributing to the outcome. Epistasis, also known as gene-gene interaction, is the departure from additive genetic effects of several genes on a trait, which means that the same alleles of one gene can display different genetic effects under different genetic backgrounds. In this study, we propose to implement the NOIA model for association studies with interaction for human complex traits and diseases. We compare the performance of the new statistical models we developed and the usual functional model by both simulation study and real data analysis. Both simulation and real data analysis revealed higher power of the NOIA GxG interaction model for detecting both main genetic effects and interaction effects. Through application to a melanoma dataset, we confirmed the previously identified significant regions for melanoma risk at 15q13.1, 16q24.3 and 9p21.3. We also identified potential interactions with these significant regions that contribute to melanoma risk. Based on the NOIA model, we developed a novel statistical approach that allows us to model effects from a genetic factor and a binary environmental exposure that jointly influence disease risk. Both simulation and real data analyses revealed higher power of the NOIA model for detecting both main genetic effects and interaction effects for both quantitative and binary traits. We also found that estimates of the parameters from logistic regression for binary traits are no longer statistically uncorrelated under the alternative model when there is an association. Applying our novel approach to a lung cancer dataset, we confirmed four SNPs in the 5p15 and 15q25 regions to be significantly associated with lung cancer risk in the Caucasian population: rs2736100, rs402710, rs16969968 and rs8034191. We also validated that rs16969968 and rs8034191 in the 15q25 region interact significantly with smoking in the Caucasian population. Our approach identified potential interactions of SNP rs2256543 in 6p21 with smoking in contributing to lung cancer risk. Genetic imprinting is the best-known cause of parent-of-origin effects (POEs), whereby a gene is differentially expressed depending on the parental origin of the same alleles. Genetic imprinting affects several human disorders, including diabetes, breast cancer, alcoholism, and obesity. This phenomenon has been shown to be important for normal embryonic development in mammals. Traditional association approaches ignore this important genetic phenomenon.
In this study, we propose a NOIA framework for a single-locus association study that estimates both main allelic effects and POEs. We develop statistical (Stat-POE) and functional (Func-POE) models, and demonstrate conditions for orthogonality of the Stat-POE model. We conducted simulations for both quantitative and qualitative traits to evaluate the performance of the statistical and functional models with different levels of POEs. Our results showed that the newly proposed Stat-POE model, which ensures orthogonality of the variance components if Hardy-Weinberg equilibrium (HWE) holds or the minor and major allele frequencies are equal, had greater power for detecting the main allelic additive effect than the Func-POE model, which codes according to allelic substitutions, for both quantitative and qualitative traits. The power for detecting the POE was the same for the Stat-POE and Func-POE models under HWE for quantitative traits.
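As a point of reference for the parent-of-origin models discussed above, one common single-locus coding (illustrative only; not the exact Stat-POE or Func-POE parameterization developed in this work) augments the usual additive and dominance terms with an imprinting contrast between the two heterozygote classes:

\[
y = \mu + a\,x_a + d\,x_d + i\,x_i + \varepsilon, \qquad
x_i = \begin{cases} +1 & \text{heterozygote, minor allele inherited from the father} \\ -1 & \text{heterozygote, minor allele inherited from the mother} \\ 0 & \text{homozygote,} \end{cases}
\]

where x_a and x_d are the standard additive and dominance codings; a nonzero i indicates a parent-of-origin effect such as imprinting.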
Abstract:
Neural bruise prediction models have been implemented in the laboratory for the most widely traded fruit species and varieties, allowing prediction of whether damage is acceptable or grounds for rejection with respect to the EC Standards. Different models have been built for both quasi-static (compression) and dynamic (impact) loads, covering the whole commercial ripening period of the fruits. A simulation process has been developed that gathers the information from the laboratory bruise models and the load-sensor calibrations of different electronic devices (IS-100 and DEA-1, for impact and compression loads respectively). An evaluation methodology has been designed that combines the information on the mechanical properties of the fruits with the loading records of the electronic devices. The evaluation system makes it possible to assess the current state of the fruit handling process and machinery.
Abstract:
Enabling Subject Matter Experts (SMEs) to formulate knowledge without the intervention of Knowledge Engineers (KEs) requires providing SMEs with methods and tools that abstract the underlying knowledge representation and allow them to focus on modeling activities. Bridging the gap between SME-authored models and their representation is challenging, especially in the case of complex knowledge types like processes, where aspects like frame management, data, and control flow need to be addressed. In this paper, we describe how SME-authored process models can be provided with an operational semantics and grounded in a knowledge representation language like F-logic in order to support process-related reasoning. The main results of this work include a formalism for process representation and a mechanism for automatically translating process diagrams into executable code that follows this formalism. Of all the process models authored by SMEs during evaluation, 82% were well-formed, and all of these executed correctly. Additionally, two optimizations applied to the code generation mechanism produced performance improvements at reasoning time of 25% and 30%, respectively, with respect to the base case.
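To give a flavor of the translation step described above, the sketch below turns a toy, SME-style process description into F-logic-flavoured frame statements. The dictionary schema and the generated syntax are purely illustrative assumptions and do not reproduce the paper's actual formalism or code generator.

# Hypothetical, simplified process model as an SME might author it in a diagramming tool
process = {
    "name": "ReviewAndApprove",
    "steps": [
        {"id": "step1", "label": "Review document", "next": "step2"},
        {"id": "step2", "label": "Approve document", "next": None},
    ],
}

def to_frames(proc):
    """Emit F-logic-style frame statements (as strings) for each process step."""
    lines = [f'{proc["name"]}:Process.']
    for step in proc["steps"]:
        attrs = [f'label -> "{step["label"]}"']
        if step["next"]:
            attrs.append(f'next -> {step["next"]}')   # control flow between steps
        lines.append(f'{step["id"]}:ProcessStep[{", ".join(attrs)}].')
    return "\n".join(lines)

print(to_frames(process))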
Abstract:
Food is a complex system formed by several structures at different scales: macroscopic and microscopic. Many properties of foods that are relevant to process engineering or quality and postharvest treatments are related to their microstructure. This Ph.D Thesis proposes a complete methodology for food structure determination, in a multiscale way, based on the Nuclear Magnetic Resonance (NMR) phenomenon since NMR techniques are non-invasive and non-destructive, and allow both macro- and micro-structure study. Different NMR procedures are used depending on the structure level under study.
For the macrostructure level, Magnetic Resonance Imaging (MRI) revealed its usefulness for food characterization. For microstructure insight, MRI required high acquisition times, which is a hindrance to transferring the technique to industrial applications. Therefore, optimization of NMR procedures based on T1/T2 relaxometry sequences was a key strategy in this Thesis. These NMR relaxometry protocols were successfully implemented at high magnetic field. The microstructure of entire food products has been characterized for the first time using these protocols. Two different types of food products have been studied: food models and actual food (apples). Furthermore, as a first step toward implementation in the food industry, a grading line system, specially designed for working under NMR conditions in previous works of the LPF-TAGRALIA group, was improved. The study and selection of the most suitable rapid sequences to detect two different types of disorders in apples (watercore and internal breakdown) were performed, and real-time image motion correction was applied. In addition, artificial vision protocols for the automatic classification of apples potentially affected by watercore were applied. This document is divided into seven chapters: Chapter 2 explains the thesis background and the framework of the project within which the work was carried out. Chapter 3 comprises the state of the art. Chapter 4 establishes the objectives of this Ph.D thesis. The results are divided into five sections (in Chapter 5) that correspond to published peer-reviewed works. Section 5.1 assesses watercore development in apples with MRI and studies the effect of fruit location in the canopy. Section 5.2 is an MRI and 2D relaxometry study for macro- and microstructure assessment in food models. Section 5.3 is a non-destructive microstructural study using 2D T1/T2 relaxometry on watercore-affected apples. Section 5.4 makes a comparison of X-ray CT and MRI on the watercore disorder of different apple cultivars. Section 5.5 is a study of online MRI sequences for the evaluation of apple internal quality. The subsequent chapters offer a general discussion and conclusions (Chapter 6 and Chapter 7, respectively) of all the work performed in the frame of this Ph.D thesis (two peer-reviewed journal articles, one book chapter and one international congress contribution). Finally, three appendices are included, in which an introduction to NMR principles is offered and two published proceedings regarding the effect of fiber on the rehydration of extruded breakfast cereal are presented. The most relevant results can be summarized in three sections: results on macrostructure, results on microstructure and results on on-line MRI. Results on macrostructure: - MRI was successfully used for macrostructure characterization. In particular, 3D reconstruction of MR images of apples allowed two different types of watercore (radial and block) to be identified, characterized by the percentage of damage and the connectivity (Euler number). - MRI provided better contrast for watercore than X-ray CT, as verified on identical samples. Furthermore, X-ray CT image acquisition time was around 12 times longer (25 minutes) than MRI acquisition time (2 minutes 2 seconds). Results on microstructure: - 2D T1/T2 relaxometry was successfully applied for microstructure (subcellular level) characterization. 2D T1/T2 relaxometry sequences were applied for the first time at high field to entire food pieces, providing a non-destructive way to carry out microstructure studies.
- The use of MRI together with 2D T1/T2 relaxometry sequences allows a non-destructive multiscale study of food. Results on on-line MRI: - The use of on-line MRI was successful for the identification of two different internal disorders in apples: watercore and internal breakdown. FLASH imaging was a suitable technique for the on-line detection of watercore disorder in apples, with no slice selection, since watercore is a physiological disorder that may develop anywhere in the apple volume. 1.3 fruits were imaged per second (768 ms per fruit). UFLARE imaging was a suitable sequence for the on-line detection of internal breakdown disorder in apples. Slice selection was used, as internal breakdown is usually located in the central slice of the apple volume. 0.67 fruits were imaged per second (1475 ms per fruit). In both cases (FLASH and UFLARE) motion correction was performed in real time, during the acquisition of the images.
Abstract:
The REpresentational State Transfer (REST) architectural style describes the design principles that made the World Wide Web scalable, and the same principles can be applied in an enterprise context to achieve loosely coupled and scalable application integration. In recent years, RESTful services have been gaining traction in the industry and are commonly used as a simpler alternative to SOAP Web Services. However, one of the main drawbacks of RESTful services is the lack of standard mechanisms to support advanced quality-of-service requirements that are common to enterprises. Transaction processing is one of the essential features of enterprise information systems, and several transaction models have been proposed in past years to fill the gap of transaction processing in RESTful services. The goal of this paper is to analyze the state-of-the-art RESTful transaction models and identify the current challenges.
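As a concrete illustration of the kind of transaction model surveyed here, the sketch below implements a simple try/confirm-or-cancel ("reservation") pattern using only standard HTTP verbs and the requests library. The endpoint URLs and resource shapes are hypothetical, and this is just one of several models proposed in the literature.

import requests

BASE = "https://example.com/api"  # hypothetical services

def book_trip(flight, hotel):
    """Tentatively reserve two resources, then confirm both or cancel both."""
    reservations = []
    try:
        for service, payload in (("flights", flight), ("hotels", hotel)):
            r = requests.post(f"{BASE}/{service}/reservations", json=payload, timeout=10)
            r.raise_for_status()
            reservations.append(r.headers["Location"])    # URI of the tentative reservation
        for uri in reservations:
            requests.put(f"{uri}/confirmation", timeout=10).raise_for_status()  # confirm phase
        return True
    except requests.RequestException:
        for uri in reservations:                          # compensate: release tentative reservations
            requests.delete(uri, timeout=10)
        return False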
Abstract:
All crop models, whether site-specific or global-gridded and regardless of crop, simulate daily crop transpiration and soil evaporation during the crop life cycle, resulting in seasonal crop water use. Modelers use several methods for predicting daily potential evapotranspiration (ET), including FAO-56, Penman-Monteith, Priestley-Taylor, Hargreaves, full energy balance, and transpiration water efficiency. They use extinction equations to partition energy to soil evaporation or transpiration, depending on leaf area index. Most models simulate soil water balance and soil-root water supply for transpiration, and limit transpiration if water uptake is insufficient, and thereafter reduce dry matter production. Comparisons among multiple crop and global gridded models in the Agricultural Model Intercomparison and Improvement Project (AgMIP) show surprisingly large differences in simulated ET and crop water use for the same climatic conditions. Model intercomparisons alone are not enough to know which approaches are correct. There is an urgent need to test these models against field-observed data on ET and crop water use. It is important to test various ET modules/equations in a model platform where other aspects such as soil water balance and rooting are held constant, to avoid compensation caused by other parts of models. The CSM-CROPGRO model in DSSAT already has ET equations for Priestley-Taylor, Penman-FAO-24, Penman-Monteith-FAO-56, and an hourly energy balance approach. In this work, we added transpiration-efficiency modules to DSSAT and AgMaize models and tested the various ET equations against available data on ET, soil water balance, and season-long crop water use of soybean, fababean, maize, and other crops where runoff and deep percolation were known or zero. The different ET modules created considerable differences in predicted ET, growth, and yield.
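For readers unfamiliar with the ET options listed above, the Priestley-Taylor formulation is one of the simpler modules and illustrates the kind of equation being intercompared (shown here in its textbook form, not necessarily as implemented in any particular model):

\[
\lambda E_p = \alpha\,\frac{\Delta}{\Delta + \gamma}\,(R_n - G),
\]

where \lambda E_p is the potential latent heat flux, \Delta the slope of the saturation vapour pressure curve at air temperature, \gamma the psychrometric constant, R_n net radiation, G soil heat flux, and \alpha \approx 1.26 an empirical coefficient. Penman-Monteith FAO-56, by contrast, adds explicit aerodynamic and surface resistance terms, which is one reason the different ET modules can diverge under the same climatic conditions.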
Abstract:
Video quality assessment is still a necessary tool for defining the criteria that characterize a signal meeting the viewing requirements imposed by the end user. New technologies, such as stereoscopic 3D video and formats at and beyond high definition, require new analyses of video features in order to obtain the highest possible user satisfaction. Among the problems examined during this doctoral thesis, phenomena were identified that affect different phases of the audiovisual production chain and a variety of content types. First, the content generation process should be sufficiently controlled through parameters that prevent visual discomfort in the observer's eye and, consequently, visual fatigue; such control is especially necessary for stereoscopic 3D sequences, both animated and live-action. Second, video quality assessment related to compression processes should be improved, because some objective metrics are not always adapted to the user's perception. The use of psychovisual models and visual attention maps allows image regions to be weighted, giving more importance to the areas on which the user will most probably focus. These two fields of work are related through the definition of the term saliency. Saliency is the capacity of the human visual system to characterize an image by highlighting the areas that are most attractive to the human eye. Saliency in the generation of 3DTV content refers mainly to the depth simulated by the optical illusion, i.e. the distance from the virtual object to the human eye. In two-dimensional video, on the other hand, saliency is not based on virtual depth but on other features, such as motion, level of detail, the position of pixels in the frame or the presence of faces, which are the basic features of the visual attention model developed here, as demonstrated with tests. The extensive literature on visual comfort assessment was reviewed, and new preliminary subjective tests with users were performed, in order to detect the features that increase the probability of discomfort. With this methodology, the conclusions drawn confirmed that one common source of visual discomfort is an abrupt change of disparity at video transitions, apart from other degradations such as window violation. Further subjective assessments were performed to quantify the distribution of disparities over different sequences. The results confirmed that abrupt changes in negative-parallax environments produce accommodation-vergence mismatches, derived from the increased time required for the crystalline lens to focus on the virtual objects. Finally, to develop metrics adapted to the human visual system, additional subjective tests were carried out to determine the importance of each factor in masking a given distortion. The results demonstrated a slight improvement after applying visual attention weighting to the objective metrics; this pixel-weighting process brings the objective quality scores closer to the human eye's response.
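To make the idea of applying visual attention weighting to an objective metric concrete, the sketch below computes a saliency-weighted mean squared error and the corresponding weighted PSNR with NumPy. It is a generic illustration of the weighting principle, not the specific psychovisual model or metrics developed in the thesis.

import numpy as np

def weighted_psnr(reference, distorted, saliency, peak=255.0):
    """PSNR in which each pixel's squared error is weighted by a normalized saliency map."""
    w = saliency / saliency.sum()                       # normalize weights so they sum to 1
    wmse = np.sum(w * (reference.astype(float) - distorted.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / wmse)

# Example: the same distortion is penalized more when it falls on a highly salient region
ref = np.full((64, 64), 128.0)
dist = ref.copy(); dist[20:30, 20:30] += 20.0           # localized distortion
sal_flat = np.ones_like(ref)                            # uniform attention
sal_peak = sal_flat.copy(); sal_peak[20:30, 20:30] = 10.0  # attention on the distorted area
print(weighted_psnr(ref, dist, sal_flat), weighted_psnr(ref, dist, sal_peak))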