950 results for Novel methodology


Relevance:

100.00%

Publisher:

Abstract:

A novel phosphoramidite, N,N-diisopropylamino-2-cyanoethyl-ortho-methylbenzylphosphoramidite 1, was prepared. The reaction of 1 with DMTrT and the subsequent derivatisation of the phosphite triester product under solution-phase, Michaelis–Arbuzov conditions were investigated. Coupling of 1 with the terminal hydroxyl groups of support-bound oligodeoxyribonucleotides and subsequent reaction with an activated disulfide yielded oligonucleotides bearing a terminal, phosphorothiolate-linked, lipophilic moiety. The oligomers were readily purified by RP-HPLC. Silver(I)-mediated cleavage of the phosphorothiolate linkage and desalting of the oligonucleotides were performed in one step to cleanly yield the corresponding phosphate monoester-terminated oligomers.

Relevance:

100.00%

Publisher:

Abstract:

A novel methodology based on instrumented indentation was developed to characterize the mechanical properties of amorphous materials. The approach rests on the postulate that a characteristic indentation pressure exists that is proportional to the hardness. This hypothesis was validated numerically. The method overcomes the limitations of conventional indentation models, namely pile-up effects and pressure-sensitive materials.
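As a minimal illustration of the quantities involved, the postulate can be sketched as follows; the function names, units and the proportionality constant `c` are hypothetical, not taken from the paper:

```python
def characteristic_pressure(load_mn, projected_area_um2):
    """Mean contact pressure in GPa from indentation load (mN) and projected
    contact area (um^2); 1 mN / um^2 = 1 GPa."""
    return load_mn / projected_area_um2

def hardness_from_pressure(p_char_gpa, c=1.0):
    """The postulated proportionality H = p_char / c; the constant c is
    hypothetical here and would be calibrated numerically."""
    return p_char_gpa / c
```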

Relevance:

100.00%

Publisher:

Abstract:

Wireless sensor networks (WSNs) have shown their potential in various applications, bringing many benefits to users in both research and industrial areas. For many setups, it is envisioned that WSNs will consist of tens to hundreds of nodes operating on small batteries. However, due to the diversity of deployment environments and the resource constraints on radio communication, sensing ability and energy supply, it is very challenging to plan an optimized WSN topology and predict its performance before real deployment. During the network planning phase, connectivity, coverage, cost, network longevity and service quality should all be considered. Designers therefore need comprehensive, interdisciplinary knowledge, including networking, radio engineering and embedded systems, to efficiently construct a reliable WSN for any specific type of environment. There is still a lack of analysis and experience to guide WSN designers in constructing a WSN topology efficiently without many trials, so simulation is a feasible approach to the quantitative analysis of WSN performance. However, the existing planning algorithms and tools have, to some extent, serious limitations for the practical design of reliable WSN topologies. Only a few tackle the 3D deployment issue, while an overwhelming number of works place devices in a 2D scheme. Without considering the full dimension, the impacts of the environment on WSN performance are not completely studied, so the values of evaluated metrics such as connectivity and sensing coverage are not accurate enough to support proper decisions. Even fewer planning methods model sensing coverage and radio propagation in the realistic scenario where obstacles exist.
Radio signals propagate with multi-path phenomena in the real world, in which direct, reflected and diffracted paths all contribute to the received signal strength. Besides, obstacles between a sensor and its target objects may block the sensing signals, thus creating coverage holes in the application. None of the existing planning algorithms model network longevity and packet delivery capability properly and practically; they often employ one-sided and unrealistic formulations, and their optimization targets are similarly one-sided. Without comprehensive evaluation of the important metrics, the performance of planned WSNs cannot be relied upon or fully optimized. Modelling the environment is usually time-consuming and expensive, yet none of the current works offer a method to model the 3D deployment environment efficiently and accurately. Many researchers are therefore trapped by this issue, and their algorithms can only be evaluated in a single scenario, without the possibility of testing their robustness and feasibility in different environments. In this thesis, we propose a novel planning methodology and an intelligent WSN planning tool to assist WSN designers in efficiently planning reliable WSNs. First, a new method is proposed to efficiently and automatically model 3D indoor and outdoor environments. To the best of our knowledge, this is the first time that image-understanding algorithms have been applied to automatically reconstruct 3D outdoor and indoor scenarios for signal propagation and network planning purposes. The experimental results indicate that the proposed methodology accurately recognizes different objects in satellite images of the outdoor target regions and in scanned floor plans of the indoor areas. Its mechanism offers users the flexibility to reconstruct different types of environment without any human interaction.
It thereby significantly reduces the human effort, cost and time spent on reconstructing a 3D geographic database and allows WSN designers to concentrate on the planning issues. Secondly, an efficient ray-tracing engine is developed to accurately and practically model radio propagation and sensing signals on the constructed 3D map. The engine contributes both efficiency and accuracy to the estimated results. By using image-processing concepts, including the kd-tree space-division algorithm and a modified polar-sweep algorithm, rays are traced efficiently without testing all the primitives in the scene. The proposed radio propagation model accounts not only for the materials of obstacles but also for their locations along the signal path. The sensing signal of sensor nodes, which is sensitive to obstacles, benefits from the ray-tracing algorithm via obstacle detection. The performance of this modelling method is robust and accurate compared with conventional methods, and experimental results imply that it is suitable for both outdoor urban scenes and indoor environments. Moreover, it can be applied to either GSM communication or the ZigBee protocol by varying the frequency parameter of the radio propagation model. Thirdly, a WSN planning method is proposed to tackle the above-mentioned challenges and efficiently deploy reliable WSNs. More metrics (connectivity, coverage, cost, lifetime, packet latency and packet drop rate) are modelled, and more practically, than in other works. In particular, the 3D ray-tracing method is used to model the radio links and sensing signals, which are sensitive to obstruction by obstacles; network routing is constructed using the AODV protocol; and network longevity, packet delay and packet drop rate are obtained by simulating practical events in the WSNet simulator, which, to the best of our knowledge, is the first time a network simulator has been involved in a planning algorithm.
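The multi-wall flavour of such a propagation model can be sketched as a link budget: free-space loss for the direct ray plus a material attenuation for each obstacle the ray crosses. This is an illustrative sketch, not the thesis's actual formulation, and all names and values are assumptions:

```python
import math

def free_space_loss_db(distance_m, freq_hz):
    """Friis free-space path loss in dB."""
    c = 3.0e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

def received_power_dbm(tx_dbm, distance_m, freq_hz, wall_losses_db):
    """Link budget for the direct ray: transmit power minus free-space loss
    minus a per-obstacle attenuation (dB) for each wall the ray crosses.
    The wall list would come from a ray-obstacle intersection test."""
    return tx_dbm - free_space_loss_db(distance_m, freq_hz) - sum(wall_losses_db)
```

Varying `freq_hz` is what lets one model switch between, say, GSM and ZigBee bands.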
Moreover, a multi-objective optimization algorithm is developed to cater for the characteristics of WSNs. Its capability of providing multiple optimized solutions simultaneously allows users to make their own decisions accordingly, and the results are more comprehensively optimized than those of other state-of-the-art algorithms. iMOST is developed by integrating the introduced algorithms to assist WSN designers in efficiently planning reliable WSNs for different configurations. The abbreviated name iMOST stands for Intelligent Multi-objective Optimization Sensor network planning Tool. iMOST contributes: (1) convenient operation with a user-friendly vision system; (2) efficient and automatic 3D database reconstruction and fast 3D object design for both indoor and outdoor environments; (3) multiple multi-objective optimized 3D deployment solutions, with user-configurable network properties, so it can adapt to various WSN applications; (4) visual presentation to the user of the deployment solutions in 3D space and the corresponding evaluated performance; and (5) online availability of the Node Placement Module of iMOST, along with the source code of the other two rebuilt heuristics. WSN designers will therefore benefit from this tool for efficiently constructing environment databases and for practically and efficiently planning reliable WSNs for both outdoor and indoor applications. With the open source code, they can also compare their own algorithms with ours and so contribute to this academic field. Finally, solid real-world results are obtained for both indoor and outdoor WSN planning. Deployments have been realized in both indoor and outdoor environments based on the provided planning solutions, and the measured results coincide well with the estimates.
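The core of returning multiple optimized solutions simultaneously is a non-dominated (Pareto) filter over the objective vectors. A minimal sketch, assuming all metrics are to be minimized; this is not the thesis's actual algorithm:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated objective vectors; these are the multiple
    optimized solutions presented to the user."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]
```

For example, with objective vectors (cost, latency), the point (3, 3) is dominated by (2, 2) and would be filtered out.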
The proposed planning algorithm is adaptable to the WSN designer's wishes and configuration, and it offers the flexibility to plan small- and large-scale, indoor and outdoor 3D deployments. The thesis is organized in 7 chapters. In Chapter 1, WSN applications and the motivations of this work are introduced, the state-of-the-art planning algorithms and tools are reviewed, the challenges are stated and the proposed methodology is briefly introduced. In Chapter 2, the proposed 3D environment reconstruction methodology is introduced and its performance is evaluated for both outdoor and indoor environments. The developed ray-tracing engine and the proposed radio propagation modelling method are described in detail in Chapter 3, and their performance is evaluated in terms of computational efficiency and accuracy. Chapter 4 presents the modelling of the important WSN metrics and the proposed multi-objective optimization planning algorithm, whose performance is compared with the other state-of-the-art planning algorithms. The intelligent WSN planning tool iMOST is described in Chapter 5. Real WSN deployments are carried out based on the planned solutions for both indoor and outdoor scenarios, and the important measurements and results are analysed in Chapter 6. Chapter 7 concludes the thesis and discusses future work.

Relevance:

100.00%

Publisher:

Abstract:

Future climate change will likely represent a major stress to shallow aquatic and coastal marine communities around the world. Most climate change research, particularly with regard to increased pCO2 and ocean acidification, relies on ex situ mesocosm experimentation, isolating target organisms from their environment. Such mesocosms allow greater experimental control of some variables, but can often cause unrealistic changes in a variety of environmental factors, leading to “bottle effects.” Here we present an in situ technique for altering dissolved pCO2 within nearshore benthic communities (e.g., macrophytes, algae, and/or corals) using submerged clear, open-top chambers. Our technique utilizes a flow-through design that replicates natural water flow conditions and minimizes caging effects. The clear, open-top design additionally ensures that adequate light reaches the benthic community. Our results show that CO2 concentrations and pH can be successfully manipulated for long durations within the open-top chambers, continuously replicating forecasts for the year 2100. Enriched chambers displayed an average 0.46-unit reduction in pH compared with ambient chambers over a 6-month period. Additionally, CO2 and HCO3− concentrations were significantly higher within the enriched chambers. We discuss the advantages and disadvantages of this technique in comparison with other ex situ mesocosm designs used for climate change research.
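As a quick check of what a 0.46-unit pH drop means in hydrogen-ion terms: pH is a base-10 logarithm, so the activity increases by a factor of 10^0.46, roughly 2.9-fold. A one-line sketch:

```python
def hplus_factor(delta_ph):
    """Factor by which hydrogen-ion activity increases for a pH drop of
    delta_ph units (pH = -log10 of hydrogen-ion activity)."""
    return 10.0 ** delta_ph
```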

Relevance:

100.00%

Publisher:

Abstract:

The stretch blow moulding (SBM) process is the main method for the mass production of PET containers, and understanding the constitutive behaviour of PET during this process is critical for designing the optimum product and process. However, owing to its nonlinear viscoelastic behaviour, PET is highly sensitive to its thermomechanical history, which makes modelling its constitutive behaviour complex. This means the constitutive model will be useful only if it is known to be valid under the actual conditions of interest in the SBM process. The aim of this work was to develop a new material characterization method providing new data on the deformation behaviour of PET relevant to the SBM process. To achieve this goal, a reliable and robust characterization method was developed, based on an instrumented stretch rod and a digital image correlation system, to determine the stress-strain relationship of the material in deforming preforms during free stretch-blow tests. The effects of preform temperature and air mass flow rate on the deformation behaviour of PET were also investigated.
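One plausible data-reduction step from stretch-rod force and DIC-measured stretch to true stress and strain, assuming an incompressible specimen under uniaxial stretch; this is a sketch under those assumptions, not the paper's exact procedure:

```python
import math

def true_stress_strain(force_n, stretch, initial_area_m2):
    """True (Cauchy) stress and Hencky strain for uniaxial stretching of an
    incompressible specimen: the current cross-section is A0 / stretch."""
    current_area = initial_area_m2 / stretch  # volume conservation
    true_stress = force_n / current_area      # Pa
    true_strain = math.log(stretch)           # Hencky (logarithmic) strain
    return true_stress, true_strain
```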

Relevance:

100.00%

Publisher:

Abstract:

The European Union continues to exert a large influence on the direction of member states' energy policy. The 2020 targets for renewable energy integration have had a significant impact on the operation of current power systems, forcing a rapid change from fossil-fuel-dominated systems to those with high levels of renewable power. Additionally, the overarching aim of an internal energy market throughout Europe has placed, and will continue to place, importance on multi-jurisdictional co-operation regarding energy supply. Combining these renewable energy and multi-jurisdictional supply goals results in a complicated multi-vector energy system, in which understanding the interactions between fossil fuels, renewable energy, interconnection and economic power system operation is increasingly important. This paper provides a novel and systematic methodology for fully understanding the changing dynamics of interconnected energy systems from a gas and power perspective. A fully realistic unit commitment and economic dispatch model of the 2030 power systems in Great Britain and Ireland, combined with a representative gas transmission energy flow model, is developed. The importance of multi-jurisdictional integrated energy system operation in one of the most strategically important renewable energy regions is demonstrated.
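The economic-dispatch component of such models can be illustrated with a toy merit-order dispatch (cheapest units committed first). The unit names and costs below are invented, and a real unit-commitment model adds start-up, ramping and network constraints this sketch omits:

```python
def merit_order_dispatch(demand_mw, units):
    """Dispatch generating units in ascending marginal-cost order until
    demand is met. units: list of (name, capacity_mw, marginal_cost_per_mwh).
    Returns {name: output_mw}."""
    dispatch = {}
    remaining = demand_mw
    for name, capacity, _cost in sorted(units, key=lambda u: u[2]):
        output = min(capacity, remaining)
        if output > 0:
            dispatch[name] = output
        remaining -= output
        if remaining <= 0:
            break
    return dispatch
```

With zero-marginal-cost wind first in the merit order, higher renewable capacity directly displaces gas and coal output in this toy model.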

Relevance:

100.00%

Publisher:

Abstract:

Despite the wide swath of applications where multiphase fluid contact lines exist, there is still no consensus on an accurate and general simulation methodology. Most prior numerical work has imposed one of the many dynamic contact-angle theories at solid walls. Such approaches are inherently limited by the accuracy of the theory. In fact, when inertial effects are important, the contact angle may be history dependent and, thus, any single mathematical function is inappropriate. Given these limitations, the present work has two primary goals: (1) create a numerical framework that allows the contact angle to evolve naturally with appropriate contact-line physics, and (2) develop equations and numerical methods such that contact-line simulations may be performed on coarse computational meshes.

Fluid flows affected by contact lines are dominated by capillary stresses and require accurate curvature calculations. The level set method was chosen to track the fluid interfaces because it makes accurate interface-curvature calculation easy. Unfortunately, level set reinitialization suffers from an ill-posed mathematical problem at contact lines: a "blind spot" exists. Standard techniques to handle this deficiency are shown to introduce parasitic velocity currents that artificially deform freely floating (non-prescribed) contact angles. As an alternative, a new relaxation-equation reinitialization is proposed to remove these spurious velocity currents, and the concept is further explored with level-set extension velocities.
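The curvature calculation that motivates the level set choice is standard: kappa = div(grad(phi)/|grad(phi)|). A minimal central-difference sketch on a uniform 2D grid (illustrative only, not the paper's solver); for a signed-distance function of a circle of radius R, the computed curvature near the interface should approach 1/R:

```python
def curvature_2d(phi, dx):
    """Interface curvature kappa = div(grad(phi)/|grad(phi)|) via central
    differences; phi is a nested list phi[j][i] (row j = y, column i = x).
    Boundary cells are left at 0."""
    ny, nx = len(phi), len(phi[0])
    kappa = [[0.0] * nx for _ in range(ny)]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            px = (phi[j][i + 1] - phi[j][i - 1]) / (2 * dx)
            py = (phi[j + 1][i] - phi[j - 1][i]) / (2 * dx)
            pxx = (phi[j][i + 1] - 2 * phi[j][i] + phi[j][i - 1]) / dx ** 2
            pyy = (phi[j + 1][i] - 2 * phi[j][i] + phi[j - 1][i]) / dx ** 2
            pxy = (phi[j + 1][i + 1] - phi[j + 1][i - 1]
                   - phi[j - 1][i + 1] + phi[j - 1][i - 1]) / (4 * dx ** 2)
            g2 = px * px + py * py
            if g2 > 1e-12:  # avoid division by zero away from smooth data
                kappa[j][i] = (pxx * py * py - 2 * px * py * pxy
                               + pyy * px * px) / g2 ** 1.5
    return kappa
```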

To capture contact-line physics, two classical boundary conditions, the Navier-slip velocity boundary condition and a fixed contact angle, are implemented in direct numerical simulations (DNS). DNS are found to converge only if the slip length is well resolved by the computational mesh. Unfortunately, since the slip length is often very small compared to the fluid structures, these simulations are not computationally feasible for large systems. To address the second goal, a new methodology is proposed that relies on the volumetric-filtered Navier-Stokes equations. Two unclosed terms, an average curvature and a viscous shear (VS), are proposed to represent the missing microscale physics on a coarse mesh.

All of these components are then combined into a single framework and tested for a water droplet impacting a partially wetting substrate. Very good agreement is found between the experimental measurements and the numerical simulation for the evolution of the contact diameter in time. Such a comparison would not be possible with prior methods, since the Reynolds number Re and capillary number Ca are large. Furthermore, the experimentally approximated slip-length ratio is well outside the range currently achievable by DNS. This framework is a promising first step towards simulating complex physics in capillary-dominated flows at reasonable computational expense.

Relevance:

70.00%

Publisher:

Abstract:

Curves are a common feature of road infrastructure; however, crashes on road curves are associated with increased risk of injury and fatality to vehicle occupants. Countermeasures require the identification of contributing factors, yet current approaches use traditional statistical methods and have not used self-reported narrative claims to identify factors related to the driver, vehicle and environment in a systemic way. Text mining of 3434 road-curve crash claim records, filed between 1 January 2003 and 31 December 2005 at a major insurer in Queensland, Australia, was undertaken to identify risk levels and contributing factors. Rough set analysis was applied to the insurance claim narratives to identify significant contributing factors to crashes and their associated severity. New contributing factors unique to curve crashes were identified (e.g., tree, phone, over-steer) in addition to those previously identified via traditional statistical analysis of police and licensing authority records. Text mining is a novel methodology for improving knowledge of risk and of the factors contributing to road-curve crash severity. Future road-curve crash countermeasures should more fully consider the interrelationships between the environment, the road, the driver and the vehicle, and education campaigns in particular could highlight the increased risk of crashes on road curves.
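A minimal sketch of mining candidate contributing factors from free-text claim narratives via term counts; the study itself used rough set analysis, which this does not reproduce, and the stopword list is illustrative:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "on", "and", "was", "to", "of", "in", "at"}

def factor_counts(narratives):
    """Count candidate factor terms across free-text claim narratives,
    skipping a small stopword list."""
    counts = Counter()
    for text in narratives:
        for tok in re.findall(r"[a-z']+", text.lower()):
            if tok not in STOPWORDS:
                counts[tok] += 1
    return counts
```

Frequent terms such as "tree" or "steer" would then be candidates for closer analysis against crash severity.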

Relevance:

70.00%

Publisher:

Abstract:

A novel methodology is proposed for the development of neural network models for complex engineering systems exhibiting nonlinearity. The method performs neural network modelling by first establishing some fundamental nonlinear functions from a priori engineering knowledge, which are then constructed and coded into appropriate chromosome representations. Given a suitable fitness function, and using evolutionary approaches such as genetic algorithms, a population of chromosomes evolves over a number of generations to finally produce the neural network model that best fits the system data. The objective is to improve the transparency of the neural networks, i.e. to produce physically meaningful models.
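The chromosome-encoding idea can be sketched as follows: each bit of the chromosome selects one a-priori nonlinear basis function, and a small genetic algorithm searches for the subset that best fits the data. Everything here (the basis set, operators and parameters) is illustrative, a greatly simplified stand-in for the paper's method:

```python
import math
import random

# Candidate basis functions assumed known a priori (illustrative set).
BASIS = [math.sin, math.cos, lambda x: x, lambda x: x * x]

def predict(mask, x):
    """Model output: sum of the basis functions switched on by the chromosome."""
    return sum(f(x) for f, on in zip(BASIS, mask) if on)

def fitness(mask, data):
    """Negative sum of squared errors over (x, y) samples (higher is better)."""
    return -sum((predict(mask, x) - y) ** 2 for x, y in data)

def evolve(data, pop_size=20, gens=40, seed=0):
    """Tiny generational GA over bit-mask chromosomes with one-point
    crossover, bit-flip mutation and elitist truncation selection."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in BASIS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda m: fitness(m, data), reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(BASIS))
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:  # occasional bit-flip mutation
                child[rng.randrange(len(BASIS))] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda m: fitness(m, data))
```

Because the surviving chromosome names which physical basis functions enter the model, the result stays interpretable, which is the transparency argument above.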

Relevance:

70.00%

Publisher:

Abstract:

Ready-to-eat (RTE) foods can be consumed with minimal or no further preparation; their processing is complex, involving thorough decontamination processes, owing to their composition of mixed ingredients. Compared with conventional preservation technologies, novel processing technologies can enhance the safety and quality of these complex products by reducing the risk from pathogens and/or by preserving related health-promoting compounds. These novel technologies can be divided into two categories: thermal and non-thermal. As a non-thermal treatment, High Pressure Processing is a very promising methodology that can be applied even to already-packaged RTE foods. A new “volumetric” microwave heating technology is an interesting cooking and decontamination method applied directly to foods. Cold Plasma technology is a potential substitute for chlorine washing in fresh vegetable decontamination. Ohmic heating is a heating method applicable to viscous products as well as meat products. Producers of RTE foods have to make challenging decisions, from the choice of ingredient suppliers to the distribution chain, taking into account not only cost but also the benefits and the safety and quality of the food products. Novel processing technologies can be a valuable yet large investment for many SME food manufacturers, and they need supporting data to make adequate decisions. Within the FP7 Cooperation programme funded by the European Commission, the STARTEC project aims to develop an IT decision-support tool to help food business operators in their risk assessment and future decision-making when producing RTE foods with or without novel preservation technologies.

Relevância:

70.00% 70.00%

Publicador:

Resumo:

A novel methodology is described in which transcriptomics is combined with the measurement of bread-making quality and other agronomic traits for wheat genotypes grown in different environments (wet and cool or hot and dry conditions) to identify transcripts associated with these traits. Seven doubled haploid lines from the Spark × Rialto mapping population were selected to be matched for development and known alleles affecting quality. These were grown in polytunnels with different environments applied 14 days post-anthesis, and the whole experiment was repeated over 2 years. Transcriptomics using the wheat Affymetrix chip was carried out on whole caryopsis samples at two stages during grain filling. Transcript abundance was correlated with the traits for approximately 400 transcripts. About 30 of these were selected as being of most interest, and markers were derived from them and mapped using the population. Expression was identified as being under cis control for 11 of these and under trans control for 18. These transcripts are candidates for involvement in the biological processes which underlie genotypic variation in these traits.
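The core transcript-trait correlation screen can be illustrated with a toy example on synthetic data (the real study used Affymetrix expression profiles and measured quality traits; sample counts, the driving transcript, and the 0.8 threshold below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 28 samples (e.g. genotypes x environments x years),
# expression of 500 transcripts, and one quality trait per sample.
n_samples, n_transcripts = 28, 500
expr = rng.normal(size=(n_samples, n_transcripts))
# Make transcript 42 drive the trait, plus a little noise.
trait = 0.9 * expr[:, 42] + 0.1 * rng.normal(size=n_samples)

def trait_correlations(expr, trait):
    """Pearson correlation of each transcript's abundance with the trait."""
    e = (expr - expr.mean(axis=0)) / expr.std(axis=0)
    t = (trait - trait.mean()) / trait.std()
    return e.T @ t / len(t)          # one r value per transcript

r = trait_correlations(expr, trait)
# Transcripts strongly associated with the trait become marker candidates.
candidates = np.where(np.abs(r) > 0.8)[0]
```

In the study itself this screen was followed by marker derivation and genetic mapping to separate cis from trans control; the sketch covers only the correlation step.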

Relevância:

70.00% 70.00%

Publicador:

Resumo:

The paper presents the main elements of a project entitled ICT-Emissions, which aims to develop a novel methodology for evaluating the impact of ICT-related measures on mobility, vehicle energy consumption and the CO2 emissions of vehicle fleets at the local scale, in order to promote wider application of the most appropriate ICT measures. The proposed methodology combines traffic and emission modelling at the micro and macro scales, linked through interfaces and submodules that will be specifically designed and developed. A number of sources are available to the consortium for the necessary input data, and experimental campaigns are offered to fill gaps in information on traffic and emission patterns. The application of the methodology will be demonstrated using commercially available software; however, it is developed in such a way as to enable its implementation with a variety of emission and traffic models. Particular emphasis is given to (a) the correct estimation of driver behaviour as a result of traffic-related ICT measures, (b) the coverage of a large number of current vehicle technologies, including ICT systems, and (c) near-future technologies such as hybrid, plug-in hybrid, and electric vehicles. The innovative combination of traffic, driver and emission models produces a versatile toolbox that can simulate the impact on energy and CO2 of infrastructure measures (traffic management, dynamic traffic signs, etc.), driver-assistance systems and eco-solutions (speed/cruise control, start/stop systems, etc.), or a combination of measures (cooperative systems). The methodology is validated by application in the Turin area, and its capacity is further demonstrated by application in real-world conditions in Madrid and Rome.
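The micro-scale coupling can be caricatured as follows: a traffic model supplies 1 Hz speed traces, and an emission module maps instantaneous speed to a CO2 rate. The emission curve and the numbers below are purely illustrative, not one of the project's calibrated models:

```python
def co2_rate_g_per_s(speed_kmh):
    """Illustrative U-shaped CO2 rate: high when idling and at high speed."""
    if speed_kmh < 1.0:
        return 1.2                                   # idling penalty
    return 0.5 + 0.0004 * (speed_kmh - 60.0) ** 2    # minimum near 60 km/h

def trip_co2_g(speed_profile_kmh):
    """Integrate the per-second emission rate over a 1 Hz speed trace."""
    return sum(co2_rate_g_per_s(v) for v in speed_profile_kmh)

# Effect of a hypothetical ICT measure (e.g. smoother cruise control):
# the same ten seconds of driving with less stop-and-go.
stop_and_go = [0, 0, 30, 60, 10, 0, 50, 90, 20, 0]
smoothed = [30, 32, 34, 36, 38, 40, 40, 38, 36, 34]
```

Comparing `trip_co2_g(smoothed)` with `trip_co2_g(stop_and_go)` shows the kind of measure-level CO2 comparison the toolbox automates at fleet scale.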

Relevância:

70.00% 70.00%

Publicador:

Resumo:

The main goal of this project was to develop an efficient methodology allowing rapid access to structurally diverse scaffolds decorated with various functional groups. Initially, we discovered and subsequently developed an experimentally straightforward, high-yielding photoinduced conversion of readily accessible, diverse starting materials into polycyclic aldehydes and their (hemi)acetals decorated with various pendants. The two-step sequence, involving the Diels-Alder addition of heterocyclic chalcones and other benzoyl ethylenes to a variety of dienes, followed by the Paternò-Büchi reaction, was described as an alkene-carbonyl oxametathesis. This methodology offers a rapid increase in the molecular complexity and diversity of the target scaffolds. To develop this novel methodology further and explore its generality, we turned our attention to the Diels-Alder adducts of various chromones. We discovered that these adducts are capable of photoinduced alkene-arene [2+2] cycloaddition, producing dienes that can either dimerize or be carried through a double-tandem [4π+2π]·[2π+2π]·[4π+2π]·[2π+2π] synthetic sequence followed by an acid-catalyzed oxametathesis, leading to a rapid expansion of molecular complexity over a few experimentally simple steps. Because oxametathesis had previously been observed primarily in aromatic oxetanes, we prepared model aliphatic oxetanes bearing a conformationally unconstrained ("flexible") methyl group from the Diels-Alder adducts of cyclohexadiene or cyclopentadiene with methyl vinyl ketone. Upon addition of an acid, the expected oxametathesis occurred with results similar to those observed in the aromatic series, proving the generality of this approach. We also synthesized polycyclic oxetanes derived from the Diels-Alder adducts of cyclic ketones.
This not only gave us access to remarkably strained oxetane systems; the mechanism of their protolytic ring opening also provided considerable insight into how strain affects reactivity. Additionally, we discovered that although the model hetero-Diels-Alder adducts did not undergo [2+2] cycloaddition, both the exo- and endo-sulfa-Diels-Alder products were nonetheless photochemically active, and various products with defined stereochemistry could be produced upon photolysis. Finally, we developed an approach to the encoding and screening of solution-phase libraries based on the photorelease of externally sensitized photolabile tags. The encoding tags are released into solution only when a binding event occurs between the ligand and the receptor, which is equipped with an electron-transfer sensitizer. The released tags are analyzed in solution, revealing the identity of the lead ligand or narrowing the range of potential leads.

Relevância:

70.00% 70.00%

Publicador:

Resumo:

This work has demonstrated for the first time that a single, difunctional RAFT agent can be used in conjunction with a radical initiator to obtain a desired Mn and PDI with controlled rates of polymerization. Simulations were used not only to verify the model but also to provide a predictive tool for generating other MWDs. It was also shown that all the MWDs prepared in this work could be translated to higher molecular weights through chain-extension experiments with little or no compromise in the control of end-group functionality. The ratio of monofunctional to difunctional S=C(CH2Ph)S- end groups, XP and XPX (where X = S=C(CH2Ph)S-), can be controlled simply by changing the concentration of the initiator, AIBN. Importantly, the amount of dead polymer is extremely low and fulfils the criterion suggested by Szwarc (Nature, 1956) that, to meet living requirements, nonfunctional polymeric species formed by side reactions in the process should be undetectable by analytical techniques. In addition, this novel methodology will allow the synthesis of AB, ABA, and statistical multiblock copolymers with predetermined ratios in a one-pot reaction.

Relevância:

70.00% 70.00%

Publicador:

Resumo:

This thesis describes the design and development of an eye alignment/tracking system that allows self-alignment of the eye's optical axis with a measurement axis. Eye alignment is an area of research largely overlooked, yet it is a fundamental requirement for the acquisition of clinical data from the eye. New trends in the ophthalmic market towards portable hand-held apparatus, and the application of ophthalmic measurements in areas other than vision care, have brought eye alignment under new scrutiny. Ophthalmic measurements taken with hand-held devices, without a clinician present, require alignment under an entirely new set of circumstances, demanding a novel solution. To solve this problem, the research drew upon eye-tracking technology to monitor the eye and a principle of self-alignment to perform alignment correction. A hand-held device naturally lends itself to the patient performing the alignment, so a technique was designed to communicate raw eye-tracking data to the user in a manner that allows the user to make the necessary corrections. The proposed technique is a novel methodology in which misalignment with the eye's optical axis can be quantified, corrected and evaluated. The technique uses Purkinje image tracking to monitor the eye's movement as well as the orientation of the optical axis. The use of two sets of Purkinje images allows quantification of the eye's physical parameters needed for accurate Purkinje image tracking, negating the need for prior anatomical data. An instrument employing the methodology was subsequently prototyped and validated, allowing a sample group to achieve self-alignment of their optical axis with an imaging axis within 16.5-40.8 s and with a rotational precision of 0.03-0.043° (95% confidence intervals). By encompassing all these factors, the technique facilitates self-alignment from an unaligned position on the visual axis to an aligned position on the optical axis.
The consequence of this is that ophthalmic measurements, specifically pachymetric measurements, can be made in the absence of an optician, allowing the use of ophthalmic instrumentation and measurements in health professions other than vision care.
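A minimal sketch of the feedback idea, assuming a linear calibration between the separation of two Purkinje reflexes and the rotation of the optical axis (the gain, tolerance, and function names below are invented for illustration and are not the thesis's calibration):

```python
import math

# Assumed calibration constant relating reflex offset (mm) to rotation (deg).
GAIN_DEG_PER_MM = 0.3

def misalignment_deg(p1, p4):
    """Estimated rotation of the optical axis from the offset (in mm)
    between the first (P1) and fourth (P4) Purkinje reflections."""
    dx, dy = p4[0] - p1[0], p4[1] - p1[1]
    return GAIN_DEG_PER_MM * math.hypot(dx, dy)

def aligned(p1, p4, tol_deg=0.05):
    """Feedback signal telling the user when self-alignment is achieved."""
    return misalignment_deg(p1, p4) <= tol_deg
```

In the actual instrument the raw tracking data is communicated to the user so they can drive this misalignment toward zero themselves; the sketch only shows the quantify-and-threshold step.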