803 results for nature-based


Relevance: 30.00%

Abstract:

Normal grain growth of calcite was investigated by combining grain size analysis of calcite across the contact aureole of the Adamello pluton with grain growth modeling based on a thermal model of the pluton's surroundings. In an unbiased model system, i.e., with only location-dependent variations in the temperature-time path, 2/3 and 1/3 of grain growth occur during prograde and retrograde metamorphism, respectively, at all locations. In contrast to this idealized situation, three groups can be distinguished in the field example, characterized by variations in their grain size versus temperature relationships. Group I occurs at low temperatures, where the grain size remains constant because nano-scale second-phase particles of organic origin inhibit grain growth in the calcite aggregates under these conditions. In the presence of an aqueous fluid, these second phases decay at a temperature of about 350 °C, enabling the onset of grain growth in calcite. In the following growth period, fluid-enhanced group II growth and slower group III growth occur. Group II is characterized by a continuous and intense grain size increase with temperature, while group III growth decreases with temperature. None of the observed trends correlate with experimentally based grain growth kinetics, probably due to differences between nature and experiment which have not yet been investigated (e.g., porosity, second phases). Therefore, grain growth modeling was used to iteratively improve the correlation between measured and modeled grain sizes by optimizing the activation energy (Q), pre-exponential factor (k0) and grain size exponent (n). For n = 2, Q = 350 kJ/mol with k0 = 1.7×10²¹ μmⁿ s⁻¹ and Q = 35 kJ/mol with k0 = 2.5×10⁻⁵ μmⁿ s⁻¹ were obtained for groups II and III, respectively.
With respect to future work, field-data based grain growth modeling might be a promising tool for investigating the influences of secondary effects like porosity and second phases on grain growth in nature, and to unravel differences between nature and experiment.
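The modeling described above rests on the standard normal grain growth law, d^n − d0^n = k0·exp(−Q/RT)·t, integrated along a temperature-time path. A minimal sketch of that integration follows; the starting grain size, the isothermal 350 °C path, and the 1 Myr duration are invented for illustration, while n, Q and k0 are the group II values quoted above.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def grow(d0_um, n, k0, Q, path):
    """Integrate the normal grain growth law d^n - d0^n = k0*exp(-Q/RT)*t
    over a temperature-time path given as (T_kelvin, dt_seconds) steps."""
    dn = d0_um ** n
    for T, dt in path:
        dn += k0 * math.exp(-Q / (R * T)) * dt
    return dn ** (1.0 / n)

# Group II parameters from the abstract: n=2, Q=350 kJ/mol, k0=1.7e21 um^n/s.
# The isothermal 1 Myr path at 350 C below is a hypothetical illustration.
Myr = 3.156e13  # seconds
d = grow(d0_um=10.0, n=2, k0=1.7e21, Q=350e3, path=[(350.0 + 273.15, Myr)])
print(round(d, 1))
```

A real run would discretize the pluton's thermal model into many (T, dt) steps rather than a single isothermal segment.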

Relevance: 30.00%

Abstract:

The immune system is a complex biological system with a highly distributed, adaptive and self-organising nature. This paper presents an Artificial Immune System (AIS) that exploits some of these characteristics and is applied to the task of film recommendation by Collaborative Filtering (CF). Natural evolution, and in particular the immune system, has not been designed for classical optimisation. However, for this problem we are not interested in finding a single optimum; rather, we intend to identify a subset of good matches on which recommendations can be based. It is our hypothesis that an AIS built on two central aspects of the biological immune system will be an ideal candidate to achieve this: antigen-antibody interaction for matching, and idiotypic antibody-antibody interaction for diversity. Computational results are presented in support of this conjecture and compared to those found by other CF techniques.
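The two immune mechanisms named in the abstract can be sketched in a few lines. The Pearson matching and greedy suppression below are a simplified illustration only: the paper's AIS evolves antibody concentrations via idiotypic network dynamics rather than a greedy pool, and the rating data here (0 = unrated) are invented.

```python
from math import sqrt

def pearson(a, b):
    """Antigen-antibody style matching: correlation on co-rated films
    (a rating of 0 means 'not rated')."""
    common = [i for i in range(len(a)) if a[i] and b[i]]
    if len(common) < 2:
        return 0.0
    ma = sum(a[i] for i in common) / len(common)
    mb = sum(b[i] for i in common) / len(common)
    num = sum((a[i] - ma) * (b[i] - mb) for i in common)
    den = sqrt(sum((a[i] - ma) ** 2 for i in common)
               * sum((b[i] - mb) ** 2 for i in common))
    return num / den if den else 0.0

def select_pool(target, users, size=2, penalty=0.5):
    """Greedy stand-in for idiotypic suppression: reward matching the
    target, penalise similarity to antibodies already in the pool."""
    pool, candidates = [], list(users)
    while candidates and len(pool) < size:
        best = max(candidates,
                   key=lambda u: pearson(target, u)
                   - penalty * sum(pearson(u, p) for p in pool))
        pool.append(best)
        candidates.remove(best)
    return pool

target = [5, 4, 0, 1]
users = [[5, 4, 0, 1], [5, 4, 0, 2], [1, 2, 0, 5]]
pool = select_pool(target, users)
print(len(pool))
```

The suppression term is what distinguishes this from plain nearest-neighbour CF: a candidate very similar to an already-selected neighbour is discounted, keeping the recommendation pool diverse.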

Relevance: 30.00%

Abstract:

The immune system is a complex biological system with a highly distributed, adaptive and self-organising nature. This paper presents an artificial immune system (AIS) that exploits some of these characteristics and is applied to the task of film recommendation by collaborative filtering (CF). Natural evolution, and in particular the immune system, has not been designed for classical optimisation. However, for this problem we are not interested in finding a single optimum; rather, we intend to identify a subset of good matches on which recommendations can be based. It is our hypothesis that an AIS built on two central aspects of the biological immune system will be an ideal candidate to achieve this: antigen-antibody interaction for matching, and antibody-antibody interaction for diversity. Computational results are presented in support of this conjecture and compared to those found by other CF techniques.

Relevance: 30.00%

Abstract:

The immune system is a complex biological system with a highly distributed, adaptive and self-organising nature. This paper presents an artificial immune system (AIS) that exploits some of these characteristics and is applied to the task of film recommendation by collaborative filtering (CF). Natural evolution, and in particular the immune system, has not been designed for classical optimisation. However, for this problem we are not interested in finding a single optimum; rather, we intend to identify a subset of good matches on which recommendations can be based. It is our hypothesis that an AIS built on two central aspects of the biological immune system will be an ideal candidate to achieve this: antigen-antibody interaction for matching, and antibody-antibody interaction for diversity. Computational results are presented in support of this conjecture and compared to those found by other CF techniques. Notes: Uwe Aickelin, University of the West of England, Coldharbour Lane, Bristol, BS16 1QY, UK

Relevance: 30.00%

Abstract:

The immune system is a complex biological system with a highly distributed, adaptive and self-organising nature. This paper presents an Artificial Immune System (AIS) that exploits some of these characteristics and is applied to the task of film recommendation by Collaborative Filtering (CF). Natural evolution, and in particular the immune system, has not been designed for classical optimisation. However, for this problem we are not interested in finding a single optimum; rather, we intend to identify a subset of good matches on which recommendations can be based. It is our hypothesis that an AIS built on two central aspects of the biological immune system will be an ideal candidate to achieve this: antigen-antibody interaction for matching, and idiotypic antibody-antibody interaction for diversity. Computational results are presented in support of this conjecture and compared to those found by other CF techniques.

Relevance: 30.00%

Abstract:

Accurate estimation of road pavement geometry and layer material properties through the use of proper nondestructive testing and sensor technologies is essential for evaluating a pavement's structural condition and determining options for maintenance and rehabilitation. For these purposes, pavement deflection basins produced by the nondestructive Falling Weight Deflectometer (FWD) test are commonly used. The FWD test drops weights on the pavement to simulate traffic loads and measures the resulting pavement deflection basins. Backcalculation of pavement geometry and layer properties from FWD deflections is a difficult inverse problem, and solving it with conventional mathematical methods is often challenging due to its ill-posed nature. In this dissertation, a hybrid algorithm was developed to seek robust and fast solutions to this inverse problem. The algorithm is based on soft computing techniques, mainly Artificial Neural Networks (ANNs) and Genetic Algorithms (GAs), together with numerical analysis techniques to properly simulate the geomechanical system. The widely used layered pavement analysis program ILLI-PAVE was employed in the analyses of various flexible pavement types, including full-depth asphalt and conventional flexible pavements, built on either lime-stabilized soils or untreated subgrade. Nonlinear properties of the subgrade soil and the base course aggregate as transportation geomaterials were also considered. A computer program, the Soft Computing Based System Identifier (SOFTSYS), was developed. In SOFTSYS, ANNs were used as surrogate models to provide faster solutions than the nonlinear finite element program ILLI-PAVE. The deflections obtained from FWD tests in the field were matched with the predictions obtained from the numerical simulations to develop SOFTSYS models.
The solution to the inverse problem for multi-layered pavements is computationally hard to achieve and is often not feasible due to field variability and the quality of the collected data. The primary difficulty in the analysis arises from the substantial increase in the degree of non-uniqueness of the mapping from the pavement layer parameters to the FWD deflections. The insensitivity of the deflections to some layer properties lowered SOFTSYS model performance. Still, SOFTSYS models were shown to work effectively with the synthetic data obtained from ILLI-PAVE finite element solutions. In general, SOFTSYS solutions very closely matched the ILLI-PAVE mechanistic pavement analysis results. For SOFTSYS validation, field-collected FWD data were successfully used to predict pavement layer thicknesses and layer moduli of in-service flexible pavements. Some of the very promising SOFTSYS results indicated average absolute errors on the order of 2%, 7%, and 4% for the Hot Mix Asphalt (HMA) thickness estimation of full-depth asphalt pavements, full-depth pavements on lime-stabilized soils, and conventional flexible pavements, respectively. The field validations of SOFTSYS data also produced meaningful results. The thickness data obtained from Ground Penetrating Radar testing matched reasonably well with predictions from SOFTSYS models. The differences observed in the HMA and lime-stabilized soil layer thicknesses were attributed to deflection data variability from FWD tests. The backcalculated asphalt concrete layer thickness results matched better in the case of full-depth asphalt flexible pavements built on lime-stabilized soils than for conventional flexible pavements. Overall, SOFTSYS was capable of producing reliable thickness estimates despite the variability of field-constructed asphalt layer thicknesses.
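The surrogate-plus-search structure of the backcalculation can be sketched as follows. The closed-form "surrogate" below is an invented placeholder for the trained ANN (its coefficients, the two-sensor basin, and the parameter ranges are all hypothetical), and a coarse grid search stands in for the GA; only the overall idea, matching predicted to measured deflections, follows the text.

```python
# Toy sketch of the SOFTSYS idea: a fast surrogate replaces the ILLI-PAVE
# finite element run, and a search matches predicted to measured deflections.

def surrogate(thickness, modulus):
    # Hypothetical basin readings (mils) for (HMA thickness in, modulus ksi);
    # coefficients invented for illustration, NOT the trained ANN.
    d0 = 120.0 / (thickness * modulus ** 0.5)
    d1 = 60.0 / (thickness ** 0.5 * modulus)
    return d0, d1

def backcalculate(measured):
    """Grid-search stand-in for the GA layer-parameter search."""
    best, best_err = None, float("inf")
    for i in range(181):            # thickness 2.0 .. 20.0 in
        for j in range(451):        # modulus   5.0 .. 50.0 ksi
            t, m = 2.0 + 0.1 * i, 5.0 + 0.1 * j
            p = surrogate(t, m)
            err = (p[0] - measured[0]) ** 2 + (p[1] - measured[1]) ** 2
            if err < best_err:
                best, best_err = (t, m), err
    return best

# invert synthetic "measured" deflections generated from known parameters
t, m = backcalculate(surrogate(10.0, 20.0))
print(t, m)
```

The non-uniqueness discussed above shows up here as flat directions in the error surface: when a parameter barely influences the basin, many (t, m) pairs produce nearly identical deflections.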

Relevance: 30.00%

Abstract:

With the ever-growing number of connected sensors (IoT), making sense of sensed data becomes even more important. Pervasive computing is a key enabler for sustainable solutions; prominent examples are smart energy systems and decision support systems. A key feature of pervasive systems is situation awareness, which allows a system to thoroughly understand its environment. It is based on external interpretation of data and thus relies on expert knowledge. Due to the distinct nature of situations in different domains and applications, the development of situation-aware applications remains a complex process. This thesis is concerned with a general framework for situation awareness which simplifies the development of applications. It is based on the Situation Theory Ontology, which provides a foundation for situation modelling and allows knowledge reuse. Concepts of Situation Theory are mapped to Context Space Theory, which is used for situation reasoning; situation spaces in the context space are generated automatically from the defined knowledge. For the acquisition of sensor data, the IoT standards O-MI/O-DF are integrated into the framework. These allow peer-to-peer data exchange between data publishers and the proposed framework, and thus a platform-independent subscription to sensed data. The framework is then applied to a use case to reduce food waste. The use case validates the applicability of the framework and furthermore serves as a showcase for a pervasive system contributing to sustainability goals. Leading institutions, e.g. the United Nations, stress the need for a more resource-efficient society and acknowledge the capability of ICT systems. The use case scenario is based on a smart neighbourhood in which the system recommends the most efficient use of food items through situation awareness to reduce food waste at the consumption stage.
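The Context Space Theory style of reasoning described above can be sketched in miniature: a situation space is a region in a space of context attributes, and a situation is recognised when the current sensor state falls inside it. The attribute names, ranges, and the axis-aligned-box simplification below are invented for the food waste scenario; the thesis's situation spaces are derived from the ontology, not hand-coded.

```python
# Hypothetical situation spaces as axis-aligned boxes over context attributes.
SITUATIONS = {
    "food_at_risk": {            # acceptable attribute ranges (lo, hi)
        "fridge_temp_c": (8.0, 30.0),
        "days_to_expiry": (0.0, 2.0),
    },
    "food_ok": {
        "fridge_temp_c": (0.0, 8.0),
        "days_to_expiry": (2.0, 365.0),
    },
}

def recognise(state, situations=SITUATIONS):
    """Return the names of all situation spaces containing the state.
    Missing attributes map to NaN, which fails every range check."""
    hits = []
    for name, box in situations.items():
        if all(lo <= state.get(attr, float("nan")) <= hi
               for attr, (lo, hi) in box.items()):
            hits.append(name)
    return hits

result = recognise({"fridge_temp_c": 12.0, "days_to_expiry": 1.0})
print(result)
```

In the full framework, the state dictionary would be populated by O-MI/O-DF subscriptions rather than hard-coded values.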

Relevance: 30.00%

Abstract:

Single-cell functional proteomics assays can connect genomic information to biological function through quantitative and multiplex protein measurements. Tools for single-cell proteomics have developed rapidly over the past five years and are providing unique opportunities. This thesis describes an emerging microfluidics-based toolkit for single-cell functional proteomics, focusing on the development of single-cell barcode chips (SCBCs) with applications in fundamental and translational cancer research.

The discussion begins with a microchip designed to simultaneously quantify a panel of secreted, cytoplasmic and membrane proteins from single cells; it is the prototype for subsequent proteomic microchips of more sophisticated design used in preclinical cancer research or clinical applications. The SCBCs are a highly versatile and information-rich tool for single-cell functional proteomics. They are based upon isolating individual cells, or defined numbers of cells, within microchambers, each of which is equipped with a large antibody microarray (the barcode), with between a few hundred and ten thousand microchambers included within a single microchip. Functional proteomics assays at single-cell resolution yield unique pieces of information that significantly shape thinking in cancer research. An in-depth discussion of the analysis and interpretation of this unique information, such as functional protein fluctuations and protein-protein correlative interactions, follows.

The SCBC is a powerful tool for resolving the functional heterogeneity of cancer cells. It has the capacity to extract a comprehensive picture of the signal transduction network from single tumor cells and thus provides insight into the effect of targeted therapies on protein signaling networks. We demonstrate this point by applying the SCBCs to investigate three isogenic cell lines of glioblastoma multiforme (GBM).

The cancer cell population is highly heterogeneous, with high-amplitude fluctuations at the single-cell level, which in turn underpin the robustness of the entire population. The concept of a stable population existing in the presence of random fluctuations is reminiscent of many physical systems that are successfully understood using statistical physics. Thus, tools derived from that field can probably be applied to using fluctuations to determine the nature of signaling networks. In the second part of the thesis, we focus on such a case, using thermodynamics-motivated principles to understand cancer cell hypoxia: single-cell proteomics assays coupled with a quantitative version of Le Chatelier's principle derived from statistical mechanics yield detailed and surprising predictions, which were found to be correct in both cell line and primary tumor models.

The third part of the thesis demonstrates the application of this technology in preclinical cancer research to study GBM cancer cell resistance to molecular targeted therapy. Physical approaches to anticipating therapy resistance and identifying effective therapy combinations are discussed in detail. Our approach is based upon elucidating the signaling coordination within the phosphoprotein signaling pathways that are hyperactivated in human GBMs, and interrogating how that coordination responds to perturbation by a targeted inhibitor. Strongly coupled protein-protein interactions constitute most signaling cascades; a physical analogy of such a system is the strongly coupled atom-atom interactions in a crystal lattice. Just as atomic interactions can be decomposed into a series of independent normal vibrational modes, a simplified picture of signaling network coordination can be achieved by diagonalizing protein-protein correlation or covariance matrices, decomposing the pairwise correlative interactions into a set of distinct linear combinations of signaling proteins (i.e., independent signaling modes). By doing so, two independent signaling modes were resolved, one associated with mTOR signaling and a second with ERK/Src signaling, which in turn allowed us to anticipate resistance and to design effective combination therapies, as well as to identify those therapies and therapy combinations that will be ineffective. We validated our predictions in mouse tumor models, and all predictions were borne out.
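The mode decomposition described above amounts to an eigendecomposition of the protein-protein covariance matrix. The sketch below illustrates this on synthetic data: two hidden signaling axes each drive two of four "measured" phosphoproteins, and diagonalization recovers two dominant independent modes. All data and dimensions are invented; the thesis applies this to measured SCBC phosphoprotein panels.

```python
import numpy as np

# Synthetic single-cell data: two hidden signaling axes, four measured proteins.
rng = np.random.default_rng(0)
n = 200
mtor = rng.normal(size=n)            # hidden "mTOR-axis" activity per cell
erk = rng.normal(size=n)             # hidden "ERK/Src-axis" activity per cell
noise = 0.1 * rng.normal(size=(4, n))
# two proteins driven by each hidden axis
X = np.vstack([mtor, 0.9 * mtor, erk, 0.8 * erk]) + noise

C = np.cov(X)                        # 4x4 protein-protein covariance matrix
w, V = np.linalg.eigh(C)             # eigenvalues ascending, eigenvectors in V
print(np.round(w[::-1], 2))          # two large modes, two near-zero ones
```

The two large eigenvalues correspond to the two independent signaling modes; the near-zero remainder is measurement noise, which is the separation that makes the decomposition useful for anticipating how the network responds to an inhibitor.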

In the last part, some preliminary results on the clinical translation of single-cell proteomics chips are presented. The successful demonstration of our work on human-derived xenografts provides the rationale to extend our current work into the clinic. It will enable us to interrogate GBM tumor samples in a way that could potentially yield a straightforward, rapid interpretation, so that we can give therapeutic guidance to the attending physicians within a clinically relevant time scale. The technical challenges of clinical translation are presented, and our solutions to address them are discussed as well. A clinical case study then follows, in which preliminary data collected from a pediatric GBM patient bearing an EGFR-amplified tumor are presented to demonstrate the general protocol and workflow of the proposed clinical studies.

Relevance: 30.00%

Abstract:

Ligand-protein docking is an optimization problem based on predicting the position of the ligand with the lowest binding energy in the active site of the receptor. Molecular docking problems are traditionally tackled with single-objective as well as multi-objective approaches to minimize the binding energy. In this paper, we propose a novel multi-objective formulation that considers two objectives to evaluate the quality of ligand-protein interactions: the Root Mean Square Deviation (RMSD) difference in the coordinates of ligands, and the binding (intermolecular) energy. To determine the kind of Pareto front approximations that can be obtained, we selected a set of representative multi-objective algorithms: NSGA-II, SMPSO, GDE3, and MOEA/D. Their performance was assessed by applying two main quality indicators intended to measure the convergence and diversity of the fronts. In addition, a comparison with LGA, a reference single-objective evolutionary algorithm for molecular docking (AutoDock), was carried out. In general, SMPSO shows the best overall results in terms of energy and RMSD (values lower than 2 Å for successful docking results). This new multi-objective approach shows an improvement over ligand-protein docking predictions that could be promising in in silico docking studies to select new anticancer compounds for therapeutic targets that are multidrug resistant.
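The bi-objective view used above can be made concrete with a Pareto dominance filter: each docking pose is scored by (intermolecular energy, RMSD), both to be minimised, and the algorithms approximate the set of non-dominated poses. The pose values below are invented for illustration.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(poses):
    """Keep the poses not dominated by any other pose."""
    return [p for p in poses
            if not any(dominates(q, p) for q in poses if q is not p)]

# hypothetical poses as (energy kcal/mol, RMSD in angstroms), both minimised
poses = [(-9.1, 1.2), (-8.5, 0.8), (-9.5, 3.5), (-8.0, 1.5), (-9.1, 0.9)]
front = sorted(pareto_front(poses))
print(front)
```

Note that the front retains the low-RMSD pose even though a lower-energy pose exists, which is exactly the trade-off the single-objective LGA cannot expose.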

Relevance: 30.00%

Abstract:

Current institutions, research, and legislation have not yet been sufficient to achieve the level of Nature conservation that society requires. One of the reasons explaining this relative failure is the lack of incentives motivating local individuals, and Nature users in general, to adopt behaviour compliant with sustainable uses of Nature. Economists believe that, from the welfare point of view, pricing is the most efficient way to make economic actors take more environmentally friendly decisions. In this paper we discuss how efficient pricing the recreational use of a specific natural area can be in terms of maximising welfare. The main conservation issues involved in pricing recreational use, as well as the conditions under which pricing will be an efficient and fair instrument for the natural area, are outlined. We conclude two things. First, from the point of view of rational utilitarian economic behaviour, economic efficiency can only be achieved if the natural area has positive and known marginal recreation costs over the relevant range of the Marshallian recreation demand curve, and if managing the price system is not costly. Second, in order to guarantee equity for the different types of visitors when charging the fee, it is necessary to consider differential price systems. We show that even if marginal recreation costs exist but are unknown, pricing recreation is still an equity instrument and a useful one from the conservation perspective, as we demonstrate through an empirical application to the Portuguese National Park. An individual travel cost method approach is used to estimate the recreation price, set equal to the visitor's marginal willingness to pay for a day of visiting the national park. Although not efficient, under certain conditions this can be considered a fair pricing practice, because some of the negative recreation externalities will be internalised. We also discuss the conditions that guarantee equity in charging for the Portuguese case.
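The travel cost logic can be illustrated with a toy calculation. Under a semi-log demand specification ln(visits) = a + b·cost, the expected consumer surplus per trip is −1/b, one common candidate for a willingness-to-pay based access price. The visit data, the semi-log form, and the plain OLS fit below are invented for illustration and are not the paper's estimation.

```python
import math

# invented individual data: (travel cost in EUR, annual visits)
data = [(5, 12), (10, 9), (20, 6), (40, 3), (60, 2)]

# ordinary least squares of ln(visits) on cost
n = len(data)
mx = sum(c for c, _ in data) / n
my = sum(math.log(v) for _, v in data) / n
b = (sum((c - mx) * (math.log(v) - my) for c, v in data)
     / sum((c - mx) ** 2 for c, _ in data))

# semi-log travel cost model: consumer surplus per trip = -1/b
surplus_per_trip = -1.0 / b
print(round(surplus_per_trip, 1))
```

A fee set near this surplus estimate is what the abstract means by pricing at the visitor's marginal willingness to pay for a day of visiting.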

Relevance: 30.00%

Abstract:

Obstacle courses are an activity that brings a group of people together: teams are organized to navigate an established route during a set period of time, completing tasks (clues, riddles, or challenges) to meet a defined goal. Rules and safety norms are explained to all participants; however, they are not informed of the location of the clues, riddles, or challenges. The following should be considered when organizing an obstacle course: the objective, the topic to be developed, the location, the materials needed, the clues, riddles, or challenges that may be included, and how to verify that all teams pass the checkpoints. As in any other activity with a touch of competitiveness, fair play and respect should be above any interest. If, for any reason, one of the teams has an emergency, solidarity should prevail, and the activity can be used to teach values. An adventurous spirit is also essential in this activity: the desire for the unknown and the new challenges individuals and groups alike. This activity helps groups of friends, children, adults, families, etc. share a nice and healthy day together in contact with nature, recovering concepts such as cooperation, cleverness and, particularly, teamwork.

Relevance: 30.00%

Abstract:

Future power grids are envisioned to be serviced by heterogeneous arrangements of renewable energy sources. Due to the stochastic nature of these sources, energy storage distribution and management are pivotal in realizing microgrids serviced heavily by renewable energy assets. Identifying the response characteristics required to meet the operational requirements of a power grid is of great importance and must be illuminated in order to discern optimal hardware topologies. Hamiltonian Surface Shaping and Power Flow Control (HSSPFC) provides the tools to identify such characteristics. By using energy storage as actuation within the closed-loop controller, the response requirements may be identified while providing a decoupled controller solution. A DC microgrid servicing a fixed RC load through source- and bus-level storage managed by HSSPFC was realized in hardware. A procedure was developed to calibrate the DC microgrid architecture of this work against the reduced-order model used by the HSSPFC law. Storage requirements were examined through simulation and experimental testing. Bandwidth contributions of the feed-forward and PI components of the HSSPFC law are illuminated, and suggest that system losses must be well characterized to avoid additional overhead in storage allocations. The following work outlines the steps taken in realizing a DC microgrid and presents design considerations for system calibration and storage requirements per the closed-loop controls for future DC microgrids.
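The feed-forward plus PI structure mentioned above can be sketched with a minimal bus-voltage regulation loop: storage current is the actuator, the feed-forward term cancels the estimated load draw, and the PI term corrects the residual error. All values (capacitance, load, gains, setpoint) are invented, and this Euler simulation is an illustration of the control structure only, not the HSSPFC law itself.

```python
def simulate(steps=20000, dt=1e-5):
    """Regulate a DC bus capacitor feeding a resistive load using
    feed-forward (load cancellation) plus PI storage-current control."""
    C, R, v_ref = 1e-3, 5.0, 48.0   # bus cap (F), load (ohm), setpoint (V)
    kp, ki = 2.0, 200.0             # hypothetical PI gains
    v, integ = 40.0, 0.0            # start below the setpoint
    for _ in range(steps):
        err = v_ref - v
        integ += err * dt
        i_ff = v / R                            # feed-forward: cancel load
        i_storage = i_ff + kp * err + ki * integ  # storage actuation
        v += dt * (i_storage - v / R) / C         # bus capacitor dynamics
    return v

v_final = simulate()
print(round(v_final, 2))
```

With perfect load cancellation the PI terms only handle transients, which is why unmodeled losses (the point made in the abstract) translate directly into extra steady-state storage effort.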

Relevance: 30.00%

Abstract:

We organized an international campaign to observe the blazar 0716+714 in the optical band. The observations took place from February 24, 2009 to February 26, 2009. The global campaign was carried out by observers from more than sixteen countries and resulted in an extended light curve nearly seventy-eight hours long. The analysis and modeling of this light curve form the main work of this dissertation project. In the first part of this work, we present the time series and noise analyses of the data. The time series analysis utilizes discrete Fourier transform and wavelet analysis routines to search for periods in the light curve. We then present the results of the noise analysis, which is based on the idea that each microvariability curve is a realization of the same underlying stochastic noise processes in the blazar jet. Neither recurring periods nor random noise can successfully explain the observed optical fluctuations. Hence, in the second part, we propose and develop a new model to account for the microvariability we see in blazar 0716+714. We propose that the microvariability is due to emission from turbulent regions in the jet that are energized by the passage of relativistic shocks. Emission from each turbulent cell forms a pulse of emission which, when convolved with the other pulses, yields the observed light curve. We use the model to obtain estimates of the physical parameters of the emission regions in the jet.
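The discrete Fourier transform period search can be illustrated on a synthetic, evenly sampled light curve with a known injected period. The 6.5 h period, amplitudes, and sampling below are invented for the demonstration; real campaign data are unevenly sampled and would call for something like a Lomb-Scargle periodogram instead.

```python
import numpy as np

dt_hr = 0.1                           # sampling step (hours)
t = np.arange(0, 78, dt_hr)           # ~78 h span, as in the campaign curve
period_hr = 6.5                       # hypothetical injected period
flux = 1.0 + 0.05 * np.sin(2 * np.pi * t / period_hr)
flux += 0.01 * np.random.default_rng(1).normal(size=t.size)  # noise

# periodogram: power spectrum of the mean-subtracted light curve
power = np.abs(np.fft.rfft(flux - flux.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, d=dt_hr)          # cycles per hour
best = 1.0 / freqs[np.argmax(power[1:]) + 1]      # skip the zero frequency
print(round(best, 2))
```

The frequency resolution of such a periodogram is set by the total span (here 1/78 cycles per hour), which is why the nearly seventy-eight-hour baseline of the campaign matters for the period search.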

Relevance: 30.00%

Abstract:

Recent advances in electric and hybrid electric vehicles and rapid developments in electronic devices have increased the demand for high-power and high-energy-density lithium-ion batteries. Graphite (theoretical specific capacity: 372 mAh/g), used in commercial anodes, cannot meet these demands. Amorphous SnO2 anodes (theoretical specific capacity: 781 mAh/g) have been proposed as alternative anode materials, but these materials have poor conductivity, undergo a large volume change during charging and discharging, and suffer a large irreversible capacity loss, leading to poor cycle performance. To solve the issues related to SnO2 anodes, we propose to synthesize porous SnO2 composites using the electrostatic spray deposition technique. First, porous SnO2/CNT composites were fabricated, and the effects of the deposition temperature (200, 250, and 300 °C) and CNT content (10, 20, 30, and 40 wt%) on the electrochemical performance of the anodes were studied. Compared to pure SnO2 and pure CNT, the composite materials as anodes showed better discharge capacity and cyclability. A CNT content of 30 wt% and a deposition temperature of 250 °C were found to be the optimal conditions with regard to energy capacity, whereas the sample with 20 wt% CNT deposited at 250 °C exhibited good capacity retention. This can be ascribed to the porous nature of the anodes and the improvement in conductivity from the addition of CNT. Electrochemical impedance spectroscopy (EIS) studies were carried out to examine in detail the change in the surface film resistance with cycling. By fitting the EIS data to an equivalent circuit model, the values of the circuit components representing the surface film resistance were obtained. The higher the CNT content in the composite, the lower the change in surface film resistance at a given voltage upon cycling. The surface resistance increased with the depth of discharge and decreased slightly at the fully lithiated state. Graphene was also added to improve the performance of pure SnO2 anodes.
The composites heated at 280 °C showed better energy capacity and energy density. The specific capacities of the as-deposited and post-heat-treated samples were 534 and 737 mAh/g after 70 cycles. At the 70th cycle, the energy densities of the composites deposited at 195 °C and 280 °C were 1240 and 1760 Wh/kg, respectively, much higher than that of commercially used graphite electrodes (37.2-74.4 Wh/kg). Both SnO2/CNT- and SnO2/graphene-based composites, with improved energy densities and capacities over pure SnO2, can make a significant impact on the development of new batteries for electric vehicle and portable electronics applications.
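The energy density comparison above follows from simple arithmetic: gravimetric energy density (Wh/kg) is specific capacity (mAh/g) times average voltage (V). The 2.39 V average voltage below is inferred from the reported 737 mAh/g and 1760 Wh/kg pair, not stated in the text, and the 0.15 V graphite figure is an illustrative assumption consistent with the quoted 37.2-74.4 Wh/kg range.

```python
def energy_density(capacity_mah_g, avg_voltage):
    """Gravimetric energy density: mAh/g * V == Wh/kg (units cancel)."""
    return capacity_mah_g * avg_voltage

print(round(energy_density(737, 2.39)))   # ~ the reported 1760 Wh/kg composite
print(round(energy_density(372, 0.15)))   # graphite at an assumed 0.15 V
```

The comparison illustrates why a higher-capacity anode paired with a higher effective cell voltage multiplies, rather than merely adds to, the energy density gain.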

Relevance: 30.00%

Abstract:

Safety in civil aviation is increasingly important due to the increase in flight routes and their more challenging nature. As with other important aircraft systems, fuel level monitoring is a persistent technical challenge. The level sensors most frequently used in aircraft fuel systems are based on capacitive, ultrasonic and electric techniques; however, they suffer from intrinsic safety concerns in explosive environments, combined with issues relating to reliability and maintainability. In the last few years, optical fiber liquid level sensors (OFLLSs) have been reported to be safe and reliable and to present many advantages for aircraft fuel measurement. Different OFLLSs have been developed, such as the pressure type, float type, optical radar type, TIR type and side-leaking type. Amongst these, many OFLLSs based on fiber gratings have been demonstrated. However, these sensors have not been commercialized because they exhibit drawbacks such as low sensitivity, limited range, long-term instability, or limited resolution. In addition, any sensor that involves direct interaction of the optical field with the fuel (either by launching light into the fuel tank or via the evanescent field of a fiber-guided mode) must be able to cope with the potential build-up of contamination, often bacterial, on the optical surface. In this paper, a fuel level sensor based on microstructured polymer optical fiber Bragg gratings (mPOFBGs), including poly(methyl methacrylate) (PMMA) and TOPAS fibers, embedded in diaphragms is investigated in detail. The mPOFBGs are embedded in two different types of diaphragms, and their performance is investigated with aviation fuel for the first time, in contrast to our previous works, where water was used. Our new system exhibits high performance when compared with others previously published in the literature, making it a potentially useful tool for aircraft fuel monitoring.
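The diaphragm-based sensing principle can be reduced to a back-of-envelope model: fuel of height h exerts hydrostatic pressure P = ρ·g·h on the diaphragm, and the embedded grating's Bragg wavelength shifts approximately linearly with that pressure. The sensitivity figure below is an invented placeholder (not a value from the paper), and the fuel density is a typical approximate Jet A-1 value.

```python
RHO_FUEL = 804.0        # kg/m^3, approximate Jet A-1 density near 15 C
G = 9.81                # m/s^2
SENS_PM_PER_KPA = 50.0  # hypothetical diaphragm sensitivity, pm/kPa

def bragg_shift_pm(level_m):
    """Predicted Bragg wavelength shift (pm) for a given fuel level (m),
    assuming a linear pressure-to-wavelength response."""
    p_kpa = RHO_FUEL * G * level_m / 1000.0   # hydrostatic pressure, kPa
    return SENS_PM_PER_KPA * p_kpa

def level_from_shift(shift_pm):
    """Invert the linear model to recover fuel level from a measured shift."""
    return shift_pm * 1000.0 / (SENS_PM_PER_KPA * RHO_FUEL * G)

print(round(bragg_shift_pm(0.5), 1))
```

A calibration against the actual diaphragm and fuel would replace both constants; the point of the sketch is that, under the linear assumption, level readout reduces to a single scale factor between wavelength shift and hydrostatic head.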