829 results for Multi-input fuzzy inference system


Relevance: 40.00%

Abstract:

Based on a well-established stratigraphic framework and 47 AMS-14C-dated sediment cores, the distribution of facies types on the NW Iberian margin is analysed in response to the last deglacial sea-level rise, thus providing a case study of the sedimentary evolution of a high-energy, low-accumulation shelf system. Altogether, four main types of sedimentary facies are defined. (1) A gravel-dominated facies occurs mostly as time-transgressive ravinement beds, which initially developed as shoreface and storm deposits in shallow waters on the outer shelf during the last sea-level lowstand. (2) A widespread, time-transgressive mixed siliceous/biogenic-carbonaceous sand facies indicates areas of moderate hydrodynamic regimes, a high contribution of reworked shelf material, and fluvial supply to the shelf. (3) A glaucony-containing sand facies in a stationary position on the outer shelf formed mostly during the last deglacial sea-level rise by reworking of older deposits as well as authigenic mineral formation. (4) A mud facies is mostly restricted to confined Holocene fine-grained depocentres, which are located in a mid-shelf position. The observed spatial and temporal distribution of these facies types on the high-energy, low-accumulation NW Iberian shelf was essentially controlled by the local interplay of sediment supply, shelf morphology, and the strength of the hydrodynamic system. These patterns contrast with high-accumulation systems, where extensive sediment supply is the dominant factor in facies distribution. This study emphasises the importance of large-scale erosion and material recycling in the sedimentary buildup during the deglacial drowning of the shelf. The presence of a homogeneous and up to 15-m-thick transgressive cover above a lag horizon contradicts the common assumption of sparse and laterally confined sediment accumulation on high-energy shelf systems during deglacial sea-level rise. In contrast to this extensive sand cover, laterally very confined mud depocentres of at most 4 m thickness developed during the Holocene sea-level highstand. This restricted formation of fine-grained depocentres was related to the combination of: (1) frequently occurring high-energy hydrodynamic conditions; (2) low overall terrigenous input from the adjacent rivers; and (3) the large distance of the Galicia Mud Belt from its main sediment supplier.

Relevance: 40.00%

Abstract:

Secure Access For Everyone (SAFE) is an integrated system for managing trust using a logic-based declarative language. Logical trust systems authorize each request by constructing a proof from a context: a set of authenticated logic statements representing credentials and policies issued by various principals in a networked system. A key barrier to practical use of logical trust systems is the problem of managing proof contexts: identifying, validating, and assembling the credentials and policies that are relevant to each trust decision.

SAFE addresses this challenge by (i) proposing a distributed authenticated data repository for storing the credentials and policies, and (ii) introducing a programmable credential discovery and assembly layer that generates the appropriate tailored context for a given request. The authenticated data repository is built upon a scalable key-value store whose contents are named by secure identifiers and certified by the issuing principal. The SAFE language provides scripting primitives to generate and organize logic sets representing credentials and policies, materialize the logic sets as certificates, and link them to reflect delegation patterns in the application. The authorizer fetches the logic sets on demand, then validates and caches them locally for further use. Upon each request, the authorizer constructs the tailored proof context and provides it to the SAFE inference engine for certified validation.

Delegation-driven credential linking with certified data distribution provides flexible and dynamic policy control, enabling the security and trust infrastructure to be agile while addressing the perennial problems of today's certificate infrastructure: automated credential discovery, scalable revocation, and issuing credentials without relying on a centralized authority.

We envision SAFE as a new foundation for building secure network systems. We used SAFE to build secure services based on case studies drawn from practice: (i) a secure name-service resolver, similar to DNS, that resolves a name across multi-domain federated systems; (ii) a secure proxy shim to delegate access-control decisions in a key-value store; (iii) an authorization module for a networked infrastructure-as-a-service system with a federated trust structure (NSF GENI initiative); and (iv) a secure cooperative data analytics service that adheres to individual secrecy constraints while disclosing the data. We present an empirical evaluation based on these case studies and demonstrate that SAFE supports a wide range of applications with low overhead.
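
The credential discovery and assembly layer described above can be pictured, in very reduced form, as a recursive fetch over linked logic sets held in a key-value store. The following Python sketch is our own schematic illustration (store contents, identifiers, and function names are hypothetical; SAFE itself expresses this in its slang scripting language):

```python
# Schematic sketch (not SAFE's actual slang language): logic sets are stored
# in a key-value store under secure identifiers, each set carrying logic
# statements plus links to other sets that reflect delegation. The authorizer
# assembles a tailored proof context by following links on demand and caching.

# Hypothetical store: secure identifier -> (statements, linked identifiers).
STORE = {
    "alice/policy": (["mayAccess(?P, fileX) :- delegates(alice, ?P)"], ["alice/creds"]),
    "alice/creds":  (["delegates(alice, bob)"], ["bob/creds"]),
    "bob/creds":    (["principal(bob)"], []),
}

_cache = {}  # validated logic sets, cached locally for reuse

def fetch_logic_set(ident):
    """Fetch one logic set; stands in for fetch + certificate validation."""
    if ident not in _cache:
        _cache[ident] = STORE[ident]
    return _cache[ident]

def assemble_context(root, seen=None):
    """Recursively follow delegation links to build the tailored proof context."""
    seen = seen or set()
    if root in seen:
        return []
    seen.add(root)
    statements, links = fetch_logic_set(root)
    context = list(statements)
    for link in links:
        context += assemble_context(link, seen)
    return context

print(assemble_context("alice/policy"))  # statements handed to the inference engine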

Relevance: 40.00%

Abstract:

We know now from radial velocity surveys and transit space missions that planets only a few times more massive than our Earth are frequent around solar-type stars. Fundamental questions about their formation history, physical properties, internal structure, and atmosphere composition are, however, still to be solved. We present here the detection of a system of four low-mass planets around the bright (V = 5.5) and close-by (6.5 pc) star HD 219134. This is the first result of the Rocky Planet Search programme with HARPS-N on the Telescopio Nazionale Galileo in La Palma. The inner planet orbits the star in 3.0935 ± 0.0003 days, on a quasi-circular orbit with a semi-major axis of 0.0382 ± 0.0003 AU. Spitzer observations allowed us to detect the transit of the planet in front of the star, making HD 219134 b the nearest known transiting planet to date. From the amplitude of the radial velocity variation (2.25 ± 0.22 m s⁻¹) and the observed depth of the transit (359 ± 38 ppm), the planet mass and radius are estimated to be 4.36 ± 0.44 M⊕ and 1.606 ± 0.086 R⊕, leading to a mean density of 5.76 ± 1.09 g cm⁻³, suggesting a rocky composition. One additional planet with a minimum mass of 2.78 ± 0.65 M⊕ moves on a close-in, quasi-circular orbit with a period of 6.767 ± 0.004 days. The third planet in the system has a period of 46.66 ± 0.08 days and a minimum mass of 8.94 ± 1.13 M⊕, at 0.233 ± 0.002 AU from the star. Its eccentricity is 0.46 ± 0.11. The period of this planet is close to the rotational period of the star estimated from variations of activity indicators (42.3 ± 0.1 days). The planetary origin of the signal is, however, the preferred solution, as no indication of variation at the corresponding frequency is observed for activity-sensitive parameters. Finally, a fourth, longer-period planet of mass 71 M⊕ orbits the star in 1842 days, on an eccentric orbit (e = 0.34 ± 0.17) at a distance of 2.56 AU. The photometric time series and radial velocities used in this work are available in electronic form at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/584/A72
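
As a quick consistency check on the quoted values for HD 219134 b, the mean density follows directly from the mass and radius (standard Earth constants assumed):

```python
import math

M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m

mass   = 4.36  * M_EARTH                  # planet mass, kg
radius = 1.606 * R_EARTH                  # planet radius, m
volume = 4.0 / 3.0 * math.pi * radius**3  # m^3

density = mass / volume / 1000.0          # g cm^-3 (1000 kg/m^3 = 1 g/cm^3)
print(f"{density:.2f} g cm^-3")           # ~5.8, consistent with 5.76 +/- 1.09
```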

Relevance: 40.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance: 40.00%

Abstract:

When designing systems that are complex, dynamic and stochastic in nature, simulation is generally recognised as one of the best design support technologies, and a valuable aid in the strategic and tactical decision-making process. A simulation model consists of a set of rules that define how a system changes over time, given its current state. Unlike analytical models, a simulation model is not solved but is run, and the changes of system states can be observed at any point in time. This provides an insight into system dynamics rather than just predicting the output of a system based on specific inputs. Simulation is not a decision-making tool but a decision-support tool, allowing better-informed decisions to be made. Due to the complexity of the real world, a simulation model can only be an approximation of the target system. The essence of the art of simulation modelling is abstraction and simplification. Only those characteristics that are important for the study and analysis of the target system should be included in the simulation model. The purpose of simulation is either to better understand the operation of a target system, or to make predictions about a target system's performance. It can be viewed as an artificial white-room which allows one to gain insight, and also to test new theories and practices, without disrupting the daily routine of the focal organisation. What one can expect to gain from a simulation study is well summarised by FIRMA (2000): if the theory that has been framed about the target system holds, and if this theory has been adequately translated into a computer model, the model allows you to answer questions such as the following:
· Which kind of behaviour can be expected under arbitrarily given parameter combinations and initial conditions?
· Which kind of behaviour will a given target system display in the future?
· Which state will the target system reach in the future?
The required accuracy of the simulation model very much depends on the type of question one is trying to answer. To respond to the first question, the simulation model needs to be an explanatory model, which requires less data accuracy. In comparison, the simulation model required to answer the latter two questions has to be predictive in nature and therefore needs highly accurate input data to achieve credible outputs. These predictions involve showing trends, rather than giving precise and absolute predictions of the target system's performance. The numerical results of a simulation experiment are, on their own, most often not very useful and need to be rigorously analysed with statistical methods. These results then need to be considered in the context of the real system and interpreted in a qualitative way to make meaningful recommendations or compile best-practice guidelines. One needs a good working knowledge of the behaviour of the real system to be able to fully exploit the understanding gained from simulation experiments. The goal of this chapter is to introduce the newcomer to a topic that we think is a valuable addition to the toolset of analysts and decision makers. We give a summary of information gathered from the literature and of the experience we have gained first-hand during the last five years, while obtaining a better understanding of this exciting technology. We hope that this will help you to avoid some pitfalls that we unwittingly encountered.
Section 2 is an introduction to the different types of simulation used in Operational Research and Management Science, with a clear focus on agent-based simulation. In Section 3 we outline the theoretical background of multi-agent systems and their elements, to prepare you for Section 4, where we discuss how to develop a multi-agent simulation model. Section 5 outlines a simple example of a multi-agent system. Section 6 provides a collection of resources for further studies, and finally in Section 7 we conclude the chapter with a short summary.
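
To give a taste of what such a model looks like, a minimal agent-based simulation can be written in a few lines: agents carry local state plus a rule for updating it, and system-level behaviour emerges from running the rules over time. The model below (opinion averaging among randomly chosen partners) is our own invented example, not the chapter's Section 5 example:

```python
import random

# Toy agent-based model: each agent holds an opinion in [0, 1] and, at every
# step, moves slightly toward the opinion of a randomly chosen partner.
# Running the model and observing the states over time is what distinguishes
# simulation from solving an analytical model.

random.seed(42)

class Agent:
    def __init__(self):
        self.opinion = random.random()

    def step(self, population):
        partner = random.choice(population)
        self.opinion += 0.1 * (partner.opinion - self.opinion)

agents = [Agent() for _ in range(50)]
for t in range(100):
    for agent in agents:
        agent.step(agents)
    if t % 25 == 0:
        spread = max(a.opinion for a in agents) - min(a.opinion for a in agents)
        print(f"t={t:3d}  opinion spread={spread:.3f}")  # emergent consensus
```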

Relevance: 40.00%

Abstract:

Objectives: In contrast to other countries, surgery still represents the most common invasive treatment for varicose veins in Germany. However, radiofrequency ablation (e.g. ClosureFast) is becoming increasingly popular in other countries due to potentially better results and reduced side effects. This treatment option may incur lower follow-up costs and is more convenient for patients, which could justify its introduction into the statutory benefits catalogue. We therefore aim to calculate the budget impact of a general reimbursement of ClosureFast in Germany. Methods: To assess the budget impact of including ClosureFast in the German statutory benefits catalogue, we developed a multi-cohort Markov model and compared the costs of a “World with ClosureFast” with a “World without ClosureFast” over a time horizon of five years. To address the uncertainty of input parameters, we conducted three different types of sensitivity analysis (one-way, scenario, probabilistic). Results: In the base-case scenario, the introduction of the ClosureFast system for the treatment of varicose veins saves about €19.1 million over a time horizon of five years in Germany. However, the results vary widely in the sensitivity analyses due to limited evidence for some key input parameters. Conclusions: The results of the budget impact analysis indicate that a general reimbursement of ClosureFast has the potential to be cost-saving for the German Statutory Health Insurance.
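
The budget-impact logic of such a comparison can be sketched with a single-cohort Markov model (the paper's model additionally enrols a new cohort each year). Everything below is invented for illustration: the states, transition probabilities, and costs are not the paper's calibrated inputs.

```python
import numpy as np

# Toy Markov budget-impact sketch. States: post-treatment, recurrence,
# re-treated (absorbing). One transition matrix per treatment "world".
P = {
    "with ClosureFast":    np.array([[0.95, 0.05, 0.00],
                                     [0.00, 0.20, 0.80],
                                     [0.00, 0.00, 1.00]]),
    "without ClosureFast": np.array([[0.90, 0.10, 0.00],
                                     [0.00, 0.20, 0.80],
                                     [0.00, 0.00, 1.00]]),
}
state_cost = np.array([0.0, 150.0, 2200.0])   # annual cost per state, EUR (invented)
initial_cost = {"with ClosureFast": 1800.0, "without ClosureFast": 2100.0}

def five_year_cost(world, cohort_size=100_000):
    cohort = np.array([cohort_size, 0.0, 0.0])  # everyone starts post-treatment
    total = cohort_size * initial_cost[world]
    for _ in range(5):                          # five annual cycles
        cohort = cohort @ P[world]              # propagate the cohort vector
        total += cohort @ state_cost            # accumulate state costs
    return total

budget_impact = five_year_cost("without ClosureFast") - five_year_cost("with ClosureFast")
print(f"Illustrative 5-year saving: EUR {budget_impact:,.0f}")
```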

Relevance: 40.00%

Abstract:

We formulate the Becker-Döring equations for cluster growth in the presence of a time-dependent source of monomer input. In the case of size-independent aggregation and fragmentation rate coefficients we find similarity solutions which are approached in the large-time limit. The form of the solutions depends on the rate of monomer input and on whether fragmentation is present in the model; four distinct types of solution are found.
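
For reference, with cluster concentrations c_r(t), size-independent aggregation and fragmentation coefficients a and b, and a monomer source Q(t), the standard Becker-Döring system with monomer input reads (our notation; the paper's conventions may differ):

```latex
\begin{align*}
  J_r &= a\,c_1 c_r - b\,c_{r+1}, & r &\ge 1,\\
  \frac{dc_r}{dt} &= J_{r-1} - J_r, & r &\ge 2,\\
  \frac{dc_1}{dt} &= Q(t) - J_1 - \sum_{r\ge 1} J_r,
\end{align*}
```

where J_r is the net flux from clusters of size r to size r+1; setting b = 0 removes fragmentation.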

Relevance: 40.00%

Abstract:

Fuzzy logic admits infinitely many intermediate truth values between false and true. Based on this principle, this work develops a fuzzy rule-based system that indicates the body mass index of ruminant animals, with the goal of determining the best moment for slaughter. The fuzzy system takes the variables mass and height as inputs, and outputs a new body mass index, called the Fuzzy Body Mass Index (Fuzzy BMI), which can serve as a system for detecting the moment of slaughter of cattle, comparing animals with each other through the linguistic variables "Very Low", "Low", "Medium", "High", and "Very High". To demonstrate and apply the system, 147 Nelore cows were analysed, determining the Fuzzy BMI value for each animal and indicating the body mass situation of the whole herd. The system was validated through a statistical analysis using Pearson's correlation coefficient (0.923), representing a high positive correlation and indicating that the proposed method is adequate. The method thus makes it possible to evaluate the herd, comparing each animal with its peers in the group, thereby providing a quantitative decision-making tool for the rancher. We also conclude that this work established a computational method based on fuzzy logic capable of imitating part of human reasoning and interpreting the body mass index of any type of bovine breed in any region of the country.
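
The abstract does not reproduce the system's membership functions or rule base, so the following is only a toy sketch of the general idea: triangular memberships over mass and height, a handful of rules, and a weighted-average (zero-order Sugeno-style) defuzzification standing in for the full Mamdani machinery. All numeric ranges and rule outputs are invented.

```python
# Toy fuzzy-BMI sketch (illustrative only; not the thesis's actual design).

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Input linguistic terms (illustrative ranges for cattle).
MASS   = {"low": (250, 350, 450), "medium": (350, 450, 550), "high": (450, 550, 650)}  # kg
HEIGHT = {"low": (1.20, 1.35, 1.50), "high": (1.35, 1.50, 1.65)}                        # m

# Rule base: (mass term, height term) -> crisp output level on a 0-100 scale
# ("Very Low" .. "Very High").
RULES = {
    ("low", "high"): 10, ("low", "low"): 30, ("medium", "high"): 40,
    ("medium", "low"): 60, ("high", "high"): 70, ("high", "low"): 90,
}

def fuzzy_bmi(mass, height):
    num = den = 0.0
    for (m_term, h_term), level in RULES.items():
        w = min(tri(mass, *MASS[m_term]), tri(height, *HEIGHT[h_term]))  # AND = min
        num += w * level
        den += w
    return num / den if den else 0.0

print(f"Fuzzy BMI: {fuzzy_bmi(520, 1.38):.1f} / 100")  # heavier, shorter animal -> high index
```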

Relevance: 40.00%

Abstract:

Biologically-inspired methods such as evolutionary algorithms and neural networks are proving useful in the field of information fusion. Artificial immune systems (AISs) are a biologically-inspired approach which takes inspiration from the biological immune system. Interestingly, recent research has shown how AISs which use multi-level information sources as input data can be used to build effective algorithms for real-time computer intrusion detection. This research is based on biological information fusion mechanisms used by the human immune system and as such might be of interest to the information fusion community. The aim of this paper is to present a summary of some of the biological information fusion mechanisms seen in the human immune system, and of how these mechanisms have been implemented as AISs.
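
The paper surveys immune-inspired fusion mechanisms rather than a single algorithm; purely to give a flavour of how AIS ideas translate into code, here is a toy negative-selection sketch. It is entirely our illustration (invented bit-string data), not a system from the paper.

```python
import random

# Toy negative selection: detectors are random bit patterns; any detector
# matching "self" (normal) samples is censored during training, so the
# surviving detectors flag anomalous input.

random.seed(1)
WIDTH = 8

def matches(detector, sample, threshold=6):
    """Toy Hamming-similarity match: enough identical bits."""
    return sum(d == s for d, s in zip(detector, sample)) >= threshold

self_set = [tuple(random.randint(0, 1) for _ in range(WIDTH)) for _ in range(20)]

detectors = []
while len(detectors) < 30:  # censor candidates that match normal behaviour
    cand = tuple(random.randint(0, 1) for _ in range(WIDTH))
    if not any(matches(cand, s) for s in self_set):
        detectors.append(cand)

def is_anomalous(sample):
    return any(matches(d, sample) for d in detectors)

probe = tuple(random.randint(0, 1) for _ in range(WIDTH))
print(probe, "->", "anomalous" if is_anomalous(probe) else "normal")
```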

Relevance: 40.00%

Abstract:

The first goal of this study is to analyse a real-world multiproduct onshore pipeline system in order to verify its hydraulic configuration and operational feasibility, by constructing a simulation model step by step from its elementary building blocks so that the operation of the real system is reproduced as precisely as possible. The second goal is to develop this simulation model into a user-friendly tool that can be used to find an “optimal” or “best” product batch schedule for a one-year time period. Such a batch schedule could change dynamically as perturbations occur during operation that influence the behaviour of the entire system. The result of the simulation, the “best” batch schedule, is the one that minimizes the operational costs of the system. The costs involved in the simulation are inventory costs, interface costs, pumping costs, and penalty costs assigned to any unforeseen situations. A key factor in the performance of the simulation model is the way time is represented. In our model an event-based discrete time representation is selected as most appropriate for our purposes. This means that the time horizon is divided into intervals of unequal length, based on the events that change the state of the system. These events are the arrivals/departures of tanker ships, the openings and closures of the loading/unloading valves of storage tanks at both terminals, and the arrivals/departures of trains/trucks at the Delivery Terminal. In the feasibility study we analyse the system's operational performance with different Head Terminal storage capacity configurations. For these alternative configurations we evaluate the effect of tanker-ship delays of different magnitudes on the number of critical events and product interfaces generated, on the duration of pipeline stoppages, on the satisfaction of product demand, and on the operating costs. Based on the results and the bottlenecks identified, we propose modifications to the original setup.
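
The event-based time representation described above can be pictured as a priority queue of time-stamped events: the clock jumps from event to event instead of ticking in fixed steps. The sketch below is a generic illustration (event names and times are invented, not taken from the pipeline model):

```python
import heapq

# Minimal event-driven time advance: pop the next event, jump the clock to
# its timestamp, handle it; handling an event may schedule further events.

events = []  # priority queue ordered by event time
for time, kind in [(0.0, "tanker arrival"), (5.5, "valve open"),
                   (9.25, "train departure"), (13.0, "tanker departure")]:
    heapq.heappush(events, (time, kind))

clock = 0.0
while events:
    clock, kind = heapq.heappop(events)   # advance straight to the next event
    print(f"t={clock:6.2f} h  handle: {kind}")
    if kind == "tanker arrival":          # example of event-driven scheduling
        heapq.heappush(events, (clock + 2.0, "unloading starts"))
```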

Relevance: 40.00%

Abstract:

In multi-unit organisations, such as a bank and its branches or a national body delivering publicly funded health or education services through local operating units, the need arises to incentivize the units to operate efficiently. In such instances, it is generally accepted that units found to be inefficient can be encouraged to make efficiency savings. However, units which are found to be efficient need to be incentivized in a different manner. It has been suggested that efficient units could be incentivized by some reward compatible with the level to which their attainment exceeds that of the best of the rest, normally referred to as “super-efficiency”. A recent approach to this issue (Varmaz et al., 2013) used Data Envelopment Analysis (DEA) models to measure the super-efficiency of the whole system of operating units with and without the involvement of each unit in turn, in order to provide incentives. We identify shortcomings in this approach and use it as a starting point to develop a new DEA-based system for incentivizing operating units to operate efficiently for the benefit of the aggregate system of units. Data from a small German retail bank are used to illustrate our method.
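
As a reference point for the modelling approach, here is a minimal sketch of the standard input-oriented super-efficiency (Andersen-Petersen) DEA model, solved as a linear programme with scipy. The single-input/single-output data are invented, and neither Varmaz et al.'s system-level variant nor the authors' new method is reproduced.

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR super-efficiency: DMU k is excluded from its own
# reference set, so efficient units can score above 1.

X = np.array([[2.0, 3.0, 4.0, 5.0]])   # inputs:  m x n (1 input, 4 units)
Y = np.array([[1.0, 2.0, 2.5, 2.0]])   # outputs: s x n

def super_efficiency(k):
    m, n = X.shape
    s = Y.shape[0]
    idx = [j for j in range(n) if j != k]
    # Decision variables: [theta, lambda_j for j != k]; minimize theta.
    c = np.zeros(1 + len(idx)); c[0] = 1.0
    # Inputs:  sum_j lambda_j * x_ij - theta * x_ik <= 0
    A_in = np.hstack([-X[:, [k]], X[:, idx]])
    # Outputs: -sum_j lambda_j * y_rj <= -y_rk
    A_out = np.hstack([np.zeros((s, 1)), -Y[:, idx]])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[:, k]]),
                  bounds=[(0, None)] * (1 + len(idx)))
    return res.fun

for k in range(X.shape[1]):
    print(f"unit {k}: super-efficiency = {super_efficiency(k):.3f}")
```

Scores above 1 identify units that remain efficient even against the best of the rest, which is exactly the attainment margin the reward scheme is meant to capture.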

Relevance: 40.00%

Abstract:

Power flow calculations are one of the most important tools for power system planning and operation. The need to account for uncertainties when performing power flow studies led, among other methods, to the development of the fuzzy power flow (FPF). This kind of model is especially interesting when information is scarce, which is a common situation in liberalized power systems (where generation and commercialization of electricity are market activities). In this framework, the symmetric/constrained fuzzy power flow (SFPF/CFPF) was proposed in order to avoid some of the problems of the original FPF model. The SFPF/CFPF models are suitable for quantifying the adequacy of a transmission network to satisfy “reasonable demands for the transmission of electricity” as defined, for instance, in the European Directive 2009/72/EC. In this work we illustrate how the SFPF/CFPF may be used to evaluate the impact on the adequacy of a transmission system of specific investments in new network elements.
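
The SFPF/CFPF formulation itself is not given in this summary; as a rough illustration of the underlying idea, fuzzy injections can be propagated through a linearized (DC) network model, where each branch flow is a linear function of the injections and triangular fuzzy bounds follow from interval arithmetic. All network data below are invented.

```python
import numpy as np

# Sketch of the fuzzy power flow idea on a DC model (not the SFPF/CFPF
# formulation itself): injections are triangular fuzzy numbers and, since DC
# branch flows are linear in the injections (F = PTDF @ P), flow bounds follow
# from interval arithmetic on the PTDF coefficients.

PTDF = np.array([[0.6, -0.2],    # hypothetical 2-branch, 2-injection network
                 [0.4,  0.5]])

P_lo  = np.array([0.8, -1.2])    # triangular fuzzy injections (MW):
P_mid = np.array([1.0, -1.0])    #   (lower bound, central value, upper bound)
P_hi  = np.array([1.3, -0.7])

F_mid = PTDF @ P_mid
pos, neg = np.clip(PTDF, 0, None), np.clip(PTDF, None, 0)
F_lo = pos @ P_lo + neg @ P_hi   # smallest flow: positive coeffs at lower bound
F_hi = pos @ P_hi + neg @ P_lo   # largest flow:  positive coeffs at upper bound

for i, (lo, mid, hi) in enumerate(zip(F_lo, F_mid, F_hi)):
    print(f"branch {i}: flow in [{lo:+.2f}, {hi:+.2f}] MW, central {mid:+.2f}")
```

Comparing such flow bounds against branch limits is the kind of adequacy check that the constrained variant formalizes.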

Relevance: 40.00%

Abstract:

The East Asian Monsoon (EAM) is an active component of the global climate system and has a profound social and economic impact in East Asia and its surrounding countries. Its impact on regional hydrological processes may influence society through industrial water supplies, food productivity and energy use. In order to predict future rates of climate change, reliable and accurate reconstructions of regional temperature and rainfall are required from all over the world to test climate models and better predict future climate variability. Hokkaido is a region which has limited palaeo-climate data and is sensitive to climate change. Instrumental data show that the climate in Hokkaido is influenced by the East Asian Monsoon (EAM); however, instrumental data are limited to the past ~150 years. Therefore down-core climate reconstructions, extending beyond instrumental records, are required to provide a better understanding of the long-term behaviour of the climate drivers (e.g. the EAM, the Westerlies, and teleconnections) in this region. The present study develops multi-proxy reconstructions to determine past climatic and hydrologic variability in Japan over the past 1000 years and to aid understanding of the effects of the EAM and the Westerlies, both independently and interactively. A 250-cm-long sediment core from Lake Toyoni, Hokkaido, was retrieved to investigate terrestrial and aquatic input, lake temperature and hydrological changes over the past 1000 years within Lake Toyoni and its catchment, using X-ray fluorescence (XRF) data, alkenone palaeothermometry, and the molecular and hydrogen isotopic composition of higher plant waxes (δD(HPW)). We conducted the first survey for alkenone biomarkers in eight lakes in Hokkaido, Japan, and detected the occurrence of alkenones within the sediments of Lake Toyoni. We present the first lacustrine alkenone record from Japan, including genetic analysis of the alkenone producer. C37 alkenone concentrations in surface sediments are 18 µg C37 g⁻¹ of dry sediment, and the dominant alkenone is C37:4. 18S rDNA analysis revealed the presence of a single alkenone producer in Lake Toyoni, and thus a single calibration is used for reconstructing lake temperature based on alkenone unsaturation patterns. Temperature reconstructions over the past 1000 years suggest that lake water temperature varied between 8 and 19°C, which is in line with water temperature changes observed in the modern Lake Toyoni. The alkenone-based temperature reconstruction provides evidence for the variability of the EAM over the past 1000 years. The δD(HPW) record suggests that the large fluctuations (∼40‰) represent changes in temperature and precipitation source in this region, which are ultimately controlled by the EAM system; δD(HPW) is therefore a proxy for the EAM system. Complementing the biomarker reconstructions, the XRF data strengthen the lake temperature and hydrological reconstructions by providing information on past productivity, which is controlled by the East Asian Summer Monsoon (EASM), and on wind input into Lake Toyoni, which is controlled by the East Asian Winter Monsoon (EAWM) and the Westerlies. By combining the data generated from the XRF, alkenone palaeothermometry and δD(HPW) reconstructions, we provide valuable information on the EAM and the Westerlies, including the timing of intensification and weakening, the teleconnections influencing them, and the relationship between them.
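
For background, alkenone palaeothermometry rests on the degree of unsaturation of C37 alkenones. In lakes where the tetra-unsaturated C37:4 dominates, as reported here, the full unsaturation index of Brassell et al. (1986) is commonly used together with a linear, producer-specific calibration. The Lake Toyoni calibration itself is not reproduced in this summary, so the form below is only the generic one:

```latex
U^{K}_{37} \;=\; \frac{[C_{37:2}] - [C_{37:4}]}{[C_{37:2}] + [C_{37:3}] + [C_{37:4}]},
\qquad
T \;=\; \frac{U^{K}_{37} - b}{a},
```

where the coefficients a and b are fitted against growth-temperature data for the alkenone producer identified by the 18S rDNA analysis.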
During the Medieval Warm Period (MWP), we find that the EASM dominated and the EAWM was suppressed, whereas during the Little Ice Age (LIA) the influence of the EAWM dominated, with periods of increased EASM and Westerlies intensification. The El Niño Southern Oscillation (ENSO) significantly influenced the EAM: a strong EASM occurred during El Niño conditions and a strong EAWM during La Niña. The North Atlantic Oscillation (NAO), on the other hand, was a key driver of Westerlies intensification, with strengthening of the Westerlies during a positive NAO phase and weakening during a negative NAO phase. A key finding of this study is that our data support an anti-phase relationship between the EASM and the EAWM (i.e. intensification of the EASM with weakening of the EAWM, and vice versa), and that the EAWM and the Westerlies vary independently of each other, rather than coinciding as previously suggested in other studies.