859 results for model-based reasoning
Abstract:
Introduced predators can have pronounced effects on naïve prey species; thus, predator control is often essential for conservation of threatened native species. Complete eradication of the predator, although desirable, may be elusive in budget-limited situations, whereas predator suppression is more feasible and may still achieve conservation goals. We used a stochastic predator-prey model based on a Lotka-Volterra system to investigate the cost-effectiveness of predator control to achieve prey conservation. We compared five control strategies: immediate eradication, removal of a constant number of predators (fixed-number control), removal of a constant proportion of predators (fixed-rate control), removal of predators that exceed a predetermined threshold (upper-trigger harvest), and removal of predators whenever their population falls below a lower predetermined threshold (lower-trigger harvest). We looked at the performance of these strategies when managers could always remove the full number of predators targeted by each strategy, subject to budget availability. Under this assumption immediate eradication reduced the threat to the prey population the most. We then examined the effect of reduced management success in meeting removal targets, assuming removal is more difficult at low predator densities. In this case there was a pronounced reduction in performance of the immediate eradication, fixed-number, and lower-trigger strategies. Although immediate eradication still yielded the highest expected minimum prey population size, upper-trigger harvest yielded the lowest probability of prey extinction and the greatest return on investment (as measured by improvement in expected minimum population size per amount spent). Upper-trigger harvest was relatively successful because it operated when predator density was highest, which is when predator removal targets can be more easily met and the effect of predators on the prey is most damaging. This suggests that controlling predators only when they are most abundant is the "best" strategy when financial resources are limited and eradication is unlikely. © 2008 Society for Conservation Biology.
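To make the strategy comparison concrete, here is a minimal sketch of the kind of simulation described above: a discrete-time stochastic Lotka-Volterra system with a removal rule applied each year, scored by the prey's expected minimum population size. All parameter values are hypothetical, not the paper's calibration.

```python
import numpy as np

# Minimal sketch (hypothetical parameters, not the paper's calibration):
# a stochastic Lotka-Volterra predator-prey model with a yearly control
# rule, compared by the prey's expected minimum population size.

rng = np.random.default_rng(42)

def simulate(control, years=50, prey0=500.0, pred0=50.0):
    prey, pred = prey0, pred0
    min_prey = prey
    for _ in range(years):
        # Lotka-Volterra dynamics with multiplicative environmental noise.
        prey += 0.5 * prey - 0.01 * prey * pred + rng.normal(0, 0.1) * prey
        pred += 0.002 * prey * pred - 0.3 * pred + rng.normal(0, 0.1) * pred
        pred = max(pred - control(pred), 0.0)   # apply the removal strategy
        prey = max(prey, 0.0)
        min_prey = min(min_prey, prey)
    return min_prey

strategies = {
    "eradication":   lambda p: p,                       # remove everything
    "fixed_number":  lambda p: 10.0,                    # constant quota
    "fixed_rate":    lambda p: 0.3 * p,                 # constant proportion
    "upper_trigger": lambda p: max(p - 60.0, 0.0),      # cull back to threshold
    "lower_trigger": lambda p: p if p < 20.0 else 0.0,  # mop up when rare
}
for name, rule in strategies.items():
    emp = np.mean([simulate(rule) for _ in range(200)])
    print(f"{name:14s} expected minimum prey population ~ {emp:7.1f}")
```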
Abstract:
For future planetary robot missions, multi-robot systems are a suitable platform for performing space missions faster and more reliably. In heterogeneous robot teams, each robot can have different abilities and sensor equipment. In this paper we describe a lunar demonstration scenario in which a team of mobile robots explores an unknown area and identifies a set of objects belonging to a lunar infrastructure. Our robot team consists of two exploring scout robots and a mobile manipulator. The mission goal is to locate the objects within a certain area, to identify them, and to transport them to a base station. The robots have different sensor setups and different capabilities. In order to classify parts of the lunar infrastructure, the robots have to share knowledge about the objects. Because of the different sensing capabilities, several information modalities have to be shared and combined by the robots. In this work we propose an approach that uses spatial features and fuzzy-logic-based reasoning for distributed object classification.
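A minimal sketch of the kind of fuzzy-logic fusion described, with hypothetical membership functions and object classes: each robot's spatial features are mapped to membership degrees, combined per robot with min (AND), and merged across robots with max (OR).

```python
# Minimal sketch (hypothetical values and membership functions): fusing
# spatial features from two robots with fuzzy-logic reasoning to score
# candidate object classes. Not the authors' implementation.

def triangular(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical class prototypes over two spatial features: object height (m)
# and footprint diameter (m). Each entry is (a, b, c) of a triangular set.
PROTOTYPES = {
    "antenna_mast": {"height": (1.5, 2.5, 3.5), "diameter": (0.05, 0.15, 0.3)},
    "supply_box":   {"height": (0.2, 0.5, 0.9), "diameter": (0.4, 0.7, 1.0)},
}

def classify(observations):
    """Combine feature memberships with min (AND) per robot and
    max (OR) across robots reporting the same object."""
    scores = {}
    for cls, sets in PROTOTYPES.items():
        per_robot = []
        for obs in observations:  # one dict of features per robot
            degrees = [triangular(obs[f], *sets[f]) for f in sets if f in obs]
            if degrees:
                per_robot.append(min(degrees))
        scores[cls] = max(per_robot) if per_robot else 0.0
    return scores

# Example: scout measures height only; manipulator measures both features.
print(classify([{"height": 2.3}, {"height": 2.4, "diameter": 0.2}]))
```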
Abstract:
Mode indicator functions (MIFs) are used in modal testing and analysis as a means of identifying modes of vibration, often as a precursor to modal parameter estimation. Various methods have been developed since the MIF was introduced four decades ago. These methods are quite useful in assisting the analyst to identify genuine modes and, in the case of the complex mode indicator function, have even been developed into modal parameter estimation techniques. Although the various MIFs are able to indicate the existence of a mode, they do not provide the analyst with any descriptive information about the mode. This paper uses the simple summation type of MIF to develop five averaged and normalised MIFs that provide the analyst with enough information to identify whether a mode is longitudinal, vertical, lateral, or torsional. The first three functions, termed directional MIFs, have been noted in the literature in one form or another; however, this paper adds a new twist to the MIF by introducing two further MIFs, termed torsional MIFs, that can be used by the analyst to identify torsional modes and, moreover, can assist in determining whether the mode is of a pure torsion or sway type (i.e., having a rigid cross-section) or a distorted twisting type. The directional and torsional MIFs are tested on a finite element model-based simulation of an experimental modal test using an impact hammer. Results indicate that the directional and torsional MIFs are indeed useful in assisting the analyst to identify whether a mode is longitudinal, vertical, lateral, sway, or torsional.
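As an illustration of the summation-type MIF the paper builds on, the sketch below sums FRF magnitudes over measurement points and restricts the sum to DOFs along one axis to form averaged, normalised directional MIFs. This is a generic reconstruction under stated assumptions, not the paper's exact definitions.

```python
import numpy as np

# Minimal sketch, not the paper's exact formulation: a summation-type MIF
# sums FRF magnitudes over measurement points at each frequency line; a
# "directional" variant restricts the sum to DOFs oriented along one axis,
# then averages and normalises so curves are comparable across directions.

def summation_mif(H):
    """H: complex FRF matrix, shape (n_freq, n_dof). Returns one curve."""
    return np.abs(H).sum(axis=1)

def directional_mifs(H, dof_axes):
    """dof_axes: array of axis labels ('x','y','z') per DOF column.
    Returns dict of averaged, peak-normalised MIFs per direction."""
    mifs = {}
    for axis in ("x", "y", "z"):
        cols = np.flatnonzero(dof_axes == axis)
        curve = np.abs(H[:, cols]).mean(axis=1)   # average over that axis' DOFs
        mifs[axis] = curve / curve.max()          # normalise to unit peak
    return mifs

# Example with synthetic data: 1024 frequency lines, 6 DOFs (2 per axis).
rng = np.random.default_rng(0)
H = rng.standard_normal((1024, 6)) + 1j * rng.standard_normal((1024, 6))
axes = np.array(["x", "x", "y", "y", "z", "z"])
print({k: v[:3] for k, v in directional_mifs(H, axes).items()})
```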
Abstract:
To investigate potentially dissociable recognition memory responses in the hippocampus and perirhinal cortex, fMRI studies have often used confidence ratings as an index of memory strength. Confidence ratings, although correlated with memory strength, also reflect sources of variability, including task-irrelevant item effects and differences both within and across individuals in terms of applying decision criteria to separate weak from strong memories. We presented words one, two, or four times at study in each of two different conditions, focused and divided attention, and then conducted separate fMRI analyses of correct old responses on the basis of subjective confidence ratings or estimates from single- versus dual-process recognition memory models. Overall, the effect of focussing attention on spaced repetitions at study manifested as enhanced recognition memory performance. Confidence- versus model-based analyses revealed disparate patterns of hippocampal and perirhinal cortex activity at both study and test and both within and across hemispheres. The failure to observe equivalent patterns of activity indicates that fMRI signals associated with subjective confidence ratings reflect additional sources of variability. The results are consistent with predictions of single-process models of recognition memory.
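For context on the model-based estimates, a standard dual-process signal-detection formulation is sketched below; the abstract does not specify the exact model variants fitted, so this is illustrative background only.

```latex
% A standard dual-process signal-detection formulation (Yonelinas-type),
% given as illustrative background; the paper's exact model variants are
% not specified in the abstract. R is the recollection probability, d' the
% familiarity strength, and c_i the confidence criteria.
\[
P(\text{``old''} \ge c_i \mid \text{old}) = R + (1 - R)\,
  \Phi\!\left(\tfrac{d'}{2} - c_i\right), \qquad
P(\text{``old''} \ge c_i \mid \text{new}) =
  \Phi\!\left(-\tfrac{d'}{2} - c_i\right).
\]
% A single-process (equal-variance signal detection) model is the special
% case R = 0; comparing the two fits yields the model-based estimates.
```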
Abstract:
Several common genetic variants have recently been discovered that appear to influence white matter microstructure, as measured by diffusion tensor imaging (DTI). Each genetic variant explains only a small proportion of the variance in brain microstructure, so we set out to explore their combined effect on the white matter integrity of the corpus callosum. We measured six common candidate single-nucleotide polymorphisms (SNPs) in the COMT, NTRK1, BDNF, ErbB4, CLU, and HFE genes, and investigated their individual and aggregate effects on white matter structure in 395 healthy adult twins and siblings (age: 20-30 years). All subjects were scanned with 4-tesla 94-direction high angular resolution diffusion imaging. When combined using mixed-effects linear regression, a joint model based on five of the candidate SNPs (COMT, NTRK1, ErbB4, CLU, and HFE) explained ∼ 6% of the variance in the average fractional anisotropy (FA) of the corpus callosum. This predictive model had detectable effects on FA at 82% of the corpus callosum voxels, including the genu, body, and splenium. Predicting the brain's fiber microstructure from genotypes may ultimately help in early risk assessment, and eventually, in personalized treatment for neuropsychiatric disorders in which brain integrity and connectivity are affected.
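The sketch below illustrates the flavor of the joint genetic model on synthetic data: an additive multi-SNP regression on mean FA, fit by ordinary least squares. The actual study used mixed-effects regression to account for twin and sibling relatedness, which this simplification omits; all numbers are hypothetical.

```python
import numpy as np

# Minimal sketch (synthetic data): an additive multi-SNP model for mean
# corpus callosum FA, fit by ordinary least squares. The study itself used
# mixed-effects regression to absorb twin/sibling relatedness; that term
# is omitted here for brevity.

rng = np.random.default_rng(1)
n_subj, n_snp = 395, 5                      # 5 retained candidate SNPs
G = rng.integers(0, 3, size=(n_subj, n_snp)).astype(float)   # minor-allele counts
beta_true = np.array([0.004, -0.003, 0.002, 0.003, -0.002])  # hypothetical effects
fa = 0.45 + G @ beta_true + rng.normal(0, 0.02, n_subj)      # mean FA per subject

X = np.column_stack([np.ones(n_subj), G])   # intercept + genotype columns
coef, *_ = np.linalg.lstsq(X, fa, rcond=None)
fitted = X @ coef
r2 = 1 - np.sum((fa - fitted) ** 2) / np.sum((fa - fa.mean()) ** 2)
print(f"joint model R^2 ~ {r2:.3f}")        # analogous to the ~6% variance explained
```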
Abstract:
In recent years, rapid advances in information technology have led to various data collection systems that are enriching the sources of empirical data for use in transport systems. Currently, traffic data are collected through various sensors, including loop detectors, probe vehicles, cell phones, Bluetooth, video cameras, remote sensing, and public transport smart cards. It has been argued that combining the complementary information from multiple sources generally results in better accuracy, increased robustness, and reduced ambiguity. Despite substantial advances in data assimilation techniques to reconstruct and predict the traffic state from multiple data sources, such methods are generally data-driven and do not fully utilize the power of traffic models. Furthermore, the existing methods are still limited to freeway networks and are not yet applicable in the urban context due to the increased complexity of the flow behavior. The main traffic phenomena on urban links are generally caused by the boundary conditions at intersections, signalized or un-signalized, at which the switching of traffic lights and the turning maneuvers of road users lead to shock waves that propagate upstream of the intersections. This paper develops a new model-based methodology to build a real-time traffic prediction model for arterial corridors using data from multiple sources, particularly loop detectors and partial observations from Bluetooth and GPS devices.
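As a sketch of the model-based ingredient such a methodology rests on, the code below advances a cell transmission model on an arterial link with a signal at the downstream boundary, showing the queue (shock wave) growing upstream during red. The data assimilation layer is omitted, and all parameter values are hypothetical.

```python
import numpy as np

# Minimal sketch (not the paper's method): one step of a cell transmission
# model (CTM) on an arterial link. A red signal at the downstream boundary
# blocks outflow, so a shock wave of queued vehicles propagates upstream.
# Detector/Bluetooth/GPS data would be assimilated on top of such a model.

def ctm_step(k, dt, dx, vf=13.9, w=5.0, k_jam=0.15, q_max=0.5, green=True):
    """k: vehicle density per cell (veh/m). Returns updated densities.
    vf: free-flow speed (m/s), w: backward wave speed (m/s),
    k_jam: jam density (veh/m), q_max: capacity (veh/s)."""
    demand = np.minimum(vf * k, q_max)             # what each cell can send
    supply = np.minimum(w * (k_jam - k), q_max)    # what each cell can take
    inter = np.minimum(demand[:-1], supply[1:])    # flows between cells
    q_in = np.concatenate(([0.0], inter))          # closed upstream end
    q_out = np.concatenate((inter, [demand[-1] if green else 0.0]))
    return k + (dt / dx) * (q_in - q_out)

k = np.full(20, 0.03)          # light initial traffic, 20 cells of 50 m
for _ in range(60):            # 60 s of red: queue grows from the stop line
    k = ctm_step(k, dt=1.0, dx=50.0, green=False)
print(np.round(k, 3))          # densities rise near the downstream boundary
```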
Abstract:
A model based on the cluster process representation of the self-exciting process model in White and Porter (2013) and Ruggeri and Soyer (2008) is derived to allow for variation in the excitation effects for terrorist events in a self-exciting or cluster process model. The details of the model derivation and implementation are given and applied to data from the Global Terrorism Database from 2000–2012. Results are discussed in terms of practical interpretation along with implications for a theoretical model paralleling existing criminological theory.
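For readers unfamiliar with the underlying machinery, the sketch below gives the standard self-exciting (Hawkes) intensity with an exponential excitation kernel and simulates it by Ogata's thinning algorithm. Parameters are hypothetical, not estimates from the Global Terrorism Database.

```python
import numpy as np

# Minimal sketch of a self-exciting (Hawkes) process with an exponential
# excitation kernel, the standard building block behind cluster-process
# models of event data. Parameter values are hypothetical.

def intensity(t, events, mu=0.1, alpha=0.5, beta=1.0):
    """lambda(t) = mu + sum over past events of alpha * exp(-beta*(t - t_i)).
    mu: background rate; alpha, beta: excitation size and decay."""
    past = events[events < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

def simulate(T, mu=0.1, alpha=0.5, beta=1.0, seed=0):
    """Ogata's thinning algorithm for simulating the process on [0, T]."""
    rng = np.random.default_rng(seed)
    events, t = np.array([]), 0.0
    while t < T:
        lam_bar = intensity(t, events, mu, alpha, beta) + alpha  # upper bound
        t += rng.exponential(1.0 / lam_bar)                      # candidate gap
        if t < T and rng.uniform() * lam_bar <= intensity(t, events, mu, alpha, beta):
            events = np.append(events, t)                        # accept event
    return events

ev = simulate(T=365.0)
print(len(ev), "events; branching ratio alpha/beta =", 0.5 / 1.0)
```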
Abstract:
The world has experienced a large increase in the amount of available data, which calls for better and more specialized tools for data storage, retrieval, and information privacy. Electronic Health Record (EHR) systems have recently emerged to fulfill this need in health systems. They play an important role in medicine by granting access to information that can be used in medical diagnosis. Traditional systems focus on the storage and retrieval of this information, usually leaving issues related to privacy in the background. Doctors and patients may have different objectives when using an EHR system: patients try to restrict sensitive information in their medical records to avoid its misuse, while doctors want to see as much information as possible to ensure a correct diagnosis. One solution to this dilemma is the Accountable e-Health model, an access protocol model based on the Information Accountability Protocol. In this model, patients are warned when doctors access their restricted data, while authenticated doctors are granted non-restrictive access. In this work we use FluxMED, an EHR system, and augment it with aspects of the Information Accountability Protocol to address these issues. The implementation of the Information Accountability Framework (IAF) in FluxMED provides ways for both patients and physicians to have their privacy and access needs met. Issues related to storage and data security are handled by FluxMED, which contains mechanisms to ensure security and data integrity. The effort required to develop a platform for the management of medical information is mitigated by FluxMED's workflow-based architecture: the system is flexible enough to allow the type and amount of information to be altered without changes to its source code.
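A minimal sketch of the accountability idea, with hypothetical types and logic rather than the FluxMED or IAF API: authenticated doctors may read restricted records, but each such access is logged and the patient notified.

```python
from dataclasses import dataclass, field

# Minimal sketch of the accountability pattern described above (hypothetical
# types and logic, not the FluxMED/IAF API): authenticated doctors can read
# restricted records, but every such access is logged and the patient is
# notified, instead of the access being silently blocked or silently allowed.

@dataclass
class Record:
    patient: str
    content: str
    restricted: bool = False

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def notify(self, patient, doctor, record):
        event = f"notify {patient}: Dr. {doctor} accessed '{record.content}'"
        self.entries.append(event)   # in a real system: persist + message patient

def access(record, doctor, authenticated, log):
    if not authenticated:
        raise PermissionError("unauthenticated access denied")
    if record.restricted:
        log.notify(record.patient, doctor, record)   # accountability, not denial
    return record.content

log = AuditLog()
rec = Record("alice", "psychiatric notes", restricted=True)
print(access(rec, doctor="bob", authenticated=True, log=log))
print(log.entries)
```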
Abstract:
If the land sector is to make significant contributions to mitigating anthropogenic greenhouse gas (GHG) emissions in coming decades, it must do so while concurrently expanding production of food and fiber. In our view, mathematical modeling will be required to provide scientific guidance to meet this challenge. In order to be useful in GHG mitigation policy measures, models must simultaneously meet scientific, software engineering, and human capacity requirements. They can be used to understand GHG fluxes, to evaluate proposed GHG mitigation actions, and to predict and monitor the effects of specific actions; the latter applications require a change in mindset that has parallels with the shift from research modeling to decision support. We compare and contrast six agro-ecosystem models (FullCAM, DayCent, DNDC, APSIM, WNMM, and AgMod), chosen because they are used in Australian agriculture and forestry. Underlying structural similarities in the representations of carbon flows through plants and soils in these models are complemented by a diverse range of emphases and approaches to the subprocesses within the agro-ecosystem. None of these agro-ecosystem models handles all land sector GHG fluxes, and considerable model-based uncertainty exists for soil C fluxes and enteric methane emissions. The models also show diverse approaches to the initialisation of model simulations, software implementation, distribution, licensing, and software quality assurance; each of these will differentially affect their usefulness for policy-driven GHG mitigation prediction and monitoring. Specific requirements imposed on the use of models by Australian mitigation policy settings are discussed, and areas for further scientific development of agro-ecosystem models for use in GHG mitigation policy are proposed.
Abstract:
Spontaneous emission (SE) of a quantum emitter depends mainly on the transition strength between the upper and lower energy levels as well as on the local density of states (LDOS) [1]. When a quantum dot (QD) is placed near a plasmon waveguide, the LDOS of the QD is increased due to the addition of a non-radiative decay channel and a plasmonic decay channel to free-space emission [2-4]. The slow velocity and dramatic concentration of the electric field of the plasmon can capture the majority of the SE into the guided plasmon mode (Γpl). This paper focuses on studying the effect of waveguide height on the efficiency of coupling QD decay into the plasmon mode using a numerical model based on the finite element method (FEM). The symmetric gap waveguide considered in this paper supports a single mode, and the QD is treated as a dipole emitter. 2D simulation models are used to find the normalized Γpl, and 3D models are used to find the probability of SE decaying into the plasmon mode (β), including all three decay channels. It is found that changing the gap height can increase QD-plasmon coupling by up to a factor of 5, and for an optimally placed QD by up to a factor of 8. To make the paper more realistic, we briefly study the effect of the sharpness of the waveguide edge on SE emission into the guided plasmon mode. Preliminary nano-gap waveguide fabrication and testing are already underway. The authors expect to compare the theoretical results with experimental outcomes in the future.
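The coupling efficiency β named above has the standard form below, written out for reference with the three decay channels the abstract mentions.

```latex
% The standard definition of the coupling efficiency into the guided
% plasmon mode, consistent with the three decay channels named above
% (guided plasmon, free-space radiation, non-radiative loss):
\[
\beta = \frac{\Gamma_{\mathrm{pl}}}
             {\Gamma_{\mathrm{pl}} + \Gamma_{\mathrm{rad}} + \Gamma_{\mathrm{nr}}},
\qquad
\text{Purcell-type enhancement: } \frac{\Gamma_{\mathrm{pl}}}{\Gamma_0},
\]
% where \Gamma_0 is the free-space (vacuum) decay rate used to
% normalize the 2D simulation results.
```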
Abstract:
Nitrogen plasma exposure (NPE) effects on indium-doped bulk n-CdTe are reported here. Excellent rectifying characteristics of Au/n-CdTe Schottky diodes, with an increase in the barrier height, and large reverse breakdown voltages are observed after the plasma exposure. Surface damage is found to be absent in the plasma-exposed samples. The breakdown mechanism of the heavily doped Schottky diodes is found to shift from Zener to avalanche after the nitrogen plasma exposure, pointing to a change in the doping close to the surface, which was also verified by C-V measurements. The plasma exposure process is thermally stable up to a temperature of 350 °C, thereby enabling high-temperature processing of the samples for device fabrication. The characteristics of the NPE diodes are stable over a year, implying excellent diode quality. A plausible model based on Fermi-level pinning by acceptor-like states created by the plasma exposure is proposed to explain the observations.
Abstract:
The ultrasonic degradation of poly(acrylic acid), a water-soluble polymer, was studied in the presence of persulfates at different temperatures in binary solvent mixtures of methanol and water. The degraded samples were analyzed by gel permeation chromatography for the time evolution of the molecular weight distributions. A continuous distribution kinetics model based on midpoint chain scission was developed, and the degradation rate coefficients were determined. The decline in the rate of degradation of poly(acrylic acid) with increasing temperature and with an increment in the methanol content of the binary solvent mixture was attributed to the increased vapor pressure of the solutions. The experimental data showed an augmentation of the degradation rate of the polymer with increasing oxidizing agent (persulfate) concentrations. Different concentrations of three persulfates (potassium persulfate, ammonium persulfate, and sodium persulfate) were used. It was found that the ratio of the polymer degradation rate coefficient to the dissociation rate constant of the persulfate was constant. This implies that the ultrasonic degradation rate of poly(acrylic acid) can be determined a priori in the presence of any initiator.
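As illustrative background (the abstract does not give the paper's exact rate expressions), a standard continuous-distribution population balance for midpoint chain scission reads:

```latex
% A standard continuous-distribution population balance for midpoint chain
% scission, given as illustrative background; the paper's exact rate forms
% are not specified in the abstract. p(x,t) is the molar concentration of
% chains of molecular weight x and k(x) the scission rate coefficient:
\[
\frac{\partial p(x,t)}{\partial t}
  = -k(x)\,p(x,t)
  + 2\int_{x}^{\infty} k(x')\,\delta\!\left(x - \tfrac{x'}{2}\right) p(x',t)\,dx',
\]
% i.e., each broken chain of weight x' yields two fragments of weight x'/2.
% With k(x) = k x, the number-average molecular weight M_n(t) obeys
% 1/M_n(t) = 1/M_n(0) + k t, the linear form typically used to extract
% degradation rate coefficients from GPC data.
```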
Abstract:
Thin films are developed by dispersing carbon black nanoparticles and carbon nanotubes (CNTs) in an epoxy polymer. The films show a large variation in electrical resistance when subjected to quasi-static and dynamic mechanical loading. This phenomenon is attributed to the change in the band gap of the CNTs due to the applied strain, and also to the change in the volume fraction of the constituent phases in the percolation network. Under quasi-static loading, the films show a nonlinear response, primarily attributed to the pre-yield softening of the epoxy polymer. The electrical resistance of the films is found to be strongly dependent on the magnitude and frequency of the applied dynamic strain, induced by a piezoelectric substrate. Interestingly, the resistance variation is found to be a linear function of frequency and dynamic strain. Samples with a small concentration of just 0.57% CNT show a sensitivity as high as 2.5%/MPa for static mechanical loading. A mathematical model based on Bruggeman's effective medium theory is developed to better understand the experimental results. Dynamic mechanical loading experiments reveal a sensitivity as high as 0.007%/Hz at constant small-amplitude vibration and up to 0.13% per microstrain at 0-500 Hz vibration. Potential applications of such thin films include highly sensitive strain sensors, accelerometers, artificial neural networks, artificial skin, and polymer electronics.
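As a sketch of the effective-medium ingredient, the code below solves Bruggeman's symmetric two-phase equation for the effective conductivity of a conductive filler in an insulating matrix; the values and the spherical-inclusion form are illustrative assumptions, not the paper's model.

```python
import numpy as np
from scipy.optimize import brentq

# Minimal sketch of Bruggeman's symmetric effective medium theory for a
# two-phase composite (conductive filler in an insulating epoxy matrix).
# All values are hypothetical; the paper's model details are not reproduced.

def bruggeman_sigma(f, sigma_f, sigma_m, d=3):
    """Solve sum_i f_i (sigma_i - s) / (sigma_i + (d-1) s) = 0 for the
    effective conductivity s. f: filler volume fraction, d: dimensionality."""
    def residual(s):
        return (f * (sigma_f - s) / (sigma_f + (d - 1) * s)
                + (1 - f) * (sigma_m - s) / (sigma_m + (d - 1) * s))
    return brentq(residual, 1e-12 * sigma_m, sigma_f)

# Effective conductivity rises sharply near the percolation threshold
# f = 1/d (~0.33 for d = 3 in this idealized, spherical-inclusion form).
for f in (0.05, 0.30, 0.34, 0.50):
    print(f, bruggeman_sigma(f, sigma_f=1e4, sigma_m=1e-10))
```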
Abstract:
The mesoscale simulation of a lamellar mesophase based on a free energy functional is examined with the objective of determining the relationship between the parameters in the model and molecular parameters. Attention is restricted to a symmetric lamellar phase with equal volumes of hydrophilic and hydrophobic components. Apart from the lamellar spacing, there are two parameters in the free energy functional. One of the parameters, r, determines the sharpness of the interface, and it is shown how this parameter can be obtained from the interface profile in a molecular simulation. The other parameter, A, provides an energy scale. Analytical expressions are derived to relate the parameters r and A to the bending and compression moduli, and to relate the permeation constant in the macroscopic equation to the Onsager coefficient in the concentration diffusion equation. The linear hydrodynamic response predicted by the theory is verified by carrying out a mesoscale simulation using the lattice-Boltzmann technique and confirming that the analytical predictions are in agreement with simulation results. A macroscale model based on the layer thickness field and the layer normal field is proposed, and the relationship between the parameters in the macroscale model and the parameters in the mesoscale free energy functional is obtained.
Abstract:
In this paper, two nonlinear model-based control algorithms are developed to monitor the magnetorheological (MR) damper voltage. The main advantage of the proposed algorithms is that it is possible to directly monitor the voltage required to control the structural vibration while accounting for the supplied and commanded voltage dynamics of the damper. The efficiency of the proposed techniques is demonstrated and compared using the example of a base-isolated three-storey building under a set of seismic excitations. Performance comparisons with a fuzzy-logic-based intelligent control algorithm and a widely used clipped-optimal strategy are also presented.
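For reference, the clipped-optimal benchmark the paper compares against is typically implemented with the simple switching law sketched below (hypothetical saturation voltage); the paper's proposed nonlinear model-based algorithms are not reproduced here.

```python
# Minimal sketch of the widely used clipped-optimal voltage law for an MR
# damper (Dyke et al. style), included because the abstract compares against
# it; this is the benchmark strategy, not the paper's proposed algorithms.

V_MAX = 10.0  # saturation voltage of the damper power supply (hypothetical)

def clipped_optimal_voltage(f_desired, f_measured):
    """Command max voltage only when the measured damper force must grow
    toward the desired (optimal-controller) force; otherwise command zero."""
    if f_measured * (f_desired - f_measured) > 0:
        return V_MAX   # damper force is too small and of the right sign
    return 0.0         # otherwise let the force decay

# Example: measured force 50 N, optimal controller requests 80 N -> V_MAX.
print(clipped_optimal_voltage(80.0, 50.0))   # 10.0
print(clipped_optimal_voltage(30.0, 50.0))   # 0.0
```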