187 results for model-based reasoning


Relevance: 80.00%

Abstract:

The world has experienced a large increase in the amount of available data, which calls for better and more specialized tools for data storage, retrieval, and information privacy. Electronic Health Record (EHR) systems have recently emerged to fulfill this need in health systems. They play an important role in medicine by granting access to information that can be used in medical diagnosis. Traditional systems focus on the storage and retrieval of this information, usually leaving privacy concerns in the background. Doctors and patients may have different objectives when using an EHR system: patients want to restrict sensitive information in their medical records to avoid its misuse, while doctors want to see as much information as possible to ensure a correct diagnosis. One solution to this dilemma is the Accountable e-Health model, an access protocol model based on the Information Accountability Protocol. In this model, patients are warned when doctors access their restricted data, while authenticated doctors retain non-restrictive access. In this work we take FluxMED, an EHR system, and augment it with aspects of the Information Accountability Protocol to address these issues. The implementation of the Information Accountability Framework (IAF) in FluxMED gives both patients and physicians a way to have their privacy and access needs met. Storage and data security are handled by FluxMED, which contains mechanisms to ensure security and data integrity. The effort required to develop a platform for the management of medical information is mitigated by FluxMED's workflow-based architecture: the system is flexible enough to allow the type and amount of information to be altered without changes to its source code.
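The accountability pattern the abstract describes — doctors are never blocked, but patients are notified when restricted data is read — can be sketched in a few lines. This is a minimal illustration, not FluxMED's actual implementation; the record fields, names, and the in-memory notification list are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    patient: str
    data: dict                                     # field name -> value
    restricted: set = field(default_factory=set)   # fields the patient marked sensitive

notifications = []  # stand-in for the patient alert channel

def read_field(record, doctor, name):
    """Accountability-style access: never deny an authenticated doctor,
    but notify the patient whenever a restricted field is read."""
    if name in record.restricted:
        notifications.append(f"{record.patient}: {doctor} accessed '{name}'")
    return record.data[name]

rec = Record("alice",
             {"allergies": "penicillin", "psych_notes": "..."},
             restricted={"psych_notes"})
read_field(rec, "dr_bob", "allergies")    # unrestricted: no notification
read_field(rec, "dr_bob", "psych_notes")  # restricted: patient is notified
```

The key design choice, mirroring the Information Accountability Protocol, is that access control is replaced by access transparency: the read always succeeds, and the audit trail is the enforcement mechanism.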

Relevance: 80.00%

Abstract:

If the land sector is to make significant contributions to mitigating anthropogenic greenhouse gas (GHG) emissions in coming decades, it must do so while concurrently expanding production of food and fiber. In our view, mathematical modeling will be required to provide scientific guidance to meet this challenge. In order to be useful in GHG mitigation policy measures, models must simultaneously meet scientific, software engineering, and human capacity requirements. They can be used to understand GHG fluxes, to evaluate proposed GHG mitigation actions, and to predict and monitor the effects of specific actions; the latter applications require a change in mindset that has parallels with the shift from research modeling to decision support. We compare and contrast six agro-ecosystem models (FullCAM, DayCent, DNDC, APSIM, WNMM, and AgMod), chosen because they are used in Australian agriculture and forestry. Underlying structural similarities in the representations of carbon flows through plants and soils in these models are complemented by a diverse range of emphases and approaches to the subprocesses within the agro-ecosystem. None of these agro-ecosystem models handles all land sector GHG fluxes, and considerable model-based uncertainty exists for soil C fluxes and enteric methane emissions. The models also show diverse approaches to the initialisation of model simulations, software implementation, distribution, licensing, and software quality assurance; each of these will differentially affect their usefulness for policy-driven GHG mitigation prediction and monitoring. Specific requirements imposed on the use of models by Australian mitigation policy settings are discussed, and areas for further scientific development of agro-ecosystem models for use in GHG mitigation policy are proposed.

Relevance: 80.00%

Abstract:

Spontaneous emission (SE) of a quantum emitter depends mainly on the transition strength between the upper and lower energy levels and on the local density of states (LDOS) [1]. When a quantum dot (QD) is placed near a plasmonic waveguide, the LDOS of the QD is increased by the addition of a non-radiative decay channel and a plasmonic decay channel to the free-space emission [2-4]. The slow velocity and dramatic concentration of the electric field of the plasmon can capture the majority of the SE into the guided plasmon mode (Γpl). This paper studies the effect of waveguide height on the efficiency of coupling QD decay into the plasmon mode, using a numerical model based on the finite element method (FEM). The symmetric gap waveguide considered in this paper supports a single mode, and the QD is modeled as a dipole emitter. 2D simulations are used to find the normalized Γpl, and 3D models are used to find the probability β of SE decaying into the plasmon mode, including all three decay channels. We find that changing the gap height can increase QD-plasmon coupling by up to a factor of 5, and by up to a factor of 8 for an optimally placed QD. To make the study more realistic, we also briefly examine the effect of the sharpness of the waveguide edge on SE emission into the guided plasmon mode. Preliminary nano-gap waveguide fabrication and testing are already underway; we expect to compare the theoretical results with experimental outcomes in the future.
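The probability β mentioned in the abstract is conventionally the guided-plasmon decay rate as a fraction of the total over all three channels, β = Γpl / (Γpl + Γrad + Γnr). The helper below computes it; the numeric rates are purely illustrative, not values from the paper's simulations.

```python
def beta_factor(gamma_pl, gamma_rad, gamma_nr):
    """Probability that spontaneous emission goes into the guided plasmon
    mode, given the three decay channels (plasmonic, radiative,
    non-radiative), with rates normalized to the free-space rate."""
    return gamma_pl / (gamma_pl + gamma_rad + gamma_nr)

# Illustrative (not simulated) rates for a QD near a gap waveguide:
beta = beta_factor(gamma_pl=5.0, gamma_rad=1.0, gamma_nr=0.5)  # ~0.77
```

Because β is a ratio, a gap-height change that boosts Γpl by a factor of 5 raises β sub-linearly once the plasmonic channel already dominates, which is why the 2D normalized Γpl and the 3D β are reported separately.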

Relevance: 80.00%

Abstract:

Reduced economic circumstances have moved management goals towards higher profit, rather than maximum sustainable yields, in several Australian fisheries. The eastern king prawn is one such fishery, for which we have developed new methodology for stock dynamics, calculation of model-based and data-based reference points, and management strategy evaluation. The fishery is notable for the northward movement of prawns in eastern Australian waters, from the State jurisdiction of New South Wales to that of Queensland, as they grow to spawning size, so that vessels fishing in the northern deeper waters harvest more large prawns. Bioeconomic fishing data were standardized for calibrating a length-structured spatial operating model. Model simulations identified that reduced boat numbers and fishing effort could improve profitability while retaining viable fishing in each jurisdiction. Simulations also identified catch rate levels that were effective for monitoring in simple within-year effort-control rules. However, favourable performance of catch rate indicators was achieved only when a meaningful upper limit was placed on total allowed fishing effort. The methods and findings will allow improved measures for monitoring fisheries and inform decision makers on the uncertainty and assumptions affecting economic indicators.
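A "within-year effort-control rule with an upper limit on total effort", as the simulations evaluate, might look like the sketch below. The functional form, scaling factor, and parameter names are hypothetical placeholders, not the rule from the study; the one feature taken from the abstract is that the effort cap binds regardless of how favourable the catch rate indicator is.

```python
def allowed_effort(catch_rate, trigger_rate, base_effort, effort_cap):
    """Illustrative within-year effort-control rule: scale allowed effort
    with the observed catch rate relative to a trigger level, but never
    exceed the overall cap on total fishing effort (the condition the
    simulations found necessary for the indicator to perform well)."""
    scaled = base_effort * min(catch_rate / trigger_rate, 1.5)  # arbitrary 1.5x ceiling
    return min(scaled, effort_cap)

# Good catch rates raise effort, but only up to the cap:
allowed_effort(catch_rate=2.0, trigger_rate=1.0,
               base_effort=1000, effort_cap=1300)  # capped at 1300
```

Without the final `min` against `effort_cap`, a run of high catch rates would ratchet effort upward, which is exactly the failure mode the abstract reports for uncapped indicator-driven rules.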

Relevance: 80.00%

Abstract:

In this article we offer a single case study, using an action research method for gathering and analysing data, that offers insights valuable to both design and research supervision practice. We do not attempt to generalise from this single case, but offer it as an instance that can improve our understanding of research supervision practice. We question the conventional 'dyadic' models of research supervision and outline a more collaborative model, based on the signature pedagogy of architecture: the design studio. A novel approach to the supervision of creatively oriented post-graduate students is proposed, including new approaches to design methods and participatory supervision that draw on established design studio practices. This model collapses the distance between design and research activities. Our case study, involving Research Masters student supervision in the discipline of Architecture, shows how 'connected learning' emerges from this approach. This type of learning builds strong elements of creativity and fun, which promote and enhance student engagement. The results of our action research suggest that students learn to research more easily in such an environment, and that supervisory practices are enhanced when we apply the techniques and characteristics of design studio pedagogy to the more conventional research pedagogies imported from the humanities. We believe that other creative disciplines can apply similar tactics to enrich both the creative practice of research and the supervision of HDR students.

Relevance: 80.00%

Abstract:

The quality of species distribution models (SDMs) relies to a large degree on the quality of the input data, from bioclimatic indices to environmental and habitat descriptors (Austin, 2002). Recent reviews of SDM techniques have sought to optimize predictive performance (e.g. Elith et al., 2006). In general, SDMs employ one of three approaches to variable selection. The simplest approach relies on the expert to select the variables, as in environmental niche models (Nix, 1986) or a generalized linear model without variable selection (Miller and Franklin, 2002). A second approach explicitly incorporates variable selection into model fitting, which allows examination of particular combinations of variables. Examples include generalized linear or additive models with variable selection (Hastie et al., 2002), or classification trees with complexity-based or model-based pruning (Breiman et al., 1984; Zeileis, 2008). A third approach uses model averaging to summarize the overall contribution of a variable, without considering particular combinations. Examples include neural networks, boosted or bagged regression trees, and Maximum Entropy, as compared in Elith et al. (2006). Typically, users of SDMs will either consider a small number of variable sets, via the first approach, or else supply all of the candidate variables (often numbering more than a hundred) to the second or third approaches. Bayesian SDMs exist, with several methods for eliciting and encoding priors on model parameters (see the review in Low Choy et al., 2010). However, few methods have been published for informative variable selection; one example is Bayesian trees (O'Leary, 2008). Here we report an elicitation protocol that helps make explicit a priori expert judgements on the quality of candidate variables. This protocol can be applied flexibly to any of the three approaches to variable selection described above, Bayesian or otherwise.
We demonstrate how this information can be obtained and then used to guide variable selection in classical or machine learning SDMs, or to define priors within Bayesian SDMs.
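One simple way elicited quality judgements can feed both classical and Bayesian variable selection is to map each score to a prior inclusion probability, then either threshold it (classical screening) or pass it to a Bayesian selection method. The linear score-to-probability map and the variable names below are placeholders; the abstract's elicitation protocol would determine the actual encoding.

```python
def inclusion_priors(quality_scores):
    """Map elicited variable-quality scores (0-10) to prior inclusion
    probabilities. A linear map is a placeholder encoding; any monotone
    map agreed with the expert would serve."""
    return {var: score / 10.0 for var, score in quality_scores.items()}

def screen(priors, threshold):
    """Classical use of the same elicitation: keep only variables whose
    prior inclusion probability meets the threshold."""
    return sorted(v for v, p in priors.items() if p >= threshold)

# Hypothetical elicited scores for three candidate predictors:
priors = inclusion_priors({"annual_rainfall": 9, "soil_ph": 6, "slope": 2})
kept = screen(priors, threshold=0.5)  # drops 'slope'
```

The same `priors` dictionary could instead parameterize spike-and-slab or Bayesian-tree inclusion probabilities, which is the sense in which one protocol serves all three selection approaches.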

Relevance: 80.00%

Abstract:

We followed by X-ray photoelectron spectroscopy (XPS) the time evolution of graphene layers obtained by annealing 3C-SiC(111)/Si(111) crystals at different temperatures. The intensity of the carbon signal provides a quantification of the graphene thickness as a function of the annealing time, which follows a power law with exponent 0.5. We show that a kinetic model, based on a bottom-up growth mechanism, provides a full explanation of the evolution of the graphene thickness as a function of time, allowing us to calculate the effective activation energy of the process and the energy barriers, in excellent agreement with previous theoretical results. Our study provides a complete and exhaustive picture of Si diffusion into the SiC matrix, establishing the conditions for a perfect control of the graphene growth by Si sublimation.
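The growth law the abstract describes — thickness scaling as t^0.5 with a thermally activated rate — can be written as d(t) = A·exp(−Ea/kB·T)·t^0.5. The sketch below evaluates that form; the prefactor and activation energy are illustrative fit parameters, not the values extracted in the study.

```python
import math

def graphene_thickness(t, temp_K, prefactor, Ea_eV):
    """Diffusion-limited (bottom-up) growth: thickness follows a t^0.5
    power law with an Arrhenius-activated rate constant.
    prefactor and Ea_eV are hypothetical fit parameters."""
    kB = 8.617e-5  # Boltzmann constant in eV/K
    rate = prefactor * math.exp(-Ea_eV / (kB * temp_K))
    return rate * math.sqrt(t)
```

Two checks follow directly from the form: quadrupling the annealing time doubles the thickness (the exponent-0.5 signature XPS would see), and a higher annealing temperature increases thickness at fixed time, which is how fits at several temperatures yield the effective activation energy.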