884 results for Inference module
Abstract:
Background: A shift toward a rehabilitative model of care has prompted the Newfoundland and Labrador Youth Centre to institute a policy restricting seclusion and restraint as means of behavioural management. This policy has been received with skepticism by youth counsellors, who have relied on these methods to contain disruptive behaviours. Insufficient training in mental health has left them feeling inadequate and ill-equipped to do their jobs. Purpose: The purpose of my practicum is to develop a mental health learning module for youth counsellors to reduce seclusion and restraint in youth corrections. Methods: A literature search established what is known about seclusion and restraint in youth corrections. Consultation with stakeholders revealed staff attitudes toward the policy and its operational impact. An environmental scan identified other available resources intended to address disruptive behaviours. Conclusion: The learning module focuses on mental illnesses to increase youth counsellors’ competency in managing disruptive behaviours while minimizing the use of seclusion and restraint.
Abstract:
Background: Newfoundland and Labrador has a high incidence of type 1 diabetes, and diabetic ketoacidosis (DKA) is a complication of type 1 diabetes. A clinical practice guideline was developed for the treatment of pediatric DKA to standardize care in all Emergency Departments and improve patient outcomes. Rural emergency nurses are required to maintain their competency and acquire new knowledge, as stated by the Association of Registered Nurses of Newfoundland and Labrador (ARNNL). Purpose: The purpose of this practicum was to develop a self-learning module for rural emergency nurses to increase their knowledge and understanding of the clinical practice guideline to assess, treat, and prevent pediatric DKA. Methods: Two methodologies were used in this practicum: a review of the literature and consultations with key stakeholders. Results: The self-learning module was composed of three units and focused on the learning needs of rural emergency nurses in the areas of assessment, treatment, and prevention of pediatric DKA. Conclusion: The goal of the practicum was to increase rural emergency nurses’ knowledge and implementation of the clinical practice guideline when assessing and treating children and families experiencing DKA, in order to improve patient outcomes. A planned evaluation of the self-learning module will be conducted following its dissemination throughout the rural Emergency Departments.
Abstract:
Funding — Forest Enterprise Scotland and the University of Aberdeen provided funding for the project. The Carnegie Trust supported the lead author, E. McHenry, in this research through the award of a tuition fees bursary.
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, even though uncertainty quantification remains essential in the sciences, where the number of parameters to estimate often exceeds the sample size despite the huge increases in n typically seen in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n=all" is therefore of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first case, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
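For reference, the PARAFAC (latent class) factorization referred to above represents the joint probability mass function of categorical variables $X_1, \dots, X_p$ as a finite mixture of product kernels; the notation below is the standard one, not necessarily that of Chapter 2:

\[
\Pr(X_1 = x_1, \dots, X_p = x_p) = \sum_{h=1}^{k} \nu_h \prod_{j=1}^{p} \lambda^{(j)}_{h x_j},
\qquad \nu_h \ge 0, \quad \sum_{h=1}^{k} \nu_h = 1,
\]

where $k$ bounds the nonnegative rank of the probability tensor and each $\lambda^{(j)}_{h\cdot}$ is a probability vector for variable $j$ within latent class $h$. Tucker decompositions generalize this structure by allowing a separate number of latent classes per variable, which is the gap the collapsed Tucker class bridges.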
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
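As background (a standard fact, stated here for orientation rather than as the chapter's exact construction): among Gaussian distributions, the minimizer of the Kullback-Leibler divergence from a posterior $\pi$ is obtained by moment matching,

\[
\hat q = \operatorname*{arg\,min}_{q = \mathcal N(\mu, \Sigma)} \mathrm{KL}(\pi \,\|\, q)
= \mathcal N\big(\mathbb E_\pi[\theta], \operatorname{Cov}_\pi(\theta)\big),
\]

so bounding the divergence between the exact posterior and this optimal Gaussian, as Chapter 4 does for Diaconis--Ylvisaker priors, controls the error of the best possible Gaussian summary.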
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
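For context, the classical static notion of tail dependence between two variables with marginal distribution functions $F_1, F_2$ is the coefficient

\[
\chi = \lim_{u \to 1^-} \Pr\big(F_2(X_2) > u \mid F_1(X_1) > u\big),
\]

with $\chi > 0$ indicating asymptotic dependence. The waiting-time definition proposed in Chapter 5 replaces this marginal limit with a functional of the distribution of times between threshold exceedances, so that temporal structure is retained rather than discarded by clustering.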
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
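A minimal sketch of the first kind of approximation, an MCMC step whose likelihood is evaluated on a random subset of the data, is given below in Python. The Gaussian model, the n/m rescaling, and all names are illustrative assumptions, not the framework of Chapter 6, which instead quantifies how such kernel error trades off against the computational budget:

    import numpy as np

    def subset_mh_step(theta, data, m, rng, step=0.1):
        """One Metropolis step targeting an approximate posterior in which the
        N(theta, 1) log-likelihood is evaluated on a random subset of size m
        and rescaled by n/m; the prior is N(0, 10^2). Illustrative only."""
        n = len(data)
        batch = data[rng.choice(n, size=m, replace=False)]

        def approx_log_post(t):
            loglik = -0.5 * np.sum((batch - t) ** 2) * (n / m)
            return loglik - 0.5 * (t / 10.0) ** 2

        prop = theta + step * rng.standard_normal()    # random-walk proposal
        if np.log(rng.uniform()) < approx_log_post(prop) - approx_log_post(theta):
            return prop
        return theta

    rng = np.random.default_rng(0)
    data = rng.normal(2.0, 1.0, size=100_000)
    theta = 0.0
    for _ in range(1000):
        theta = subset_mh_step(theta, data, m=1_000, rng=rng)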
Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
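For reference, the Polya-Gamma sampler of Polson, Scott and Windle for logistic regression with prior $\beta \sim \mathcal N(b, B)$, the standard logit-link construction analyzed here, alternates the conditional updates

\[
\omega_i \mid \beta \sim \mathrm{PG}\big(1,\, x_i^{\top} \beta\big), \qquad
\beta \mid y, \omega \sim \mathcal N(m_\omega, V_\omega),
\]
\[
V_\omega = \big(X^{\top} \Omega X + B^{-1}\big)^{-1}, \qquad
m_\omega = V_\omega \big(X^{\top} \kappa + B^{-1} b\big), \qquad
\kappa_i = y_i - \tfrac{1}{2},
\]

with $\Omega = \operatorname{diag}(\omega)$; the slow-mixing result above concerns exactly this alternation in the rare-event regime.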
Abstract:
Advances in three related areas, state-space modeling, sequential Bayesian learning, and decision analysis, are addressed, together with the statistical challenges of scalability and associated dynamic sparsity. The key theme tying the three areas together is Bayesian model emulation: solving challenging analytical and computational problems using creative model emulators. This idea defines theoretical and applied advances in non-linear, non-Gaussian state-space modeling, dynamic sparsity, decision analysis and statistical computation, across linked contexts of multivariate time series and dynamic network studies. Examples and applications in financial time series and portfolio analysis, macroeconomics, and internet studies from computational advertising demonstrate the utility of the core methodological innovations.
Chapter 1 summarizes the three areas/problems and the key idea of emulation in those areas. Chapter 2 discusses the sequential analysis of latent threshold models using emulating models that allow analytical filtering to enhance the efficiency of posterior sampling. Chapter 3 examines the emulator, or synthetic, model in decision analysis, which is equivalent to the loss function in the original minimization problem, and shows its performance in the context of sequential portfolio optimization. Chapter 4 describes a method for modeling streaming count data observed on a large network that relies on emulating the whole, dependent network model by independent, conjugate sub-models customized to each set of flows. Chapter 5 reviews these advances and makes concluding remarks.
Abstract:
Secure Access For Everyone (SAFE) is an integrated system for managing trust
using a logic-based declarative language. Logical trust systems authorize each
request by constructing a proof from a context---a set of authenticated logic
statements representing credentials and policies issued by various principals
in a networked system. A key barrier to practical use of logical trust systems
is the problem of managing proof contexts: identifying, validating, and
assembling the credentials and policies that are relevant to each trust
decision.
SAFE addresses this challenge by (i) proposing a distributed authenticated data
repository for storing the credentials and policies; (ii) introducing a
programmable credential discovery and assembly layer that generates the
appropriate tailored context for a given request. The authenticated data
repository is built upon a scalable key-value store with its contents named by
secure identifiers and certified by the issuing principal. The SAFE language
provides scripting primitives to generate and organize logic sets representing
credentials and policies, materialize the logic sets as certificates, and link
them to reflect delegation patterns in the application. The authorizer fetches
the logic sets on demand, then validates and caches them locally for further
use. Upon each request, the authorizer constructs the tailored proof context
and provides it to the SAFE inference engine for certified validation.
Delegation-driven credential linking with certified data distribution provides
flexible and dynamic policy control enabling security and trust infrastructure
to be agile, while addressing the perennial problems related to today's
certificate infrastructure: automated credential discovery, scalable
revocation, and issuing credentials without relying on centralized authority.
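As a purely illustrative example (hypothetical Datalog-style syntax, not actual SAFE notation), a delegation pattern of the kind linked above might be expressed as:

    % alice, the owner of objO, grants access to bob and honours re-delegation.
    owner(alice, objO).
    grant(alice, bob, objO).
    grant(P, Q, Obj) :- grant(P, R, Obj), delegates(R, Q, Obj).

    % the authorizer proves access(Q, Obj) from the assembled proof context.
    access(Q, Obj) :- owner(P, Obj), grant(P, Q, Obj).

The authorizer would admit such statements into the proof context only after validating the certificates that carry them.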
We envision SAFE as a new foundation for building secure network systems. We
used SAFE to build secure services based on case studies drawn from practice:
(i) a secure name service resolver similar to DNS that resolves a name across
multi-domain federated systems; (ii) a secure proxy shim to delegate access
control decisions in a key-value store; (iii) an authorization module for a
networked infrastructure-as-a-service system with a federated trust structure
(NSF GENI initiative); and (iv) a secure cooperative data analytics service
that adheres to individual secrecy constraints while disclosing the data. We
present empirical evaluation based on these case studies and demonstrate that
SAFE supports a wide range of applications with low overhead.
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
Navigation devices used to be bulky and expensive and were not widely commercialized for personal use. Nowadays, useful electronic devices are becoming handheld so that they can be used conveniently anytime and anywhere. Almost any mobile phone in use today has strong navigational capabilities that work efficiently anywhere on the globe: no matter where you are, you can easily determine your exact location and make your way smoothly to wherever you would like to go. This would not have been possible without efficient and small microwave circuits responsible for the transmission and reception of high-quality navigation signals. This thesis is mainly concerned with the design of novel, highly miniaturized and efficient filtering components working in the Global Navigation Satellite Systems (GNSS) frequency band, to be integrated within an efficient Radio Frequency (RF) front-end module (FEM). A System-on-Package (SoP) integration technique is adopted for the design of all the components in this thesis. Two novel miniaturized filters are designed. The first is a wideband filter targeting the complete GNSS band, with a fractional bandwidth of almost 50% at a center frequency of 1.385 GHz; it utilizes a direct inductive coupling topology to achieve the required wideband performance and has very good out-of-band rejection and low insertion loss (IL). The second is a dual-band filter covering only the lower and upper GNSS bands, with a rejection notch between the two bands and very good inter-band rejection. The well-known “divide and conquer” design methodology was applied to this filter to save valuable design and optimization time. Moreover, the performance of two commercially available ultra-low-noise amplifiers (LNAs) is studied. The complete RF FEM showed promising preliminary performance in terms of noise figure, gain and bandwidth, outperforming other commercial front-ends in these three aspects. All the designed circuits are fabricated and tested, and the measured results are found to be in good agreement with the simulations.
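For scale, fractional bandwidth is defined as

\[
\mathrm{FBW} = \frac{f_H - f_L}{f_0},
\]

so a roughly 50% fractional bandwidth at $f_0 = 1.385$ GHz corresponds to band edges near $f_L \approx 1.04$ GHz and $f_H \approx 1.73$ GHz, comfortably spanning the lower (about 1.16-1.30 GHz) and upper (about 1.56-1.61 GHz) GNSS bands. (These band edges are rounded, back-of-the-envelope figures, not measured values from the thesis.)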
Abstract:
The penetration of the electric vehicle (EV) has increased rapidly in recent years, mainly as a consequence of advances in transport technology and power electronics and in response to global pressure to reduce carbon emissions and limit fossil fuel consumption. It is widely acknowledged that inappropriate provision and dispatch of EV charging can lead to negative impacts on power system infrastructure. This paper considers EV requirements and proposes a module which uses owner participation, through mobile phone apps and on-board diagnostics II (OBD-II), for scheduled vehicle charging. A multi-EV reference and single-EV real-time response (MRS2R) online algorithm is proposed to calculate the maximum and minimum adjustable limits of necessary capacity, which forms part of decision-making support in power system dispatch. The proposed EV dispatch module is evaluated in a case study, and the influence of the mobile app, EV dispatch trending and commercial impact are explored.
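The abstract does not specify the MRS2R computation, but the kind of maximum and minimum adjustable limits it reports can be sketched with an illustrative (hypothetical) fleet aggregation in Python, where each EV's minimum rate is the one that still meets its energy need by its owner-declared departure time:

    from dataclasses import dataclass

    @dataclass
    class EV:
        max_rate_kw: float         # charger/vehicle power limit
        energy_needed_kwh: float   # energy still required before departure
        hours_to_departure: float  # plug-out time declared via the app/OBD-II

    def adjustable_limits(fleet):
        """Return (min_kw, max_kw) the dispatcher can draw on right now."""
        min_kw = max_kw = 0.0
        for ev in fleet:
            max_kw += ev.max_rate_kw
            # minimum rate that still meets the energy target by departure
            required = ev.energy_needed_kwh / max(ev.hours_to_departure, 1e-9)
            min_kw += min(required, ev.max_rate_kw)
        return min_kw, max_kw

    fleet = [EV(7.2, 20.0, 8.0), EV(11.0, 5.0, 2.0), EV(3.6, 10.0, 12.0)]
    print(adjustable_limits(fleet))   # about (5.83, 21.8): the adjustable band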
Abstract:
The introduction of a poster presentation as a formative assessment method, in place of a multiple choice examination, after the first phase of a three-phase “health and well-being” module in an undergraduate nursing degree programme was greeted with a storm of criticism from fellow lecturers, who argued that poster presentations are neither valid nor reliable and are totally irrelevant to the assessment of learning in the module. This paper examines these criticisms through a review of the literature on producing nurses fit for practice, nurse curriculum development and wider nurse education, and the purpose, validity and reliability of assessment, in order to critically evaluate the poster presentation as a legitimate assessment method for these aims.
Abstract:
Background
It is generally acknowledged that a functional understanding of a biological system can only be obtained through an understanding of the collective of molecular interactions in the form of biological networks. Protein networks are one network type of special importance, because proteins form the functional base units of every biological cell. On the mesoscopic level of protein networks, modules are of significant importance because these building blocks may be the next elementary functional level above individual proteins, allowing insight to be gained into fundamental organizational principles of biological cells.
Results
In this paper, we provide a comparative analysis of five popular and four novel module detection algorithms. We study these module prediction methods on simulated benchmark networks as well as 10 biological protein interaction networks (PINs). A particular focus of our analysis is the biological meaning of the predicted modules, assessed using the Gene Ontology (GO) database as a gold standard for the definition of biological processes. Furthermore, we investigate the robustness of the results by perturbing the PINs, simulating in this way our incomplete knowledge of protein networks.
Conclusions
Overall, our study reveals that there is large heterogeneity among the different module prediction algorithms if one zooms in on the level of biological processes in the form of GO terms, and that all methods are severely affected by slight perturbations of the networks. However, we also find pathways that are enriched in multiple modules, which could provide important information about the hierarchical organization of the system.
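As a concrete instance of the kind of module prediction compared here (illustrative only; the abstract does not name the nine algorithms), modularity-based community detection on a toy protein interaction graph can be run with networkx:

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # Toy stand-in for a protein interaction network (PIN): nodes are proteins,
    # edges are interactions; a real PIN would be loaded from a database.
    G = nx.Graph([("A", "B"), ("B", "C"), ("A", "C"), ("C", "D"),
                  ("D", "E"), ("E", "F"), ("D", "F")])

    # Greedy modularity maximization partitions the proteins into modules.
    modules = greedy_modularity_communities(G)
    print([sorted(m) for m in modules])  # e.g. [['A', 'B', 'C'], ['D', 'E', 'F']]

Predicted modules would then be tested for GO-term enrichment to assess their biological meaning, as in the analysis above.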
Abstract:
The UFHRD Programme and Qualification Activities Committee awards an annual prize for the best contribution to the UFHRD Teaching & Learning Resource bank. Our teaching and learning resource is an overview of the placement module, including career coaching, that was created to enhance student employability.
Abstract:
Abstract not available