957 results for multi-factor
Abstract:
Hospital disaster resilience can be defined as “the ability of hospitals to resist, absorb, and respond to the shock of disasters while maintaining and surging essential health services, and then to recover to its original state or adapt to a new one.” This article aims to provide a framework which can be used to comprehensively measure hospital disaster resilience. An evaluation framework for assessing hospital resilience was initially proposed through a systematic literature review and Modified-Delphi consultation. Eight key domains were identified: hospital safety, command, communication and cooperation system, disaster plan, resource stockpile, staff capability, disaster training and drills, emergency services and surge capability, and recovery and adaptation. The data for this study were collected from 41 tertiary hospitals in Shandong Province in China, using a specially designed questionnaire. Factor analysis was conducted to determine the underpinning structure of the framework. It identified a four-factor structure of hospital resilience, namely, emergency medical response capability (F1), disaster management mechanisms (F2), hospital infrastructural safety (F3), and disaster resources (F4). These factors displayed good internal consistency. The overall level of hospital disaster resilience (F) was calculated using the scoring model: F = 0.615F1 + 0.202F2 + 0.103F3 + 0.080F4. This validated framework provides a new way to operationalise the concept of hospital resilience, and it is also a foundation for the further development of the measurement instrument in future studies.
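The scoring model above is a plain weighted sum of the four factor scores. As a minimal illustrative sketch (the hospital data and the assumption that each factor score is normalised to [0, 1] are hypothetical; only the weights come from the abstract), it could be applied as:

```python
# Weights from the published scoring model F = 0.615*F1 + 0.202*F2 + 0.103*F3 + 0.080*F4
WEIGHTS = {"F1": 0.615, "F2": 0.202, "F3": 0.103, "F4": 0.080}

def resilience_score(factor_scores):
    """Overall hospital disaster resilience F as the weighted sum of the four factors."""
    return sum(WEIGHTS[f] * factor_scores[f] for f in WEIGHTS)

# Hypothetical hospital with normalised factor scores in [0, 1]
example = {"F1": 0.8, "F2": 0.6, "F3": 0.9, "F4": 0.5}
print(round(resilience_score(example), 3))  # 0.746
```

Note that the weights sum to 1.0, so a hospital scoring 1.0 on every factor attains the maximum overall score of 1.0.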
Abstract:
This paper treats one particular version of the multi-utility strategy as experienced by the Hyder Group. We examine some aspects of the company's financial performance and consider the implications.
Abstract:
The use of Wireless Sensor Networks (WSNs) for vibration-based Structural Health Monitoring (SHM) has become a promising approach due to advantages such as low cost and fast, flexible deployment. However, inherent technical issues such as data asynchronicity and data loss have prevented these systems from being extensively used. Recently, several SHM-oriented WSNs have been proposed and are believed to overcome a large number of technical uncertainties. Nevertheless, there is limited research verifying the applicability of those WSNs to demanding SHM applications such as modal analysis and damage identification. Based on a brief review, this paper first identifies Data Synchronization Error (DSE) as the most significant inherent uncertainty of SHM-oriented WSNs. The effects of this factor are then investigated on the outcomes and performance of the most robust Output-only Modal Analysis (OMA) techniques when merging data from multiple sensor setups. The two OMA families selected for this investigation are Frequency Domain Decomposition (FDD) and data-driven Stochastic Subspace Identification (SSI-data), as both have been widely applied in the past decade. Accelerations collected by a wired sensory system on a large-scale laboratory bridge model are used as benchmark data after a certain level of noise is added, to account for the higher presence of noise in SHM-oriented WSNs. From this source, a large number of simulations were run to generate multiple DSE-corrupted datasets to facilitate statistical analyses. The results of this study show the robustness of FDD, and the precautions needed for the SSI-data family, when dealing with DSE at a relaxed level. Finally, the combination of preferred OMA techniques and the use of channel projection for the time-domain OMA technique are recommended to cope with DSE.
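To make the notion of DSE concrete, here is a minimal sketch (the single-frequency synthetic signals, sampling rate, and offset are illustrative assumptions, not the paper's benchmark data) that injects a fixed synchronization offset between two acceleration channels and recovers it by cross-correlation:

```python
import math

fs = 100.0   # sampling rate in Hz (assumed)
f = 5.0      # dominant modal frequency in Hz (assumed)
n = 200      # samples per channel
dse = 3      # injected synchronization error, in samples

# Reference channel and a second channel delayed by `dse` samples
ref = [math.sin(2 * math.pi * f * t / fs) for t in range(n)]
delayed = [math.sin(2 * math.pi * f * (t - dse) / fs) for t in range(n)]

def xcorr_lag(a, b, max_lag):
    """Lag (in samples) that maximises the cross-correlation of a and b."""
    best, best_lag = float("-inf"), 0
    for lag in range(-max_lag, max_lag + 1):
        s = sum(a[i] * b[i + lag] for i in range(max_lag, len(a) - max_lag))
        if s > best:
            best, best_lag = s, lag
    return best_lag

print(xcorr_lag(ref, delayed, 10))  # recovers the 3-sample offset
```

In a real WSN the offset is fractional, time-varying, and buried in noise, which is precisely why its effect on FDD and SSI-data merits the statistical treatment described above.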
Abstract:
The aim of this study was to evaluate the factor structure of the Baby Eating Behaviour Questionnaire (BEBQ) in an Australian community sample of mother-infant dyads. A secondary aim was to explore the relationship between the BEBQ subscales and infant gender, weight and current feeding mode. Confirmatory factor analysis (CFA) utilising structural equation modelling examined the hypothesised 4-factor model of the BEBQ. Only mothers (N=467) who completed all items on the BEBQ (infant age: M=17 weeks, SD=3 weeks) were included in the analysis. The original 4-factor model did not provide an acceptable fit to the data due to poor performance of the Satiety responsiveness factor. Removal of this factor (3 items) resulted in a well-fitting 3-factor model. Cronbach’s α was acceptable for the Enjoyment of food (α=0.73), Food responsiveness (α=0.78) and Slowness in eating (α=0.68) subscales but low for the Satiety responsiveness (α=0.56) subscale. Enjoyment of food was associated with higher infant weight whereas Slowness in eating and Satiety responsiveness were both associated with lower infant weight. Differences in all four subscales as a function of feeding mode were observed. This study is the first to use CFA to evaluate the hypothesised factor structure of the BEBQ. Findings support further development work on the Satiety responsiveness subscale in particular, but confirm the utility of the Enjoyment of food, Food responsiveness and Slowness in eating subscales.
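Cronbach's α, the internal-consistency statistic reported above, is α = k/(k−1) · (1 − Σ var(item) / var(total)). A minimal sketch with hypothetical item responses (not BEBQ data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha; items is one list of respondent scores per questionnaire item."""
    k = len(items)
    n_resp = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Total score per respondent across all items
    totals = [sum(item[j] for item in items) for j in range(n_resp)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical 3-item subscale answered by 5 respondents
items = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [3, 3, 4, 2, 5],
]
print(round(cronbach_alpha(items), 2))  # 0.87
```

Perfectly correlated items yield α = 1, and the conventional 0.7 threshold for "acceptable" is what makes α=0.56 for the Satiety responsiveness subscale a concern.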
Abstract:
This paper presents a novel place recognition algorithm inspired by the recent discovery of overlapping and multi-scale spatial maps in the rodent brain. We mimic this hierarchical framework by training arrays of Support Vector Machines to recognize places at multiple spatial scales. Place match hypotheses are then cross-validated across all spatial scales, a process which combines the spatial specificity of the finest spatial map with the consensus provided by broader mapping scales. Experiments on three real-world datasets including a large robotics benchmark demonstrate that mapping over multiple scales uniformly improves place recognition performance over a single scale approach without sacrificing localization accuracy. We present analysis that illustrates how matching over multiple scales leads to better place recognition performance and discuss several promising areas for future investigation.
Abstract:
In this paper we introduce a new technique to obtain the slow-motion dynamics in nonequilibrium and singularly perturbed problems characterized by multiple scales. Our method is based on a straightforward asymptotic reduction of the order of the governing differential equation and leads to amplitude equations that describe the slowly-varying envelope of a uniformly valid asymptotic expansion. Because of its relation to the Renormalization Group, this may constitute a simpler and, in certain cases, more general approach to the derivation of asymptotic expansions than other mainstream methods such as the method of Multiple Scales or Matched Asymptotic Expansions. We illustrate our method with a number of singularly perturbed problems for ordinary and partial differential equations and recover certain results from the literature as special cases. © 2010 - IOS Press and the authors. All rights reserved.
Abstract:
A dynamic accumulator is an algorithm that merges a large set of elements into a constant-size value such that, for each accumulated element, there is a witness confirming that the element was included in the value, with the property that elements can be dynamically added to and deleted from the original set. Recently, Wang et al. presented a dynamic accumulator for batch updates at ICICS 2007. However, their construction suffers from two serious problems. We analyze them and propose a way to repair their scheme. We then use the accumulator to construct a new scheme for common secure indices with conjunctive keyword-based retrieval.
Abstract:
We report on the comparative study of magnetotransport properties of large-area vertical few-layer graphene networks with different morphologies, measured in a strong (up to 10 T) magnetic field over a wide temperature range. The petal-like and tree-like graphene networks grown by a plasma enhanced CVD process on a thin (500 nm) silicon oxide layer supported by a silicon wafer demonstrate a significant difference in the resistance-magnetic field dependencies at temperatures ranging from 2 to 200 K. This behaviour is explained in terms of the effect of electron scattering at ultra-long reactive edges and ultra-dense boundaries of the graphene nanowalls. Our results pave the way towards three-dimensional vertical graphene-based magnetoelectronic nanodevices with morphology-tuneable anisotropic magnetic properties. © The Royal Society of Chemistry 2013.
Abstract:
Hepatocellular carcinoma (HCC) is one of the primary hepatic malignancies and is the third most common cause of cancer-related death worldwide. Although a wealth of knowledge has been gained concerning the initiation and progression of HCC over the last half century, efforts to improve our understanding of its pathogenesis at a molecular level are still greatly needed, to enable clinicians to enhance the standards of the current diagnosis and treatment of HCC. In the post-genome era, advanced mass-spectrometry-driven multi-omics technologies (e.g., profiling of DNA damage adducts, RNA modification profiling, proteomics, and metabolomics) stand at the interface between chemistry and biology, and have yielded valuable outcomes from the study of a diversity of complicated diseases. In particular, these technologies are being broadly used to dissect various biological aspects of HCC for the purposes of biomarker discovery, interrogation of pathogenesis, and therapeutic discovery. This knowledge-based critical review aims to explore selected applications of these omics technologies in the HCC niche, with an emphasis on translational applications driven by advanced mass spectrometry toward specific clinical use for HCC patients. This approach will enable the biomedical community, through both basic research and the clinical sciences, to enhance the applicability of mass spectrometry-based omics technologies in dissecting the pathogenesis of HCC, and could lead to novel therapeutic discoveries for HCC.
Abstract:
This paper describes research investigating expertise and the types of knowledge used by airport security screeners. It applies a multi-method approach incorporating eye tracking, concurrent verbal protocol and interviews. Results show that novice and expert security screeners primarily access perceptual knowledge and experience little difficulty during routine situations. During non-routine situations, however, experience was found to be a determining factor for effective interactions and problem solving. Experts were found to use strategic knowledge and demonstrated structured use of interface functions integrated into efficient problem solving sequences. Comparatively, novices experienced more knowledge limitations and uncertainty, resulting in interaction breakdowns characterised by trial-and-error interaction sequences. This research suggests that the quality of knowledge security screeners have access to has implications for visual and physical interface interactions and their integration into problem solving sequences. Implications and recommendations for the design of interfaces used in the airport security screening context are discussed. These recommendations aim to improve the integration of interactions into problem solving sequences, encourage development of problem scheme knowledge, and support the skills and knowledge of the personnel who interact with security screening systems.
Abstract:
Recently, a variety of high-aspect-ratio nanostructures have been grown and profiled for various applications ranging from field emission transistors to gene/drug delivery devices. However, fabricating and processing arrays of these structures, and determining how changing certain physical parameters affects the final outcome, is quite challenging. We have developed several modules that can be used to simulate the processes of various physical vapour deposition systems, from precursor interaction in the gas phase to gas-surface interactions and surface processes. In this paper, multi-scale hybrid numerical simulations are used to study how low-temperature non-equilibrium plasmas can be employed in the processing of high-aspect-ratio structures such that the resulting nanostructures have properties suitable for their eventual device application. We show that whilst using plasma techniques is beneficial in many nanofabrication processes, it is especially useful in making dense arrays of high-aspect-ratio nanostructures.
Abstract:
Multi-party key agreement protocols implicitly assume that each principal contributes equally to the final form of the key. In this paper we consider three malleability attacks on multi-party key agreement protocols. The first attack, called strong key control, allows a dishonest principal (or a group of principals) to fix the key to a pre-set value. The second attack is weak key control, in which the key is still random but the set from which the key is drawn is much smaller than expected. The third attack is selective key control, in which a dishonest principal (or a group of dishonest principals) is able to remove the contribution of honest principals to the group key. The paper discusses these three attacks on several key agreement protocols, including DH (Diffie-Hellman), BD (Burmester-Desmedt) and JV (Just-Vaudenay). We show that dishonest principals in all three protocols can weakly control the key, and that the only protocol which does not allow strong key control is the DH protocol. The BD and JV protocols permit any pair of neighboring principals to modify the group key; this modification remains undetected by honest principals.
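For reference, the honest run of the BD protocol is short enough to sketch. This toy sketch (a deliberately tiny prime and base, usable only for illustration; real deployments use cryptographically sized groups) shows each principal deriving the same group key from the two broadcast rounds; the attacks themselves are not reproduced here. Note that `pow(x, -1, p)` requires Python 3.8+.

```python
import random

# Toy Burmester-Desmedt (BD) key agreement; p is far too small for real use.
p = 1019   # small prime (illustrative assumption)
g = 2      # public base element
n = 4      # number of principals

# Round 1: each principal P_i picks a secret r_i and broadcasts z_i = g^r_i
r = [random.randrange(2, p - 1) for _ in range(n)]
z = [pow(g, ri, p) for ri in r]

# Round 2: each P_i broadcasts X_i = (z_{i+1} / z_{i-1})^{r_i}
X = [pow(z[(i + 1) % n] * pow(z[(i - 1) % n], -1, p) % p, r[i], p)
     for i in range(n)]

def bd_key(i):
    """Group key as computed locally by principal P_i."""
    k = pow(z[(i - 1) % n], n * r[i], p)
    for j in range(1, n):
        k = k * pow(X[(i + j - 1) % n], n - j, p) % p
    return k

keys = [bd_key(i) for i in range(n)]
assert len(set(keys)) == 1  # every principal derives the same group key
```

The shared key equals g raised to the sum of products of neighbouring secrets, which is exactly the structure that lets a colluding pair of neighbours shift the key undetected.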
Abstract:
We present new evidence for sector collapses of the South Soufrière Hills (SSH) edifice, Montserrat during the mid-Pleistocene. High-resolution geophysical data provide evidence for sector collapse, producing an approximately 1 km³ submarine collapse deposit to the south of SSH. Sedimentological and geochemical analyses of submarine deposits sampled by sediment cores suggest that they were formed by large multi-stage flank failures of the subaerial SSH edifice into the sea. This work identifies two distinct geochemical suites within the SSH succession on the basis of trace-element and Pb-isotope compositions. Volcaniclastic turbidites in the cores preserve these chemically heterogeneous rock suites. However, the subaerial chemostratigraphy is reversed within the submarine sediment cores. Sedimentological analysis suggests that the edifice failures produced high-concentration turbidites and that the collapses occurred in multiple stages, with an interval of at least 2 ka between the first and second failure. Detailed field and petrographical observations, coupled with SEM image analysis, show that the SSH volcanic products preserve a complex record of magmatic activity. This activity consisted of episodic explosive eruptions of andesitic pumice, probably triggered by mafic magmatic pulses and followed by eruptions of poorly vesiculated basaltic scoria, and basaltic lava flows.
Abstract:
We study the natural problem of secure n-party computation (in the passive, computationally unbounded attack model) of the n-product function f_G(x_1, ..., x_n) = x_1 · x_2 ⋯ x_n in an arbitrary finite group (G, ·), where the input of party P_i is x_i ∈ G for i = 1, ..., n. For flexibility, we are interested in protocols for f_G which require only black-box access to the group G (i.e. the only computations performed by players in the protocol are a group operation, a group inverse, or sampling a uniformly random group element). Our results are as follows. First, on the negative side, we show that if (G, ·) is non-abelian and n ≥ 4, then no ⌈n/2⌉-private protocol for computing f_G exists. Second, on the positive side, we initiate an approach for the construction of black-box protocols for f_G based on k-of-k threshold secret sharing schemes, which are efficiently implementable over any black-box group G. We reduce the problem of constructing such protocols to a combinatorial colouring problem in planar graphs. We then give two constructions for such graph colourings. Our first colouring construction gives a protocol with optimal collusion resistance t < n/2, but has exponential communication complexity of O(n · C(2t+1, t)² / t) group elements, where C(·, ·) denotes the binomial coefficient (this construction easily extends to general adversary structures). Our second, probabilistic colouring construction gives a protocol with (close to optimal) collusion resistance t < n/μ for a graph-related constant μ ≤ 2.948, and has efficient communication complexity of O(n · t²) group elements. Furthermore, we believe that our results can be improved by further study of the associated combinatorial problems.
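The k-of-k threshold sharing underlying the constructions above needs only the group operation, inverses, and uniform sampling. A minimal sketch over the multiplicative group of integers modulo a prime (an abelian stand-in for the black-box group; the protocol built on top of the sharing is not reproduced here):

```python
import random

p = 10007  # small prime; shares live in the multiplicative group Z_p*

def share(x, k):
    """Split x into k multiplicative shares whose product (mod p) is x."""
    s = [random.randrange(1, p) for _ in range(k - 1)]  # k-1 uniform group elements
    last = x
    for si in s:
        last = last * pow(si, -1, p) % p  # peel off each random share via group inverse
    return s + [last]

def reconstruct(shares):
    """Recombine all k shares with the group operation."""
    out = 1
    for si in shares:
        out = out * si % p
    return out

secret = 1234
assert reconstruct(share(secret, 5)) == secret
```

Any k−1 shares are uniformly random and reveal nothing about x; all k are needed to reconstruct. In a non-abelian group the shares must be recombined in a fixed order, a detail the abelian sketch glosses over.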
Abstract:
In this paper a novel controller for stable and precise operation of multi-rotors with heavy slung loads is introduced. First, simplified equations of motion for the multi-rotor and slung load are derived. The model is then used to design a Nonlinear Model Predictive Controller (NMPC) that can manage the highly nonlinear dynamics whilst accounting for system constraints. The controller is shown to simultaneously track specified waypoints whilst actively damping large slung load oscillations. A linear-quadratic regulator (LQR) controller is also derived, and control performance is compared in simulation. Results show the improved performance of the NMPC controller over a larger flight envelope, including aggressive maneuvers and large slung load displacements. Computational cost remains relatively small, making the approach amenable to practical implementation. Such systems for small Unmanned Aerial Vehicles (UAVs) may provide significant benefit to several applications in agriculture, law enforcement and construction.