889 results for multi-mediational path model
Abstract:
The purpose of this paper is to introduce the concept of hydraulic damage and its numerical integration. Unlike common phenomenological continuum damage mechanics approaches, the procedure introduced in this paper relies on mature concepts of homogenization, linear fracture mechanics, and thermodynamics. The model is applied to the problem of fault reactivation within resource reservoirs. The results show that the propagation of weaknesses is strongly driven by contrasts of properties in porous media; in particular, it is affected by the fracture toughness of the host rocks. Hydraulic damage is diffuse when it takes place within extended geological units and localized at interfaces and faults.
Abstract:
Network coding is a method for achieving channel capacity in networks. The key idea is to allow network routers to linearly mix packets as they traverse the network, so that recipients receive linear combinations of packets. Network-coded systems are vulnerable to pollution attacks, where a single malicious node floods the network with bad packets and prevents the receiver from decoding correctly. Cryptographic defenses to these problems are based on homomorphic signatures and MACs. These proposals, however, cannot handle mixing of packets from multiple sources, which is needed to achieve the full benefits of network coding. In this paper we address the integrity of multi-source mixing. We propose a security model for this setting and provide a generic construction.
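As a rough illustration of the mixing and decoding described above (and of why a single polluted packet is so damaging), the following minimal Python sketch implements random linear network coding over a prime field. The field size, function names and packet layout are illustrative choices, and the sketch deliberately omits the homomorphic signature/MAC defences that the paper itself is about.

    # Illustrative sketch of random linear network coding over the prime field GF(P).
    # Not the paper's construction: it only shows why one polluted packet corrupts
    # decoding, which the homomorphic MAC/signature defences are meant to detect.
    import random

    P = 2**31 - 1  # prime; packet symbols live in GF(P)

    def mix(coded_packets, k):
        """Return k random linear combinations of (coeff_vector, payload) pairs."""
        out = []
        for _ in range(k):
            c = [random.randrange(P) for _ in coded_packets]
            coeffs = [sum(ci * pkt[0][j] for ci, pkt in zip(c, coded_packets)) % P
                      for j in range(len(coded_packets[0][0]))]
            payload = [sum(ci * pkt[1][j] for ci, pkt in zip(c, coded_packets)) % P
                       for j in range(len(coded_packets[0][1]))]
            out.append((coeffs, payload))
        return out

    def decode(received, n):
        """Gauss-Jordan elimination over GF(P) to recover the n source payloads."""
        rows = [r[0] + r[1] for r in received]
        for col in range(n):
            # random coefficient matrices are invertible with overwhelming probability
            piv = next(i for i in range(col, len(rows)) if rows[i][col])
            rows[col], rows[piv] = rows[piv], rows[col]
            inv = pow(rows[col][col], P - 2, P)
            rows[col] = [x * inv % P for x in rows[col]]
            for i in range(len(rows)):
                if i != col and rows[i][col]:
                    f = rows[i][col]
                    rows[i] = [(a - f * b) % P for a, b in zip(rows[i], rows[col])]
        return [row[n:] for row in rows[:n]]

    # Source packets carry unit coefficient vectors; because every decoded packet
    # is a linear combination of everything received, one corrupted payload
    # anywhere in the mixing graph propagates into all recovered packets.
    src = [([1, 0], [11, 22]), ([0, 1], [33, 44])]
    print(decode(mix(src, 2), 2))  # [[11, 22], [33, 44]]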
Abstract:
Classical results on unconditionally secure multi-party computation (MPC) protocols with a passive adversary indicate that every n-variate function can be computed by n participants such that no set of t < n/2 participants learns any additional information beyond what they could derive from their private inputs and the output of the protocol. We study unconditionally secure MPC protocols in the presence of a passive adversary in the trusted setup (‘semi-ideal’) model, in which the participants are supplied with some auxiliary information (which is random and independent of the participants' inputs) ahead of the protocol execution (such information can be purchased as a “commodity” well before a run of the protocol). We present a new MPC protocol in the trusted setup model, which allows the adversary to corrupt an arbitrary number t < n of participants. Our protocol makes use of a novel subprotocol for converting an additive secret sharing over a field to a multiplicative secret sharing, and can be used to securely evaluate any n-variate polynomial G over a field F, with inputs restricted to non-zero elements of F. The communication complexity of our protocol is O(ℓ · n²) field elements, where ℓ is the number of non-linear monomials in G. Previous protocols in the trusted setup model require communication proportional to the number of multiplications in an arithmetic circuit for G; thus, our protocol may offer savings over previous protocols for functions with a small number of monomials but a large number of multiplications.
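The two sharing forms that the conversion subprotocol above maps between can be illustrated with a short, dealer-based Python sketch over a prime field. The dealer, the field size and the helper names are assumptions made purely for illustration; the paper's actual contribution, an interactive dealer-free conversion, is not reproduced here.

    # Additive vs. multiplicative sharing of a field element x in GF(P).
    # A trusted dealer generates the shares here for illustration only.
    import random

    P = 2**31 - 1  # prime field GF(P)

    def additive_share(x, n):
        """x = s_1 + ... + s_n (mod P); any n-1 shares reveal nothing about x."""
        shares = [random.randrange(P) for _ in range(n - 1)]
        shares.append((x - sum(shares)) % P)
        return shares

    def multiplicative_share(x, n):
        """x = m_1 * ... * m_n (mod P); requires x != 0, as in the abstract."""
        assert x % P != 0
        shares = [random.randrange(1, P) for _ in range(n - 1)]
        prod = 1
        for s in shares:
            prod = prod * s % P
        shares.append(x * pow(prod, P - 2, P) % P)  # last share fixes the product
        return shares

    x, n = 123456789, 5
    assert sum(additive_share(x, n)) % P == x
    prod = 1
    for m in multiplicative_share(x, n):
        prod = prod * m % P
    assert prod == x

Intuitively, multiplicative shares let a monomial of G be handled factor-by-factor, which is consistent with the abstract's claim that the number of non-linear monomials ℓ, rather than the number of circuit multiplications, drives the O(ℓ · n²) communication cost.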
Abstract:
The ability to understand and predict how thermal, hydrological, mechanical and chemical (THMC) processes interact is fundamental to many research initiatives and industrial applications. We (1) present a new Thermal–Hydrological–Mechanical–Chemical (THMC) coupling formulation, based on non-equilibrium thermodynamics; (2) show how THMC feedback is incorporated in the thermodynamic approach; (3) suggest a unifying thermodynamic framework for multi-scaling; and (4) formulate a new rationale for assessing upper and lower bounds of dissipation for THMC processes. The technique is based on deducing time and length scales suitable for separating processes using a macroscopic finite-time thermodynamic approach. We show that if the time and length scales are suitably chosen, the calculation of entropic bounds can be used to describe three different types of material and process uncertainty: geometric uncertainty, stemming from the microstructure; process uncertainty, stemming from the correct derivation of the constitutive behavior; and uncertainty in time evolution, stemming from the path dependence of the time integration of the irreversible entropy production. Although the approach is specifically formulated here for THMC coupling, we suggest that it has much broader applicability. In a general sense, it consists of finding the entropic bounds of the dissipation, defined as the product of thermodynamic forces and thermodynamic fluxes, which in materials science correspond to generalized stresses and generalized strain rates, respectively.
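A compact way to write the force-times-flux dissipation mentioned in the last sentence, in LaTeX; the specific THMC split shown here is a conventional textbook decomposition, not necessarily the authors' exact formulation:

    \Phi \;=\; \sum_k X_k\, J_k \;\ge\; 0,
    \qquad
    \Phi_{\mathrm{THMC}} \;\approx\;
    -\frac{\mathbf{q}\cdot\nabla T}{T}
    \;-\; \mathbf{v}_f \cdot \nabla p
    \;+\; \boldsymbol{\sigma} : \dot{\boldsymbol{\varepsilon}}^{p}
    \;+\; \sum_r A_r\, \dot{\xi}_r ,

where \mathbf{q} is the heat flux, T the temperature, \mathbf{v}_f the fluid flux, p the pore pressure, \boldsymbol{\sigma} : \dot{\boldsymbol{\varepsilon}}^{p} the mechanical (dissipative) stress power, and A_r, \dot{\xi}_r the chemical affinities and reaction rates. The entropic bounds discussed above are upper and lower bounds on \Phi over the chosen time and length scales.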
Abstract:
Secure multi-party computation (MPC) protocols enable a set of n mutually distrusting participants P_1, ..., P_n, each with their own private input x_i, to compute a function Y = F(x_1, ..., x_n), such that at the end of the protocol all participants learn the correct value of Y while secrecy of the private inputs is maintained. Classical results in unconditionally secure MPC indicate that in the presence of an active adversary, every function can be computed if and only if the number of corrupted participants, t_a, is smaller than n/3. Relaxing the requirement of perfect secrecy and utilizing broadcast channels, one can improve this bound to t_a < n/2. All existing MPC protocols assume that uncorrupted participants are truly honest, i.e., they are not even curious about learning other participants' secret inputs. Based on this assumption, some MPC protocols are designed in such a way that after elimination of all misbehaving participants, the remaining ones learn all information in the system. This is not consistent with maintaining privacy of the participants' inputs. Furthermore, an improvement of the classical results given by Fitzi, Hirt, and Maurer indicates that in addition to t_a actively corrupted participants, the adversary may simultaneously corrupt some participants passively. This is in contrast to the assumption that participants who are not corrupted by an active adversary are truly honest. This paper examines the privacy of MPC protocols and introduces the notion of an omnipresent adversary, which cannot be eliminated from the protocol. The omnipresent adversary can be passive, active, or mixed. We assume that up to a minority of participants who are not corrupted by an active adversary can be corrupted passively, with the restriction that at any time the number of corrupted participants does not exceed a predetermined threshold. We also show that the existence of a t-resilient protocol for a group of n participants implies the existence of a t’-private protocol for a group of n′ participants; that is, the elimination of misbehaving participants from a t-resilient protocol leads to the decomposition of the protocol. Our adversary model stipulates that an MPC protocol never operates with a set of truly honest participants (which is a more realistic scenario). Therefore, privacy of all participants who properly follow the protocol will be maintained. We present a novel disqualification protocol to avoid a loss of privacy of participants who properly follow the protocol.
Abstract:
The contemporary default materials for multi-storey buildings, namely concrete and steel, are both significant generators of carbon emissions, and the use of timber products provides a technically, economically and environmentally viable alternative. In particular, timber's sustainability can drive increased use and the subsequent evolution of the Blue economy as a new economic model. National research to date, however, indicates resistance to the uptake of timber technologies in Australia. To investigate this further, a preliminary study involving a convenience sample of 15 experts was conducted to identify the main barriers to the use of timber frames in multi-storey buildings. A closed-ended questionnaire survey involving 74 experienced construction industry participants was then undertaken to rate the relative importance of the barriers. The survey confirmed the most significant barriers to be a perceived increase in maintenance costs and fire risk, together with limited awareness of the emerging timber technologies available. The results are expected to benefit government and the timber industry, contributing to environmental improvement by informing strategies that increase the use of timber technologies in multi-storey buildings by countering perceived barriers in the Australian context.
Abstract:
Objective: To test a conceptual model linking parental physical activity orientations, parental support for physical activity, and children's self-efficacy perceptions with physical activity participation. Participants and setting: The sample consisted of 380 students in grades 7 through 12 (mean age 14.0 ± 1.6 years) and their parents. Data collection took place during the fall of 1996. Main outcome measures: Parents completed a questionnaire assessing their physical activity habits, enjoyment of physical activity, beliefs regarding the importance of physical activity, and supportive behaviors for their child's physical activity. Students completed a 46-item inventory assessing physical activity during the previous 7 days and a 5-item physical activity self-efficacy scale. The model was tested via observed-variable path analysis using structural equation modeling techniques (AMOS 4.0). Results: An initial model, in which parent physical activity orientations predicted child physical activity via parental support and child self-efficacy, did not provide an acceptable fit to the data. Inclusion of a direct path from parental support to child physical activity and deletion of a nonsignificant path from parental physical activity to child physical activity significantly improved model fit. Standardized path coefficients for the revised model ranged from 0.17 to 0.24, and all were significant at the p < 0.0001 level. Conclusions: Parental support was an important correlate of youth physical activity, acting directly or indirectly through its influence on self-efficacy. Physical activity interventions targeted at youth should include and evaluate the efficacy of individual-level and community-level strategies to increase parents' capacity to provide instrumental and motivational support for their children's physical activity.
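In equation form, the revised model described under Results can be sketched as a standard mediational path model; the variable names below are shorthand, not the instrument's labels:

    \text{Support} = \gamma\,\text{ParentOrientation} + e_1, \qquad
    \text{SelfEfficacy} = a\,\text{Support} + e_2, \qquad
    \text{ChildPA} = c'\,\text{Support} + b\,\text{SelfEfficacy} + e_3,

so the indirect effect of parental support through self-efficacy is a·b and its total effect is c' + a·b. Per the abstract, the direct parental-activity → child-activity path was dropped as nonsignificant, and the remaining standardized coefficients fell between 0.17 and 0.24.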
Abstract:
Karasek's Job Demand-Control model proposes that control mitigates the positive effects of work stressors on employee strain. Evidence to date remains mixed and, although a number of individual-level moderators have been examined, the role of broader, contextual, group factors has been largely overlooked. In this study, the extent to which control buffered or exacerbated the effects of demands on strain at the individual level was hypothesized to be influenced by perceptions of collective efficacy at the group level. Data from 544 employees in Australian organizations, nested within 23 workgroups, revealed significant three-way cross-level interactions among demands, control and collective efficacy on anxiety and job satisfaction. When the group perceived high levels of collective efficacy, high control buffered the negative consequences of high demands on anxiety and satisfaction. Conversely, when the group perceived low levels of collective efficacy, high control exacerbated the negative consequences of high demands on anxiety, but not satisfaction. In addition, a stress-exacerbating effect for high demands on anxiety and satisfaction was found when there was a mismatch between collective efficacy and control (i.e. combined high collective efficacy and low control). These results provide support for the notion that the stressor-strain relationship is moderated by both individual- and group-level factors.
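One conventional way to specify the cross-level moderation described above is a two-level random-coefficients model; the LaTeX sketch below is generic and not necessarily the exact model estimated in the study:

    \text{Strain}_{ij} = \beta_{0j} + \beta_{1j} D_{ij} + \beta_{2j} C_{ij} + \beta_{3j}\,(D_{ij} \times C_{ij}) + e_{ij},
    \qquad
    \beta_{3j} = \gamma_{30} + \gamma_{31}\,\mathrm{CE}_{j} + u_{3j},

where D, C and CE denote demands, control and group-level collective efficacy for employee i in workgroup j. A significant \gamma_{31} corresponds to the reported three-way demands × control × collective efficacy cross-level interaction: it determines whether control buffers or exacerbates the effect of demands in a given workgroup.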
Abstract:
Traditional nearest points methods use all the samples in an image set to construct a single convex or affine hull model for classification. However, strong artificial features and noisy data may be generated from combinations of training samples when significant intra-class variations and/or noise occur in the image set. Existing multi-model approaches extract local models by clustering each image set individually only once, with fixed clusters used for matching with various image sets. This may not be optimal for discrimination, as undesirable environmental conditions (e.g. illumination and pose variations) may result in the two closest clusters representing different characteristics of an object (e.g. a frontal face being compared to a non-frontal face). To address this problem, we propose a novel approach to enhance nearest points based methods by integrating affine/convex hull classification with an adapted multi-model approach. We first extract multiple local convex hulls from a query image set via maximum margin clustering to diminish the artificial variations and constrain the noise in local convex hulls. We then propose adaptive reference clustering (ARC) to constrain the clustering of each gallery image set by forcing the clusters to have resemblance to the clusters in the query image set. By applying ARC, noisy clusters in the query set can be discarded. Experiments on the Honda, MoBo and ETH-80 datasets show that the proposed method outperforms single-model approaches and other recent techniques, such as Sparse Approximated Nearest Points, the Mutual Subspace Method and Manifold Discriminant Analysis.
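For context, the baseline quantity that nearest points methods compute is the distance between the hulls of two image sets. The numpy sketch below computes it for affine hulls via least squares; the convex-hull case used above additionally constrains the combination weights to be non-negative (typically solved as a small quadratic program), and the maximum margin clustering and ARC steps proposed in the abstract are not reproduced. Function names and synthetic data are illustrative.

    # Nearest points between the affine hulls of two image sets (columns of X, Y).
    import numpy as np

    def affine_hull_distance(X, Y):
        """Distance between affine hulls of the columns of X (d x m) and Y (d x n)."""
        x0, y0 = X[:, :1], Y[:, :1]
        Dx = X[:, 1:] - x0            # directions spanning the affine hull of X
        Dy = Y[:, 1:] - y0
        A = np.hstack([Dx, -Dy])      # solve  x0 + Dx z  ~  y0 + Dy w  in least squares
        b = (y0 - x0).ravel()
        coef, *_ = np.linalg.lstsq(A, b, rcond=None)
        z, w = coef[:Dx.shape[1]], coef[Dx.shape[1]:]
        px = x0.ravel() + Dx @ z      # nearest point in aff(X)
        py = y0.ravel() + Dy @ w      # nearest point in aff(Y)
        return np.linalg.norm(px - py), px, py

    rng = np.random.default_rng(0)
    X = rng.normal(size=(64, 10))     # gallery set: 10 feature vectors of dimension 64
    Y = rng.normal(size=(64, 8)) + 5  # query set: 8 feature vectors, offset from X
    dist, _, _ = affine_hull_distance(X, Y)
    print(round(dist, 3))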
Abstract:
The validity of fatigue protocols involving multi-joint movements, such as stepping, has yet to be clearly defined. Although surface electromyography can monitor the fatigue state of individual muscles, the effects of joint angle and velocity variation on signal parameters are well established. Therefore, the aims of this study were to (i) describe sagittal hip and knee kinematics during repetitive stepping, (ii) identify periods of high inter-trial variability, and (iii) determine within-test reliability of hip and knee kinematic profiles. A group of healthy men (N = 15) ascended and descended from a knee-high platform wearing a weighted vest (10% BW) for 50 consecutive trials. The hip and knee underwent rapid flexion and extension during step ascent and descent. Variability of hip and knee velocity peaked between 20-40% of the ascent phase and 80-100% of the descent. Significant (p < 0.05) reductions in joint range of motion and peak velocity during step ascent were observed, while peak flexion velocity increased during descent. Healthy individuals use complex hip and knee motion to negotiate a knee-high step, with kinematic patterns varying across multiple repetitions. These findings have important implications for future studies intending to use repetitive stepping as a fatigue model for the knee extensors and flexors.
Abstract:
Suspended loads on UAVs can provide significant benefits to several applications in agriculture, law enforcement and construction. The impact of the load on the underlying system dynamics should not be neglected, as significant feedback forces may be induced on the vehicle during certain flight manoeuvres. Much research has focused on standard multi-rotor position and attitude control with and without a slung load. However, predictive control schemes, such as Nonlinear Model Predictive Control (NMPC), have not yet been fully explored. To this end, we present a software and flight system architecture for testing controllers for safe and precise operation of multi-rotors with a heavy slung load in three dimensions.
Abstract:
Background: Paramedic education has evolved in recent times from vocational post-employment training to tertiary pre-employment education supplemented by clinical placement. Simulation is advocated as a means of transferring learned skills to clinical practice. Sole reliance on simulation learning using mannequin-based models may not be sufficient to prepare students for variance in human anatomy. In 2012, we trialled the use of fresh frozen human cadavers to supplement undergraduate paramedic procedural skill training. The purpose of this study is to evaluate whether cadaveric training is an effective adjunct to mannequin simulation and clinical placement. Methods: A multi-method approach was adopted. The first step involved a Delphi methodology to formulate and validate the evaluation instrument. The instrument comprised knowledge-based MCQs, Likert-scale items for self-evaluation of procedural skills and behaviours, and open-answer questions. The second step involved a pre-post evaluation of the 2013 cadaveric training. Results: One hundred and fourteen students attended the workshop and 96 evaluations were included in the analysis, representing a return rate of 84%. There was a statistically significant improvement in anatomical knowledge after the workshop. Students' self-rated confidence in performing procedural skills on real patients improved significantly after the workshop: inserting laryngeal mask (MD 0.667), oropharyngeal (MD 0.198) and nasopharyngeal (MD 0.600) airways, performing bag-valve-mask ventilation (MD 0.379), double (MD 0.344) and triple (MD 0.326) airway manoeuvres, performing 12-lead electrocardiography (MD 0.729), using the McGrath® laryngoscope (MD 0.726), using McGrath® forceps to remove a foreign body (MD 0.632), attempting thoracocentesis (MD 1.240), and applying a traction splint (MD 0.865). The students commented that the workshop provided context to their theoretical knowledge and that they gained an appreciation of the differences in normal tissue variation. Following engagement in and completion of the workshop, students were more aware of their own clinical and non-clinical competencies. Conclusions: The paramedic profession has evolved beyond patient transport with minimal intervention to providing comprehensive emergency and non-emergency medical care. With limited availability of clinical placements for undergraduate paramedic training, there is an increasing demand on universities to provide suitable alternatives. Our findings suggest that cadaveric training using fresh frozen cadavers provides an effective adjunct to simulated learning and clinical placements.
Abstract:
These lecture notes describe the use and implementation of a framework in which mathematical as well as engineering optimisation problems can be analysed. The foundations of the framework and algorithms described, Hierarchical Asynchronous Parallel Evolutionary Algorithms (HAPEAs), lie upon traditional evolution strategies and incorporate the concepts of multi-objective optimisation, hierarchical topology, asynchronous evaluation of candidate solutions, parallel computing and game strategies. In a step-by-step approach, the numerical implementation of EAs and HAPEAs for solving multi-criteria optimisation problems is presented, providing the reader with the knowledge to reproduce this hands-on training in his or her academic or industrial environment.
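As a starting point for the hands-on material the notes describe, the following deliberately simplified Python sketch implements a plain (mu + lambda) evolution strategy on a toy objective; the hierarchical topology, asynchronous parallel evaluation, multi-objective ranking and game strategies that distinguish HAPEAs are not reproduced, and all parameter values are arbitrary.

    # Minimal (mu + lambda) evolution strategy on a single toy objective.
    import random

    def sphere(x):                     # objective to minimise: sum of squares
        return sum(v * v for v in x)

    def evolve(dim=5, mu=10, lam=40, sigma=0.3, generations=200):
        parents = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(mu)]
        for _ in range(generations):
            offspring = []
            for _ in range(lam):
                p = random.choice(parents)
                child = [v + random.gauss(0, sigma) for v in p]  # Gaussian mutation
                offspring.append(child)
            pool = parents + offspring                           # (mu + lambda) selection
            pool.sort(key=sphere)
            parents = pool[:mu]
        return parents[0]

    best = evolve()
    print(sphere(best))  # close to 0 for this toy problem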
Abstract:
We study the natural problem of secure n-party computation (in the passive, computationally unbounded attack model) of the n-product function f_G(x_1, ..., x_n) = x_1 · x_2 ⋯ x_n in an arbitrary finite group (G, ·), where the input of party P_i is x_i ∈ G for i = 1, ..., n. For flexibility, we are interested in protocols for f_G which require only black-box access to the group G (i.e. the only computations performed by players in the protocol are a group operation, a group inverse, or sampling a uniformly random group element). Our results are as follows. First, on the negative side, we show that if (G, ·) is non-abelian and n ≥ 4, then no ⌈n/2⌉-private protocol for computing f_G exists. Second, on the positive side, we initiate an approach for the construction of black-box protocols for f_G based on k-of-k threshold secret sharing schemes, which are efficiently implementable over any black-box group G. We reduce the problem of constructing such protocols to a combinatorial colouring problem in planar graphs. We then give two constructions for such graph colourings. Our first colouring construction gives a protocol with optimal collusion resistance t < n/2, but has exponential communication complexity of O(n·(2t+1 choose t)²/t) group elements (this construction easily extends to general adversary structures). Our second, probabilistic colouring construction gives a protocol with close-to-optimal collusion resistance t < n/μ for a graph-related constant μ ≤ 2.948, and has efficient communication complexity of O(n·t²) group elements. Furthermore, we believe that our results can be improved by further study of the associated combinatorial problems.
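The k-of-k sharing primitive named above can be sketched against a 'black-box' group interface exposing only the three operations the abstract allows (group operation, inverse, uniform sampling). Z_P^* is used here as a stand-in group and the class and function names are illustrative; the graph-colouring protocol constructions themselves are not reproduced.

    # k-of-k sharing of a group element via black-box group access only.
    import random

    class ZpStar:
        """Multiplicative group modulo a prime P, exposed as a black box."""
        def __init__(self, P):
            self.P = P
        def op(self, a, b):   # group operation
            return a * b % self.P
        def inv(self, a):     # group inverse
            return pow(a, self.P - 2, self.P)
        def rand(self):       # uniformly random group element
            return random.randrange(1, self.P)

    def share(G, x, k):
        """Return g_1, ..., g_k with x = g_1 · g_2 · ... · g_k (in that order)."""
        shares = [G.rand() for _ in range(k - 1)]
        partial = shares[0]
        for g in shares[1:]:
            partial = G.op(partial, g)
        shares.append(G.op(G.inv(partial), x))  # last share corrects the product
        return shares

    def reconstruct(G, shares):
        out = shares[0]
        for g in shares[1:]:
            out = G.op(out, g)
        return out

    G = ZpStar(2**61 - 1)
    x = G.rand()
    assert reconstruct(G, share(G, x, 7)) == x
    # Any k-1 shares are uniformly distributed and reveal nothing about x; the
    # ordered construction also works when the group is non-abelian.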
Abstract:
Outdoor robots such as planetary rovers must be able to navigate safely and reliably in order to successfully perform missions in remote or hostile environments. Mobility prediction is critical to achieving this goal due to the inherent control uncertainty faced by robots traversing natural terrain. We propose a novel algorithm for stochastic mobility prediction based on multi-output Gaussian process regression. Our algorithm considers the correlation between heading and distance uncertainty and provides a predictive model that can easily be exploited by motion planning algorithms. We evaluate our method experimentally and report results from over 30 trials in a Mars-analogue environment, demonstrating its effectiveness and illustrating the importance of mobility prediction in navigating challenging terrain.
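For readers unfamiliar with the underlying machinery, the numpy sketch below implements bare-bones single-output Gaussian process regression with an RBF kernel on toy data. It is only meant to show the regression step the proposed multi-output model builds on; it does not capture the heading-distance correlation that is the paper's focus, and all names and data are illustrative.

    # Minimal single-output GP regression with an RBF kernel (numpy only).
    import numpy as np

    def rbf(A, B, length=1.0, var=1.0):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return var * np.exp(-0.5 * d2 / length**2)

    def gp_predict(X, y, Xs, noise=1e-2):
        """Posterior mean and variance at test inputs Xs given training data (X, y)."""
        K = rbf(X, X) + noise * np.eye(len(X))
        Ks = rbf(X, Xs)
        Kss = rbf(Xs, Xs)
        alpha = np.linalg.solve(K, y)
        mean = Ks.T @ alpha
        cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
        return mean, np.diag(cov)

    # Toy data: e.g. distance error as a function of commanded heading (rad).
    X = np.linspace(0, 3, 15)[:, None]
    y = np.sin(2 * X).ravel() + 0.05 * np.random.default_rng(1).normal(size=15)
    Xs = np.linspace(0, 3, 50)[:, None]
    mean, var = gp_predict(X, y, Xs)
    print(mean.shape, var.shape)  # (50,) (50,)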