Abstract:
We present efficient protocols for private set disjointness tests. We begin with the intuition behind our protocols, which applies Sylvester matrices. Unfortunately, this simple construction is insecure, as it reveals information about the cardinality of the intersection; more specifically, it discloses a lower bound on it. Using Lagrange interpolation, we provide a protocol for the honest-but-curious case that reveals no additional information. Finally, we describe a protocol that is secure against malicious adversaries; it applies a verification test to detect misbehaving participants. Both protocols require O(1) rounds of communication. Our protocols are more efficient than previous protocols in terms of communication and computation overhead. Unlike previous protocols, whose security relies on computational assumptions, our protocols provide information-theoretic security. To our knowledge, they are the first to be designed without generic secure function evaluation. More importantly, they are the most efficient protocols for private disjointness tests in the malicious adversary case.
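The Sylvester-matrix intuition can be illustrated with a small (non-private) sketch: encode each set as a polynomial whose roots are the set's elements; the two sets share an element exactly when the polynomials share a root, i.e. when the resultant (the determinant of their Sylvester matrix) is zero. This is only the underlying algebra, not the protocol itself, and the floating-point tolerance is an assumption of this sketch.

```python
import numpy as np

def set_poly(s):
    """Polynomial with the set's elements as roots (coefficients, highest degree first)."""
    p = np.array([1.0])
    for x in s:
        p = np.convolve(p, [1.0, -float(x)])
    return p

def sylvester(p, q):
    """Sylvester matrix of two polynomials given highest-degree-first."""
    m, n = len(p) - 1, len(q) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):          # n shifted copies of p
        S[i, i:i + m + 1] = p
    for i in range(m):          # m shifted copies of q
        S[n + i, i:i + n + 1] = q
    return S

def disjoint(a, b):
    """Nonempty sets are disjoint iff the resultant (det of the Sylvester matrix) is nonzero."""
    res = np.linalg.det(sylvester(set_poly(a), set_poly(b)))
    return bool(abs(res) > 1e-6)
```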
Abstract:
LiteSteel beam (LSB) is a new cold-formed steel hollow flange channel section produced using a patented manufacturing process involving simultaneous cold-forming and dual electric resistance welding. LSBs are commonly used as floor joists and bearers with web openings in residential, industrial and commercial buildings. Due to the unique geometry of LSBs, as well as their unique residual stress characteristics and initial geometric imperfections resulting from the manufacturing process, much of the existing research on common cold-formed steel sections is not directly applicable to LSBs. Many research studies have been carried out to evaluate the behaviour and design of LSBs subject to pure bending actions, predominant shear and combined actions. However, to date, no investigation has been conducted into the web crippling behaviour and strength of LSB sections. Hence detailed experimental studies were conducted to investigate the web crippling behaviour and strengths of LSBs under EOF (End One Flange) and IOF (Interior One Flange) load cases. A total of 26 web crippling tests were conducted and the results were compared with the current AS/NZS 4600 design rules. This comparison showed that the AS/NZS 4600 (SA, 2005) design rules are very conservative for LSB sections under EOF and IOF load cases. Suitable design equations have been proposed to determine the web crippling capacity of LSBs based on the experimental results. This paper presents the details of this experimental study on the web crippling behaviour and strengths of LiteSteel beams under EOF and IOF load cases.
Abstract:
This paper presents the details of an experimental study of a cold-formed steel hollow flange channel beam known as the LiteSteel Beam (LSB) subject to web crippling actions (ETF and ITF). Due to the geometry of the LSB, as well as its unique residual stress characteristics and initial geometric imperfections resulting from the manufacturing process, much of the existing research on common cold-formed steel sections is not directly applicable to the LSB. Experimental and numerical studies have been carried out to evaluate the behaviour and design of LSBs subject to pure bending actions, predominant shear actions and combined actions. To date, however, no investigation has been conducted into the web crippling behaviour and strength of LSB sections under ETF and ITF load conditions. Hence experimental studies were conducted to assess the web crippling behaviour and strengths of LSBs. Twenty-eight web crippling tests were conducted and the results were compared with the current AS/NZS 4600 [1] and AISI S100 [2] design equations. Comparison of the ultimate web crippling capacities from the tests showed that the AS/NZS 4600 [1] and AISI S100 [2] design equations are unconservative for LSB sections under ETF and ITF load cases. Hence new equations were proposed to determine the web crippling capacities of LSBs. Suitable design rules were also developed in the DSM format.
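For context, AS/NZS 4600 and AISI S100 express web crippling capacity through a unified equation with coefficients that depend on the section type and load case. The sketch below shows only the general form of that equation; the coefficient values used in the check are illustrative placeholders, not the values proposed in this paper.

```python
import math

def web_crippling_capacity(t, fy, theta_deg, ri, lb, d1, C, Cr, Cl, Cw):
    """Unified web crippling equation form used by AS/NZS 4600 and AISI S100:
    Rb = C * t^2 * fy * sin(theta) * (1 - Cr*sqrt(ri/t))
         * (1 + Cl*sqrt(lb/t)) * (1 - Cw*sqrt(d1/t))
    t: web thickness, fy: yield stress, theta_deg: web inclination angle,
    ri: inside bend radius, lb: bearing length, d1: flat web depth.
    C, Cr, Cl, Cw: section- and load-case-dependent coefficients."""
    return (C * t**2 * fy * math.sin(math.radians(theta_deg))
            * (1 - Cr * math.sqrt(ri / t))
            * (1 + Cl * math.sqrt(lb / t))
            * (1 - Cw * math.sqrt(d1 / t)))
```

The bearing-length term shows why capacity grows with bearing length: the factor (1 + Cl*sqrt(lb/t)) increases monotonically with lb.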
Abstract:
In Chapters 1 through 9 of the book (with the exception of a brief discussion on observers and integral action in Section 5.5 of Chapter 5) we considered constrained optimal control problems for systems without uncertainty, that is, with no unmodelled dynamics or disturbances, and where the full state was available for measurement. More realistically, however, it is necessary to consider control problems for systems with uncertainty. This chapter addresses some of the issues that arise in this situation. As in Chapter 9, we adopt a stochastic description of uncertainty, which associates probability distributions with the uncertain elements, that is, disturbances and initial conditions. (See Section 12.6 for references to alternative approaches to model uncertainty.) When incomplete state information exists, a popular observer-based control strategy in the presence of stochastic disturbances is to use the certainty equivalence (CE) principle, introduced in Section 5.5 of Chapter 5 for deterministic systems. In the stochastic framework, CE consists of estimating the state and then using these estimates as if they were the true state in the control law that would result if the problem were formulated as a deterministic problem (that is, without uncertainty). This strategy is motivated by the unconstrained problem with a quadratic objective function, for which CE is indeed the optimal solution (Åström 1970, Bertsekas 1976). One of the aims of this chapter is to explore the issues that arise from the use of CE in RHC in the presence of constraints. We then turn to the obvious question of the optimality of the CE principle. We show that CE is, in fact, not optimal in general.
We also analyse the possibility of obtaining truly optimal solutions for single-input linear systems with input constraints and uncertainty related to output feedback and stochastic disturbances. We first find the optimal solution for the case of horizon N = 1, and then we indicate the complications that arise in the case of horizon N = 2. Our conclusion is that, for linear constrained systems, the extra effort involved in the optimal feedback policy is probably not justified in practice. Indeed, we show by example that CE can give near-optimal performance. We thus advocate this approach in real applications.
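The CE strategy described above can be sketched for a scalar example: a Kalman filter produces the state estimate, which is then used as if it were the true state in a clipped (input-constrained) deterministic control law. All numerical values and the feedback gain below are illustrative assumptions, not those of the book.

```python
import numpy as np

# Scalar example: x+ = a*x + b*u + w,  y = x + v,  |u| <= umax.
a, b, q, r, umax = 0.9, 1.0, 0.1, 0.2, 1.0   # q, r: noise variances

def kalman_step(xhat, P, u, y):
    """One predict/update cycle of the scalar Kalman filter."""
    xpred = a * xhat + b * u
    Ppred = a * P * a + q
    K = Ppred / (Ppred + r)
    return xpred + K * (y - xpred), (1 - K) * Ppred

def ce_control(xhat, k=0.5):
    """Certainty equivalence: apply the deterministic saturated linear
    control law to the state estimate as if it were the true state."""
    return float(np.clip(-k * xhat, -umax, umax))
```

In closed loop one alternates ce_control on the current estimate with kalman_step on the measured output; the input constraint is enforced only by the clipping, which is exactly where CE loses optimality in general.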
Abstract:
We address the problem of finite horizon optimal control of discrete-time linear systems with input constraints and uncertainty. The uncertainty for the problem analysed is related to incomplete state information (output feedback) and stochastic disturbances. We analyse the complexities associated with finding optimal solutions. We also consider two suboptimal strategies that could be employed for larger optimization horizons.
Abstract:
The aim of this study was to examine the reliability and validity of field tests for assessing physical function in mid-aged and young-old people (55-70 y). Tests were selected that required minimal space and equipment and could be implemented in multiple field settings such as a general practitioner's office. Nineteen participants completed 2 field and 1 laboratory testing sessions. Intra-class correlations showed good reliability for the tests of upper body strength (lift and reach, R=.66), lower body strength (sit to stand, R=.80) and functional capacity (Canadian Step Test, R=.92), but not for leg power (single timed chair rise, R=.28). There was also good reliability for the balance test during 3 stances: parallel (94.7% agreement), semi-tandem (73.7%), and tandem (52.6%). Comparison of field test results with objective laboratory measures found good validity for the sit to stand (cf 1RM leg press, Pearson r=.68, p<.05) and for the step test (cf PWC140, r=-.60, p<.001), but not for the lift and reach (cf 1RM bench press, r=.43, p>.05), balance (r=-.13, -.18, .23) or rate of force development tests (r=-.28). It was concluded that the lower body strength and cardiovascular function tests are appropriate for use in field settings with mid-aged and young-old adults.
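Test-retest reliability of the kind reported above is commonly quantified with an intra-class correlation. A minimal sketch, assuming a one-way random-effects ICC(1,1) (the abstract does not state which ICC form the study used):

```python
import numpy as np

def icc_oneway(data):
    """One-way random-effects ICC(1,1) for test-retest reliability.
    data: n_subjects x k_trials array of scores."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    # between-subjects and within-subject mean squares from one-way ANOVA
    ms_between = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

Perfectly repeatable scores give an ICC of 1; values near the study's R=.80 for sit to stand indicate that most score variance is between subjects rather than between repeated sessions.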
Abstract:
Integer ambiguity resolution is an indispensable procedure for all high-precision GNSS applications. The correctness of the estimated integer ambiguities is the key to achieving highly reliable positioning, but the solution cannot be validated with classical hypothesis testing methods. The integer aperture estimation theory unifies all existing ambiguity validation tests and provides a new perspective from which to review existing methods, enabling a better understanding of the ambiguity validation problem. This contribution analyses two simple but efficient ambiguity validation methods, the ratio test and the difference test, from three aspects: acceptance region, probability basis and numerical results. The major contributions of this paper can be summarized as follows: (1) The ratio test acceptance region is an overlap of ellipsoids, while the difference test acceptance region is an overlap of half-spaces. (2) The probability basis of these two popular tests is analysed for the first time. The difference test is an approximation to the optimal integer aperture estimator, while the ratio test follows an exponential relationship in probability. (3) The limitations of the two tests are identified for the first time. Both tests may under-evaluate the failure risk if the model is not strong enough or if the float ambiguities fall in particular regions. (4) Extensive numerical results are used to compare the performance of the two tests. The simulation results show that the ratio test outperforms the difference test in some models, while the difference test performs better in others. In particular, in the medium-baseline kinematic model the difference test outperforms the ratio test; this superiority is independent of frequency number, observation noise and satellite geometry, but depends on the success rate and the failure rate tolerance. A smaller failure rate tolerance leads to a larger performance discrepancy.
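The two validation tests can be sketched as follows: given the float ambiguity vector, its covariance matrix and the two closest integer candidates, the ratio test compares the quotient of the squared covariance-weighted distances against a critical value, while the difference test compares their difference. The threshold values below are illustrative placeholders, not the critical values analysed in the paper.

```python
import numpy as np

def ambiguity_tests(a_float, Q, candidates, ratio_c=2.0, diff_d=10.0):
    """Ratio test and difference test for GNSS ambiguity validation.
    a_float: float ambiguity vector; Q: its covariance matrix;
    candidates: the best and second-best integer vectors, in that order.
    ratio_c and diff_d are illustrative thresholds only."""
    Qinv = np.linalg.inv(Q)
    def d2(z):  # squared distance in the metric defined by Q
        res = a_float - z
        return float(res @ Qinv @ res)
    d_best, d_second = d2(candidates[0]), d2(candidates[1])
    ratio_ok = d_second / d_best >= ratio_c   # ratio test: accept if quotient large
    diff_ok = d_second - d_best >= diff_d     # difference test: accept if gap large
    return ratio_ok, diff_ok
```

Both tests accept the fixed solution only when the runner-up candidate is clearly worse; they differ in whether "clearly worse" is measured multiplicatively (ellipsoidal acceptance region) or additively (half-space acceptance region), matching point (1) above.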
Abstract:
The use of Wireless Sensor Networks (WSNs) for vibration-based Structural Health Monitoring (SHM) has become a promising approach due to advantages such as low cost and fast, flexible deployment. However, inherent technical issues such as data asynchronicity and data loss have prevented these systems from being extensively used. Recently, several SHM-oriented WSNs have been proposed and are believed to be able to overcome a large number of technical uncertainties. Nevertheless, there is limited research verifying the applicability of these WSNs to demanding SHM applications such as modal analysis and damage identification. Based on a brief review, this paper first shows that Data Synchronization Error (DSE) is the most inherent factor amongst the uncertainties of SHM-oriented WSNs. The effects of this factor are then investigated on the outcomes and performance of the most robust Output-only Modal Analysis (OMA) techniques when merging data from multiple sensor setups. The two OMA families selected for this investigation are Frequency Domain Decomposition (FDD) and data-driven Stochastic Subspace Identification (SSI-data), as both have been widely applied in the past decade. Accelerations collected by a wired sensory system on a large-scale laboratory bridge model are used as benchmark data after a certain level of noise is added, to account for the higher presence of this factor in SHM-oriented WSNs. From this source, a large number of simulations were made to generate multiple DSE-corrupted datasets to facilitate statistical analyses. The results of this study show the robustness of FDD and the precautions needed for the SSI-data family when dealing with DSE at a relaxed level. Finally, the combination of preferred OMA techniques, and the use of channel projection for the time-domain OMA technique to cope with DSE, are recommended.
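As a minimal illustration of the FDD family investigated above: at each frequency line, the cross-spectral density matrix of the measured accelerations is decomposed by SVD; peaks in the first singular value indicate natural frequencies, and the corresponding singular vector approximates the mode shape. This sketch assumes SciPy's `csd` for spectral estimation and a single dominant mode; it is not the authors' implementation.

```python
import numpy as np
from scipy.signal import csd

def fdd(acc, fs, nperseg=256):
    """Frequency Domain Decomposition sketch.
    acc: samples x channels acceleration array; fs: sampling rate [Hz].
    Returns the frequency of the dominant peak and its mode-shape estimate."""
    nch = acc.shape[1]
    f, _ = csd(acc[:, 0], acc[:, 0], fs=fs, nperseg=nperseg)
    G = np.zeros((len(f), nch, nch), dtype=complex)
    for i in range(nch):                      # cross-spectral density matrix
        for j in range(nch):
            _, G[:, i, j] = csd(acc[:, i], acc[:, j], fs=fs, nperseg=nperseg)
    s1 = np.array([np.linalg.svd(G[k], compute_uv=False)[0] for k in range(len(f))])
    peak = int(np.argmax(s1))                 # dominant first singular value
    U, _, _ = np.linalg.svd(G[peak])
    return f[peak], U[:, 0]                   # frequency and mode shape
```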
Abstract:
This paper addresses the problem of determining optimal designs for biological process models with intractable likelihoods, with the goal of parameter inference. The Bayesian approach is to choose a design that maximises the mean of a utility, where the utility is a function of the posterior distribution; its estimation therefore requires likelihood evaluations. However, many problems in experimental design involve models with intractable likelihoods, that is, likelihoods that are neither available analytically nor computable in a reasonable amount of time. We propose a novel solution using indirect inference (II), a well-established method in the literature, and the Markov chain Monte Carlo (MCMC) algorithm of Müller et al. (2004). Indirect inference employs an auxiliary model with a tractable likelihood in conjunction with the generative model, the assumed true model of interest, which has an intractable likelihood. Our approach is to estimate a map between the parameters of the generative and auxiliary models, using simulations from the generative model. An II posterior distribution is formed to expedite utility estimation. We also present a modification to the utility that allows the Müller algorithm to sample from a substantially sharpened utility surface, with little computational effort. Unlike competing methods, the II approach can handle complex design problems for models with intractable likelihoods on a continuous design space, with possible extension to many observations. The methodology is demonstrated using two stochastic models: a simple tractable death process used to validate the approach, and a motivating stochastic model for the population evolution of macroparasites.
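The core II ingredient, estimating a map between generative and auxiliary parameters from simulations of the generative model, can be sketched with a toy stand-in model. The paper's actual models are a death process and a macroparasite model; everything below (the lognormal stand-in, the linear map) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_generative(theta, n=200):
    """Stand-in generative model (treated as if its likelihood were
    intractable): exponential of a Gaussian shifted by theta."""
    return np.exp(theta + 0.5 * rng.standard_normal(n))

def auxiliary_fit(y):
    """Auxiliary model with a tractable likelihood (lognormal);
    its MLE for the location parameter is the mean of log(y)."""
    return np.log(y).mean()

# Estimate the map theta -> auxiliary parameter by regressing
# auxiliary fits on the generative parameters used to simulate.
thetas = np.linspace(-1.0, 1.0, 21)
phis = np.array([auxiliary_fit(simulate_generative(t)) for t in thetas])
slope, intercept = np.polyfit(thetas, phis, 1)
```

In this toy case the map is close to the identity (slope near 1, intercept near 0); in the paper, such a fitted map lets the tractable auxiliary likelihood stand in for the intractable one when forming the II posterior used for utility estimation.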
Abstract:
A plasma-assisted concurrent RF sputtering technique for the fabrication of a biocompatible, functionally graded CaP-based interlayer on Ti-6Al-4V orthopedic alloy is reported. Each layer in the coating is designed to meet a specific functionality. The layer adherent to the metal features an elevated content of Ti and supports excellent ceramic-metal interfacial stability. The middle layer features a nanocrystalline structure and mimics natural bone apatites. The technique allows one to reproduce Ca/P ratios intrinsic to the major natural calcium phosphates. The surface morphology of the outer layer, a few to a few tens of nanometers thick, has been tailored to fit the requirements for bio-molecule/protein attachment. Various material and surface characterization techniques confirm that the optimal surface morphology of the outer layer is achieved for process conditions yielding a nanocrystalline structure in the middle layer. Preliminary cell culturing tests confirm the link between the tailored nano-scale surface morphology, the parameters of the middle nanostructured layer, and the overall biocompatibility of the coating.
Abstract:
Bayesian experimental design is a fast-growing area of research with many real-world applications. As computational power has increased over the years, so has the development of simulation-based design methods, which involve a number of algorithms, such as Markov chain Monte Carlo, sequential Monte Carlo and approximate Bayes methods, allowing more complex design problems to be solved. The Bayesian framework provides a unified approach for incorporating prior information and/or uncertainties regarding the statistical model with a utility function that describes the experimental aims. In this paper, we provide a general overview of the concepts involved in Bayesian experimental design, focusing on some of the more commonly used Bayesian utility functions and methods for their estimation, as well as a number of algorithms used to search over the design space for the Bayesian optimal design. We also discuss other computational strategies for further research in Bayesian optimal design.
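A minimal sketch of the central idea, choosing the design that maximises a Monte Carlo estimate of the expected utility, for a toy conjugate model y ~ N(theta*d, 1) with prior theta ~ N(0, 1) and Kullback-Leibler information gain as the utility. All modelling choices here are illustrative assumptions, not the utilities surveyed in the paper.

```python
import numpy as np

def kl_gain(d, y):
    """KL divergence from prior N(0,1) to the posterior for y ~ N(theta*d, 1),
    theta ~ N(0,1): the posterior is N(d*y/(1+d^2), 1/(1+d^2))."""
    s2 = 1.0 / (1.0 + d ** 2)
    mu = d * y * s2
    return 0.5 * (s2 + mu ** 2 - 1.0 - np.log(s2))

def expected_utility(d, n_draws=2000, seed=0):
    """Monte Carlo mean of the utility over prior and prior-predictive draws."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(0.0, 1.0, n_draws)   # prior draws
    y = rng.normal(theta * d, 1.0)          # simulated data at design d
    return kl_gain(d, y).mean()

# Grid search over a one-dimensional design space.
designs = np.linspace(0.0, 3.0, 31)
best = designs[np.argmax([expected_utility(d) for d in designs])]
```

For this toy model the expected gain is 0.5*log(1 + d^2), so larger designs are always more informative; realistic design problems trade such gains off against cost or feasibility constraints, which is where the search algorithms discussed in the paper come in.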
Abstract:
The efficiency of nitrogen (N) application rates of 0, 120, 180 and 240 kg N ha−1, in combination with low or medium water levels, in the cultivation of winter wheat (Triticum aestivum L.) cv. Kupava was studied for the 2005–2006 and 2006–2007 growing seasons in the Khorezm region of Uzbekistan. The results show that the effectiveness of the applied N fertilizer rates depended on the initial soil Nmin (NO3-N + NH4-N) levels measured at wheat seeding. When the Nmin content in the 0–50 cm soil layer was lower than 10 mg kg−1 at wheat seeding in 2005, an N rate of 180 kg ha−1 was found to be the most effective for achieving high grain yields of high quality. With a higher Nmin content of about 30 mg kg−1, as was the case in the 2006 season, 120 kg N ha−1 was determined to be the technical and economic optimum. The temporal course of N2O emissions during winter wheat cultivation for the two water levels studied shows that emissions were strongly influenced by irrigation and N fertilization. Extremely high emissions were measured immediately after fertilizer application events that were combined with irrigation events. Given the high impact of N-fertilizer and irrigation-water management on N2O emissions, it can be concluded that present N-management practices should be modified to mitigate N2O emissions and to achieve higher fertilizer use efficiency.
Abstract:
Background: Malaria rapid diagnostic tests (RDTs) are increasingly used by remote health personnel with minimal training in laboratory techniques. RDTs must, therefore, be as simple, safe and reliable as possible. Transfer of blood from the patient to the RDT is critical to safety and accuracy, and poses a significant challenge to many users. Blood transfer devices were evaluated for accuracy and precision of the volume transferred, safety and ease of use, to identify the most appropriate devices for use with RDTs in routine clinical care. Methods: Five devices (a loop, a straw-pipette, a calibrated pipette, a glass capillary tube, and a new inverted cup device) were evaluated in Nigeria, the Philippines and Uganda. The 227 participating health workers used each device to transfer blood from a simulated finger-prick site to filter paper. For each transfer, the number of attempts required to collect and deposit blood and any spilling of blood during transfer were recorded. Perceptions of the ease of use and safety of each device were recorded for each participant. The blood volume transferred was calculated from the area of the blood spots deposited on filter paper. Results: The overall mean volumes transferred by the devices differed significantly from the target volume of 5 microliters (p < 0.001). The inverted cup (4.6 microliters) most closely approximated the target volume. The glass capillary was excluded from the volume analysis as the estimation method used is not compatible with this device. The calibrated pipette accounted for the largest proportion of blood exposures (23/225, 10%); exposures ranged from 2% to 6% for the other four devices. The inverted cup was considered easiest to use in blood collection (206/226, 91%); the straw-pipette and calibrated pipette were rated lowest (143/225 [64%] and 135/225 [60%] respectively). Overall, the inverted cup was the most preferred device (72%, 163/227), followed by the loop (61%, 138/227).
Conclusions: The performance of blood transfer devices varied in this evaluation of accuracy, blood safety, ease of use, and user preference. The inverted cup design achieved the highest overall performance, while the loop also performed well. These findings have relevance for any point-of-care diagnostics that require blood sampling.
Abstract:
Background: Accurate diagnosis is essential for prompt and appropriate treatment of malaria. While rapid diagnostic tests (RDTs) offer great potential to improve malaria diagnosis, the sensitivity of RDTs has been reported to be highly variable. One possible factor contributing to variable test performance is the diversity of parasite antigens. This is of particular concern for Plasmodium falciparum histidine-rich protein 2 (PfHRP2)-detecting RDTs, since PfHRP2 has been reported to be highly variable in isolates of the Asia-Pacific region. Methods: The pfhrp2 exon 2 fragment from 458 isolates of P. falciparum collected from 38 countries was amplified and sequenced. For a subset of 80 isolates, the exon 2 fragment of histidine-rich protein 3 (pfhrp3) was also amplified and sequenced. DNA sequence and statistical analysis of the variation observed in these genes was conducted. The potential impact of pfhrp2 variation on RDT detection rates was examined by analysing the relationship between sequence characteristics of this gene and the results of the WHO product testing of malaria RDTs: Round 1 (2008), for 34 PfHRP2-detecting RDTs. Results: Sequence analysis revealed extensive variation in the number and arrangement of various repeats encoded by the genes in parasite populations worldwide. However, no statistically robust correlation between gene structure and RDT detection rate for P. falciparum parasites at 200 parasites per microlitre was identified. Conclusions: The results suggest that despite extreme sequence variation, diversity of PfHRP2 does not appear to be a major cause of RDT sensitivity variation.