980 results for Drop Test Equipment.
Abstract:
GIRSO Congress, Lille, April 2011
Abstract:
Editorial
Abstract:
In this work we introduce a new mathematical tool for the optimization of routes, topology design, and energy efficiency in wireless sensor networks. We introduce a vector field formulation that models communication in the network: routing is performed in the direction of this vector field at every location of the network, and the magnitude of the vector field at each location represents the density of data traffic passing through that location. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area. With this formulation, we introduce mathematical machinery based on partial differential equations closely resembling Maxwell's equations in electrostatics, and we show that in order to minimize the cost, the routes should be found from the solution of these partial differential equations. In our formulation, the sensors are sources of information, analogous to positive charges in electrostatics; the destinations are sinks of information, analogous to negative charges; and the network is analogous to a non-homogeneous dielectric medium with a variable dielectric constant (or permittivity coefficient).

As one application of this vector field model, we offer a scheme for energy-efficient routing. The scheme raises the permittivity coefficient in regions of the network where nodes have high residual energy and lowers it in regions where the nodes have little energy left. Our simulations show that this method yields a significant increase in network lifetime compared to shortest-path and weighted shortest-path schemes. Our initial focus is on the case of a single destination in the network; we later extend the approach to multiple destinations. With multiple destinations, the network must be partitioned into several areas known as the regions of attraction of the destinations, and each destination is responsible for collecting all messages generated in its region of attraction. The difficulty of the optimization problem in this case is how to define the regions of attraction and how much communication load to assign to each destination to optimize the performance of the network. We use our vector field model to solve this problem: we define a conservative vector field, which can therefore be written as the gradient of a scalar field (also known as a potential field), and we show that in the optimal assignment of the communication load to the destinations, the potential field takes equal values at the locations of all the destinations.

Another application of the vector field model is finding the optimal locations of the destinations in the network. We show that the vector field gives the gradient of the cost function with respect to the locations of the destinations, and based on this fact we suggest an algorithm, to be applied during the design phase of a network, that relocates the destinations to reduce the communication cost. The performance of our proposed schemes is confirmed by several examples and simulation experiments.
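To make the electrostatic analogy concrete, the sketch below solves a discretized version of the governing equation ∇·(ε∇φ) = -ρ on a small grid, with a sensor as a positive charge, the destination as a negative charge, and the permittivity ε lowered where residual energy is low, then walks greedily along the flux D = -ε∇φ. The grid size, charge placement, and solver settings are illustrative assumptions, not the thesis implementation.

```python
# Minimal sketch of the electrostatic routing analogy (assumed grid size,
# charge placement, and solver settings; not the thesis implementation).
import numpy as np

N = 40
rho = np.zeros((N, N))
rho[5, 5] = 1.0            # sensor: positive "charge" (source of information)
rho[30, 30] = -1.0         # destination: negative "charge" (sink)

eps = np.ones((N, N))      # permittivity ~ residual energy
eps[15:25, :] = 0.2        # a low-energy band that routes should avoid

# Jacobi iteration for div(eps * grad(phi)) = -rho with face-averaged eps
phi = np.zeros((N, N))
eE = 0.5 * (eps[1:-1, 2:] + eps[1:-1, 1:-1])
eW = 0.5 * (eps[1:-1, :-2] + eps[1:-1, 1:-1])
eN = 0.5 * (eps[2:, 1:-1] + eps[1:-1, 1:-1])
eS = 0.5 * (eps[:-2, 1:-1] + eps[1:-1, 1:-1])
for _ in range(4000):
    phi[1:-1, 1:-1] = (eE * phi[1:-1, 2:] + eW * phi[1:-1, :-2] +
                       eN * phi[2:, 1:-1] + eS * phi[:-2, 1:-1] +
                       rho[1:-1, 1:-1]) / (eE + eW + eN + eS)

# Toy greedy walk along the flux D = -eps * grad(phi), sensor to sink
gy, gx = np.gradient(phi)
Dy, Dx = -eps * gy, -eps * gx
i, j, path = 5, 5, [(5, 5)]
for _ in range(500):
    if (i, j) == (30, 30):
        break
    nbrs = [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= i + di < N and 0 <= j + dj < N]
    # step to the neighbour best aligned with the local flux direction
    i, j = max(nbrs, key=lambda c: Dy[i, j] * (c[0] - i) + Dx[i, j] * (c[1] - j))
    path.append((i, j))
print("route length:", len(path), "reached sink:", path[-1] == (30, 30))
```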
In another part of this work, we focus on the notions of responsiveness and conformance of TCP traffic in communication networks. We introduce the notion of responsiveness for TCP aggregates and define it as the degree to which a TCP aggregate reduces its sending rate in response to packet drops. We define metrics that describe the responsiveness of TCP aggregates and suggest two methods for determining their values. The first method is based on a test in which we intentionally drop a few packets from the aggregate and measure the resulting rate decrease of that aggregate. This kind of test is not robust to multiple simultaneous tests performed at different routers. We make the test robust to simultaneous tests by borrowing ideas from the CDMA approach to multiple-access channels in communication theory. Based on this approach, we introduce a responsiveness test for aggregates that we call the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control; a distinguishing feature of our congestion control scheme is that it maintains a degree of fairness among different aggregates. We then modify CAPM to obtain methods for estimating the proportion of an aggregate of TCP traffic that does not conform to protocol specifications and hence may belong to a DDoS attack. These methods work by intentionally perturbing the aggregate, dropping a very small number of packets from it, and observing the aggregate's response. We offer two methods for conformance testing. In the first, we apply the perturbation tests to SYN packets sent at the start of the TCP three-way handshake and use the fact that the rate of ACK packets exchanged in the handshake should follow the rate of perturbations. In the second, we apply the perturbation tests to TCP data packets and use the fact that the rate of retransmitted data packets should follow the rate of perturbations. Both methods use signature-based perturbations, meaning that packet drops are performed at a rate given by a function of time. We exploit the analogy between our problem and multiple-access communication to find suitable signatures; specifically, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation of our methods. As a result of orthogonality, performance does not degrade due to cross-interference among simultaneously testing routers. We demonstrate the efficacy of our methods through mathematical analysis and extensive simulation experiments.
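As a rough illustration of why orthogonal signatures keep simultaneous tests from interfering, the sketch below (with an assumed signature length, response coefficients, and noise level; not the thesis code) modulates two routers' drop-rate perturbations with rows of a Walsh-Hadamard matrix and recovers each router's response coefficient by correlation.

```python
# Orthogonal-signature perturbation sketch: each testing router modulates
# its packet-drop rate with a Walsh-Hadamard row; correlating the observed
# aggregate response with a router's own signature isolates that router's
# response, the others cancelling by orthogonality. All numbers are
# illustrative assumptions.
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Walsh-Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

chips = 8
H = hadamard(chips)
sig_a, sig_b = H[1], H[2]          # signatures assigned to two routers

resp_a, resp_b = 0.9, 0.3          # per-router responsiveness (unknown in practice)
rng = np.random.default_rng(0)
# observed rate change: superposition of both routers' responses plus noise
observed = resp_a * sig_a + resp_b * sig_b + 0.05 * rng.standard_normal(chips)

est_a = observed @ sig_a / chips   # router A's correlator
est_b = observed @ sig_b / chips
print(f"estimated responsiveness: A={est_a:.2f}, B={est_b:.2f}")
```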
Abstract:
Timing-related defects are major contributors to test escapes and in-field reliability problems for very-deep-submicrometer integrated circuits. Small delay variations induced by crosstalk, process variations, power-supply noise, as well as resistive opens and shorts, can potentially cause timing failures in a design, thereby leading to quality and reliability concerns. We present a test-grading technique that uses the method of output deviations for screening small-delay defects (SDDs). A new gate-delay defect probability measure is defined to model delay variations for nanometer technologies. The proposed technique intelligently selects the best set of patterns for SDD detection from an n-detect pattern set generated using timing-unaware automatic test-pattern generation (ATPG). It offers significantly lower computational complexity and excites a larger number of long paths compared to a current-generation commercial timing-aware ATPG tool. Our results also show that, for the same pattern count, the selected patterns provide more effective coverage ramp-up than timing-aware ATPG and a recent pattern-selection method for random SDDs potentially caused by resistive shorts, resistive opens, and process variations. © 2010 IEEE.
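The following sketch conveys the flavor of deviation-based pattern grading with stand-in numbers: each pattern is scored by per-output deviation values, and a greedy pass selects the patterns with the largest marginal gain. The data model, scores, and budget are assumptions for illustration; the paper's actual measure is derived from gate-delay defect probabilities propagated through the circuit.

```python
# Greedy deviation-based pattern selection (stand-in deviation values;
# the paper derives them from gate-delay defect probabilities).
import numpy as np

rng = np.random.default_rng(1)
n_patterns, n_outputs, budget = 200, 32, 20
# deviation[i, j]: output-deviation score of pattern i at primary output j
deviation = rng.random((n_patterns, n_outputs)) ** 4

best = np.zeros(n_outputs)        # best deviation achieved so far per output
selected = []
for _ in range(budget):
    # marginal gain of each candidate pattern over the current per-output best
    gain = np.maximum(deviation, best).sum(axis=1) - best.sum()
    pick = int(np.argmax(gain))
    selected.append(pick)
    best = np.maximum(best, deviation[pick])
print("selected patterns:", selected)
```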
Abstract:
BACKGROUND: In a time-course microarray experiment, the expression level of each gene is observed across a number of time points in order to characterize the temporal trajectories of the gene-expression profiles. For many of these experiments, the scientific aim is the identification of genes whose trajectories depend on an experimental or phenotypic factor. There is an extensive recent body of literature on statistical methodology for this analytical problem. Most existing methods estimate the time-course trajectories using parametric or non-parametric mean regression, and the sensitivity of these regression methods to outliers, an issue that is well documented in the statistical literature, should be of concern when analyzing microarray data. RESULTS: In this paper, we propose a robust testing method for identifying genes whose expression time profiles depend on a factor. Furthermore, we propose a multiple testing procedure to adjust for multiplicity. CONCLUSIONS: Through an extensive simulation study, we illustrate the performance of our method. Finally, we report results from applying the method to a case study and discuss potential extensions.
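As a concrete, simplified analogue of this approach, the sketch below rank-transforms expression within each time point, compares group mean-rank trajectories with a permutation test, and applies a Benjamini-Hochberg adjustment for multiplicity. The statistic, data dimensions, and planted effect are illustrative assumptions, not the authors' exact procedure.

```python
# Rank-based time-course comparison with permutation p-values and
# Benjamini-Hochberg adjustment (illustrative, not the authors' exact test).
import numpy as np

rng = np.random.default_rng(2)

def rank_stat(x, labels):
    """Sum over time points of the between-group difference in mean ranks."""
    ranks = x.argsort(axis=0).argsort(axis=0).astype(float)  # ranks per time point
    return np.abs(ranks[labels == 0].mean(axis=0) - ranks[labels == 1].mean(axis=0)).sum()

def perm_pvalue(x, labels, n_perm=500):
    obs = rank_stat(x, labels)
    hits = sum(rank_stat(x, rng.permutation(labels)) >= obs for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)

def benjamini_hochberg(p):
    p = np.asarray(p); m = len(p); order = np.argsort(p)
    q = np.empty(m)
    q[order] = np.minimum.accumulate((p[order] * m / np.arange(1, m + 1))[::-1])[::-1]
    return np.clip(q, 0, 1)

# toy data: 30 genes x 12 samples x 5 time points; gene 0 has a group effect
n_genes, n_samp, n_time = 30, 12, 5
labels = np.repeat([0, 1], n_samp // 2)
data = rng.standard_normal((n_genes, n_samp, n_time))
data[0, labels == 1] += np.linspace(0, 2, n_time)   # diverging trajectory
pvals = [perm_pvalue(g, labels) for g in data]
print("BH-adjusted p for gene 0:", benjamini_hochberg(pvals)[0])
```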
Abstract:
Gemstone Team HEAT (Human Energy Acquisition Technology)
Improving the lens design and performance of a contemporary electromagnetic shock wave lithotripter.
Abstract:
The efficiency of shock wave lithotripsy (SWL), a noninvasive first-line therapy for millions of nephrolithiasis patients, has not improved substantially in the past two decades, especially in regard to stone clearance. Here, we report a new acoustic lens design for a contemporary electromagnetic (EM) shock wave lithotripter, based on recently acquired knowledge of the key lithotripter field characteristics that correlate with efficient and safe SWL. The new lens design concomitantly addresses three fundamental drawbacks of EM lithotripters: narrow focal width, a nonidealized pulse profile, and significant misalignment of the acoustic focus and cavitation activity with the target stone at high output settings. Key design features and the performance of the new lens were evaluated using model calculations and experimental measurements against the original lens under comparable acoustic pulse energy (E+) of 40 mJ. The -6-dB focal width of the new lens was enhanced from 7.4 to 11 mm at this energy level, and the peak pressure (41 MPa) and maximum cavitation activity were both realigned to within 5 mm of the lithotripter focus. Stone comminution produced by the new lens was either statistically improved or similar to that of the original lens under various in vitro test conditions and was significantly improved in vivo in a swine model (89% vs. 54%, P = 0.01), while tissue injury was minimal using a clinical treatment protocol. The general principle and associated techniques described in this work can be applied to the design improvement of all EM lithotripters.
Abstract:
Background: Acute febrile respiratory illnesses, including influenza, account for a large proportion of ambulatory care visits worldwide. In the developed world, these encounters commonly result in unwarranted antibiotic prescriptions; data from more resource-limited settings are lacking. The purpose of this study was to describe the epidemiology of influenza among outpatients in southern Sri Lanka and to determine if access to rapid influenza test results was associated with decreased antibiotic prescriptions.
Methods: In this pretest-posttest study, consecutive patients presenting from March 2013 to April 2014 to the Outpatient Department of the largest tertiary care hospital in southern Sri Lanka were surveyed for influenza-like illness (ILI). Patients meeting World Health Organization criteria for ILI (acute onset of fever ≥38.0°C and cough in the prior 7 days) were enrolled. Consenting patients were administered a structured questionnaire, physical examination, and nasal/nasopharyngeal sampling. Rapid influenza A/B testing (Veritor System, Becton Dickinson) was performed on all patients, but test results were only released to patients and clinicians during the second phase of the study (December 2013 to April 2014).
Results: We enrolled 397 patients with ILI, with 217 (54.7%) adults ≥12 years and 188 (47.4%) females. A total of 179 (45.8%) tested positive for influenza by rapid testing, with April to July 2013 and September to November 2013 being the periods with the highest proportion of ILI due to influenza. A total of 310 (78.1%) patients with ILI received a prescription for an antibiotic from their outpatient provider. The proportion of patients prescribed antibiotics decreased from 81.4% in the first phase to 66.3% in the second phase (p=.005); among rapid influenza-positive patients, antibiotic prescriptions decreased from 83.7% in the first phase to 56.3% in the second phase (p=.001). On multivariable analysis, having a positive rapid influenza test available to clinicians was associated with decreased antibiotic use (OR 0.20, 95% CI 0.05-0.82).
Conclusions: Influenza virus accounted for almost 50% of acute febrile respiratory illness in this study, but most patients were prescribed antibiotics. Providing rapid influenza test results to clinicians was associated with fewer antibiotic prescriptions, but overall prescription of antibiotics remained high. In this developing country setting, a multi-faceted approach that includes improved access to rapid diagnostic tests may help decrease antibiotic use and combat antimicrobial resistance.
Abstract:
BACKGROUND: Measurement of CD4+ T-lymphocytes (CD4) is a crucial parameter in the management of HIV patients, particularly in determining eligibility to initiate antiretroviral treatment (ART). A number of technologies exist for CD4 enumeration, with considerable variation in cost, complexity, and operational requirements. We conducted a systematic review of the performance of technologies for CD4 enumeration. METHODS AND FINDINGS: Studies were identified by searching the electronic databases MEDLINE and EMBASE using a pre-defined search strategy. Data on test accuracy and precision included bias and limits of agreement with a reference standard, and misclassification probabilities around CD4 thresholds of 200 and 350 cells/μl over a clinically relevant range. The secondary outcome measure was test imprecision, expressed as the percent coefficient of variation. Thirty-two studies evaluating 15 CD4 technologies were included, of which less than half presented data on bias and misclassification compared to the same reference technology. At CD4 counts <350 cells/μl, bias ranged from -35.2 to +13.1 cells/μl, while at counts >350 cells/μl, bias ranged from -70.7 to +47 cells/μl, compared to the BD FACSCount as a reference technology. Misclassification around the threshold of 350 cells/μl ranged from 1% to 29% for upward classification, resulting in under-treatment, and from 7% to 68% for downward classification, resulting in overtreatment. Less than half of these studies reported within-laboratory precision or reproducibility of the CD4 values obtained. CONCLUSIONS: A wide range of bias and percent misclassification around treatment thresholds was reported for the CD4 enumeration technologies included in this review, with few studies reporting assay precision. The lack of standardised methodology for test evaluation, including the use of different reference standards, is a barrier to assessing relative assay performance and could hinder the introduction of new point-of-care assays in countries where they are most needed.
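For readers unfamiliar with these outcome measures, the sketch below computes them on simulated counts: Bland-Altman bias with 95% limits of agreement against a reference counter, and upward/downward misclassification rates around the 350 cells/μl treatment threshold. The simulated bias and noise are arbitrary stand-ins, not values from any reviewed device.

```python
# Bland-Altman bias/limits of agreement and threshold misclassification
# on simulated CD4 counts (illustrative stand-in data).
import numpy as np

rng = np.random.default_rng(3)
reference = rng.uniform(50, 800, 500)          # reference CD4 counts (cells/ul)
test = reference + rng.normal(-20, 40, 500)    # index test with bias and noise

diff = test - reference
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))

threshold = 350
# upward: truly below threshold but classified above -> under-treatment
upward = np.mean((reference < threshold) & (test >= threshold))
# downward: truly above threshold but classified below -> overtreatment
downward = np.mean((reference >= threshold) & (test < threshold))
print(f"bias={bias:.1f} cells/ul, LoA=({loa[0]:.1f}, {loa[1]:.1f}), "
      f"upward={upward:.1%}, downward={downward:.1%}")
```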
Abstract:
Association studies of quantitative traits have often relied on methods in which a normal distribution of the trait is assumed. However, quantitative phenotypes from complex human diseases are often censored, highly skewed, or contaminated with outlying values. We recently developed a rank-based association method that takes censoring into account and makes no distributional assumptions about the trait. In this study, we applied our new method to age-at-onset data on ALDX1 and ALDX2. Both traits are highly skewed (skewness > 1.9) and often censored. We performed a whole-genome association study of age at onset of the ALDX1 trait using Illumina single-nucleotide polymorphisms. Only slightly more than 5% of markers were significant. However, we identified two regions, on chromosomes 14 and 15, each of which has at least four significant markers clustering together. These two regions may harbor genes that regulate age at onset of ALDX1 and ALDX2. Future fine mapping of these two regions with densely spaced markers is warranted.
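As a minimal sketch of one distribution-free option for censored, skewed age-at-onset data (not the authors' published method), the code below compares onset between two genotype groups using Gehan's generalized Wilcoxon statistic with a permutation p-value. Sample size, censoring rate, and effect size are invented for illustration.

```python
# Gehan generalized Wilcoxon test for right-censored age-at-onset data,
# with a permutation p-value (illustrative sketch).
import numpy as np

rng = np.random.default_rng(4)

def gehan_u(time, event, group):
    """Pairwise scores: +1 when a group-0 subject definitely outlasts a group-1 subject."""
    a, b = group == 0, group == 1
    ta, da = time[a], event[a]
    tb, db = time[b], event[b]
    later = (ta[:, None] > tb[None, :]) & (db[None, :] == 1)   # B's onset observed, earlier
    earlier = (ta[:, None] < tb[None, :]) & (da[:, None] == 1) # A's onset observed, earlier
    return (later.astype(int) - earlier.astype(int)).sum()

def perm_p(time, event, group, n=999):
    obs = abs(gehan_u(time, event, group))
    hits = sum(abs(gehan_u(time, event, rng.permutation(group))) >= obs for _ in range(n))
    return (hits + 1) / (n + 1)

# toy data: 80 subjects, ~30% right-censored, group 1 onsets earlier
n_subj = 80
group = rng.integers(0, 2, n_subj)
age = rng.gamma(9, 3, n_subj) - 4 * group
event = (rng.random(n_subj) > 0.3).astype(int)   # 1 = onset observed, 0 = censored
print("permutation p-value:", perm_p(age, event, group))
```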
Abstract:
The screening and treatment of latent tuberculosis (TB) infection reduces the risk of progression to active disease and is currently recommended for HIV-infected patients. The aim of this study is to evaluate, in a low TB-incidence setting, the potential contribution of an interferon-gamma release assay in response to the mycobacterial latency antigen Heparin-Binding Haemagglutinin (HBHA-IGRA) to the detection of Mycobacterium tuberculosis infection in HIV-infected patients.
Abstract:
In this work we present a statistical analysis of the Mathematics Prior Knowledge Test (TCPM), designed to measure the initial level of basic mathematical skills and knowledge of students entering science and technology degree programs at the Facultad de Ciencias Físico, Matemáticas y Naturales of the Universidad Nacional de San Luis. The aim of the research is to assess the diagnostic instrument with a view to its eventual later use. To determine the soundness of the test, we analyzed the quality, discrimination, and difficulty index of the items, as well as the validity and reliability of the diagnostic; for this statistical analysis we used the TestGraf and SPSS programs. The test was administered to 698 students entering the University in the 2002 academic year. From this research we were able to infer that the diagnostic proved difficult for the population tested, of acceptable reliability, and of good item quality, with varied difficulty and acceptable discrimination.
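To make the item statistics concrete, here is a small sketch on simulated responses (the response model, dimensions, and seed are assumptions) computing the quantities analyzed: the item difficulty index, point-biserial discrimination against the rest-of-test score, and KR-20, one standard reliability coefficient of the kind reported by tools such as TestGraf and SPSS.

```python
# Classical test theory item analysis on simulated 0/1 responses:
# difficulty index, point-biserial discrimination, KR-20 reliability.
import numpy as np

rng = np.random.default_rng(5)
n_students, n_items = 698, 25
ability = rng.standard_normal(n_students)
item_loc = rng.uniform(-1.5, 1.5, n_items)
# simulated correct/incorrect matrix from a simple logistic response model
prob = 1 / (1 + np.exp(-(ability[:, None] - item_loc[None, :])))
X = (rng.random((n_students, n_items)) < prob).astype(int)

p = X.mean(axis=0)                        # difficulty index: proportion correct per item
rest = X.sum(axis=1, keepdims=True) - X   # total score excluding the item itself
disc = [np.corrcoef(X[:, j], rest[:, j])[0, 1] for j in range(n_items)]

total = X.sum(axis=1)
k = n_items
kr20 = k / (k - 1) * (1 - (p * (1 - p)).sum() / total.var(ddof=1))
print(f"mean difficulty={p.mean():.2f}, mean discrimination={np.mean(disc):.2f}, "
      f"KR-20={kr20:.2f}")
```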
Abstract:
Numerical predictions produced by the SMARTFIRE fire field model are compared with experimental data. The predictions consist of gas temperatures at several locations within the compartment over a 60 min period. The test fire, produced by a burning wood crib, attained a maximum heat release rate of approximately 11 MW. The fire is intended to represent a non-spreading fire (i.e. a single fuel source) in a moderately sized ventilated room. The experimental data formed part of the CIB Round Robin test series. Two simulations were produced, one with a relatively coarse mesh and the other with a finer mesh. While the SMARTFIRE simulations made use of a simple volumetric heat release rate model, both simulations were found capable of reproducing the overall qualitative results. Both simulations tended to overpredict the measured temperatures; however, the finer-mesh simulation was better able to reproduce the qualitative features of the experimental data. The maximum recorded experimental temperature (1214°C after 39 min) was over-predicted in the fine-mesh simulation by 12%. © 2001 Elsevier Science Ltd. All rights reserved.