948 results for parallel robots, cable driven, underactuated, calibration, sensitivity, accuracy


Relevance:

30.00%

Publisher:

Abstract:

Nowadays, microfluidics is becoming an important technology in many chemical and biological processes and analysis applications. The potential to replace large-scale conventional laboratory instrumentation with miniaturized and self-contained systems, called lab-on-a-chip (LOC) or point-of-care testing (POCT) devices, offers a variety of advantages such as low reagent consumption, faster analysis, and the capability of operating on a massively parallel scale to achieve high throughput. Micro-electro-mechanical-systems (MEMS) technologies enable both the fabrication of miniaturized systems and the development of compact and portable instruments. The work described in this dissertation is directed towards the development of micromachined separation devices for both high-speed gas chromatography (HSGC) and gravitational field-flow fractionation (GrFFF) using MEMS technologies. Concerning HSGC, a complete platform of the three MEMS-based GC core components (injector, separation column and detector) is designed, fabricated and characterized. The microinjector consists of a set of pneumatically driven microvalves based on a polymeric actuating membrane; experimental results demonstrate that it guarantees low dead volumes, fast actuation times, a wide operating temperature range and high chemical inertness. The separation column is an all-silicon microcolumn with a nearly circular channel cross-section, and extensive characterization has shown separation performance very close to the theoretical ideal. A thermal conductivity detector (TCD) is chosen as the most suitable detector for miniaturization, since reducing the detector chamber volume increases mass sensitivity and reduces dead volumes; the micro-TCD shows good sensitivity and a very wide dynamic range. Finally, a feasibility study on miniaturizing a channel suited for GrFFF is performed. The proposed GrFFF microchannel is at an early stage of development, but represents a first step towards the realization of a highly portable and potentially low-cost POCT device for biomedical applications.
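For reference, and not quoted from the dissertation itself, gas-chromatographic column efficiency is conventionally expressed through the number of theoretical plates and the plate height:

    N = 5.545\,\left(\frac{t_R}{w_{1/2}}\right)^{2}, \qquad H = \frac{L}{N}

where t_R is the retention time of a peak, w_{1/2} its width at half height, and L the column length; a column performing "close to the theoretical ideal" approaches the minimum plate height predicted for its geometry.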

Relevance:

30.00%

Publisher:

Abstract:

Despite the several issues faced in the past, the evolutionary trend of silicon has kept its constant pace, and today an ever increasing number of cores is integrated onto the same die. Unfortunately, the extraordinary performance achievable with the many-core paradigm is limited by several factors: memory bandwidth limitations, combined with inefficient synchronization mechanisms, can severely limit the potential computational capabilities. Moreover, the huge HW/SW design space requires accurate and flexible tools to perform architectural explorations and to validate design choices. In this thesis we focus on the aforementioned aspects: a flexible and accurate Virtual Platform targeting a reference many-core architecture has been developed, and it has been used to perform architectural explorations focusing on the instruction-caching architecture and on a hybrid HW/SW synchronization mechanism. Besides architectural implications, another key issue of embedded systems is considered: energy efficiency. Near-Threshold Computing (NTC) is a key research area in the ultra-low-power domain, as it promises a tenfold improvement in energy efficiency compared to super-threshold operation and mitigates thermal bottlenecks. At the same time, the physical implications of modern deep sub-micron technology severely limit the performance and reliability of modern designs. Reliability becomes a major obstacle when operating in the NTC regime: memory operation in particular becomes unreliable and can compromise system correctness. In the present work a novel hybrid memory architecture is devised to overcome these reliability issues and, at the same time, to improve energy efficiency by means of aggressive voltage scaling when allowed by workload requirements. Variability is another major drawback of near-threshold operation: the greatly increased sensitivity to threshold-voltage variations is today a major concern for electronic devices. We therefore introduce a variation-tolerant extension of the baseline many-core architecture; by means of micro-architectural knobs and a lightweight runtime control unit, the baseline architecture becomes dynamically tolerant to variations.
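As general background (not specific to this thesis), the appeal of near-threshold operation follows from the roughly quadratic dependence of switching energy on the supply voltage:

    E_{\mathrm{sw}} \approx \alpha\, C_{\mathrm{eff}}\, V_{dd}^{2}

where \alpha is the activity factor and C_{\mathrm{eff}} the effective switched capacitance. Lowering V_{dd} from its nominal value towards the threshold voltage V_{th} cuts the energy per operation quadratically, at the cost of longer gate delays and a much higher sensitivity to V_{th} variation and to memory failures, which is exactly the trade-off the thesis addresses.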

Relevance:

30.00%

Publisher:

Abstract:

In many areas of industrial manufacturing, for example in the automotive industry, digital test models (so-called digital mock-ups) are used so that the development of complex machines can be supported by computer systems as well as possible. Motion planning algorithms play an important role here in guaranteeing that these digital prototypes can also be assembled without collisions. Over the last decades, sampling-based methods have proven particularly successful for this task: they generate a large number of random placements of the object to be installed or removed and use a collision detection mechanism to check each placement for validity. Collision detection therefore plays an essential role in the design of efficient motion planning algorithms. A difficulty for this class of planners are so-called "narrow passages", which arise wherever the freedom of movement of the objects being planned for is strongly restricted. In such regions it can be hard to find a sufficient number of collision-free samples, and more sophisticated techniques may then be necessary to achieve good algorithm performance.

This thesis consists of two parts. In the first part we investigate parallel collision detection algorithms. Since we target their use within sampling-based motion planners, we choose a problem setting in which the same two objects are always tested for collision, but in a large number of different relative placements. We implement and compare several methods that rely on bounding volume hierarchies (BVHs) and hierarchical grids as acceleration structures; all described methods have been parallelized across multiple CPU cores. In addition, we compare different CUDA kernels for performing BVH-based collision tests on the GPU. Besides different distributions of the work among the parallel GPU threads, we investigate the effect of different memory access patterns on the performance of the resulting algorithms. We further present a series of approximate collision tests based on the described methods: if a lower test accuracy can be tolerated, a further performance improvement can be achieved.

In the second part of the thesis we describe a parallel, sampling-based motion planner of our own design for handling highly complex problems with several narrow passages. The method works in two phases. The basic idea is to deliberately allow small errors in the first planning phase in order to increase planning efficiency, and then to repair the resulting path in a second phase. The planner used in phase I is based on so-called Expansive Space Trees. In addition, we have equipped the planner with an operation that pushes the object free of minor collisions, which increases efficiency in regions with restricted freedom of movement. Optionally, our implementation allows the use of approximate collision tests; this further reduces the accuracy of the first planning phase, but also yields a further performance gain.

The motion paths resulting from phase I may therefore not be completely collision-free. To repair these paths, we have designed a novel planning algorithm that, restricted locally to a small neighborhood around the existing path, plans a new, collision-free motion path.

We have tested the described algorithm on a class of new, difficult metal puzzles, some of which contain several narrow passages. To the best of our knowledge, no collection of comparably complex benchmarks is publicly available, nor did we find descriptions of comparably complex benchmarks in the motion planning literature.
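The workload pattern targeted in the first part, the same two objects tested for collision under many candidate poses, can be sketched as follows. This is a purely illustrative Python example, not the parallel BVH/grid implementation from the thesis: the acceleration structures are replaced by a single axis-aligned bounding-box (AABB) rejection test followed by a brute-force proximity check, and all function names and parameters are made up.

    # Illustrative sketch only: the same two point clouds, many candidate poses.
    # A real planner would use BVHs / hierarchical grids and run poses in parallel.
    import numpy as np

    def aabb(points):
        return points.min(axis=0), points.max(axis=0)

    def aabbs_overlap(a, b):
        (amin, amax), (bmin, bmax) = a, b
        return np.all(amax >= bmin) and np.all(bmax >= amin)

    def collides(obj_a, obj_b, radius=0.05):
        # brute-force check: are any two points closer than 2*radius?
        d2 = ((obj_a[:, None, :] - obj_b[None, :, :]) ** 2).sum(-1)
        return bool((d2 < (2 * radius) ** 2).any())

    def test_poses(obj_a, obj_b, rotations, translations):
        """Return one boolean per candidate pose of obj_b relative to obj_a."""
        box_a = aabb(obj_a)
        results = []
        for R, t in zip(rotations, translations):
            moved = obj_b @ R.T + t                   # rigid transform of object B
            if not aabbs_overlap(box_a, aabb(moved)):
                results.append(False)                 # cheap reject, like a BVH root test
            else:
                results.append(collides(obj_a, moved))
        return results

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        A = rng.random((200, 3))                      # two synthetic "objects"
        B = rng.random((200, 3))
        Rs = [np.eye(3)] * 10                         # identity rotations for brevity
        ts = [np.array([0.1 * i, 0.0, 0.0]) for i in range(10)]   # increasing separation
        print(test_poses(A, B, Rs, ts))               # collision flags for the ten poses

Each pose test is independent of the others, which is exactly what makes it natural to distribute the poses across CPU cores or GPU threads, as done in the thesis.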

Relevance:

30.00%

Publisher:

Abstract:

This work is focused on the study of saltwater intrusion in coastal aquifers and, in particular, on the realization of conceptual schemes to evaluate the risk associated with it. Saltwater intrusion depends on different natural and anthropic factors, both of which exhibit strongly random (aleatory) behaviour and which should be considered for optimal management of the territory and of water resources. Given the uncertainty in the problem parameters, the risk associated with salinization needs to be cast in a probabilistic framework. On the basis of a widely adopted sharp-interface formulation, key hydrogeological parameters are modeled as random variables, and global sensitivity analysis is used to determine their influence on the position of the saltwater interface. The analyses presented in this work rely on an efficient model reduction technique, based on Polynomial Chaos Expansion (PCE), which provides an accurate description of the model response without a large computational burden. When the assumptions of classical analytical models are not satisfied, as often happens in real case studies and as in the area analyzed in the present work, data-driven techniques based on the analysis of the data characterizing the system under study can be adopted. A model can then be defined on the basis of the connections between the system state variables, with only a limited number of assumptions about the "physical" behaviour of the system.
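In this PCE/global-sensitivity setting, a standard way to write the surrogate and the derived first-order Sobol' indices (standard notation, not necessarily that used in the thesis) is:

    Y \approx \sum_{\alpha \in \mathcal{A}} c_\alpha \Psi_\alpha(\boldsymbol{\xi}), \qquad
    \operatorname{Var}[Y] \approx \sum_{\alpha \in \mathcal{A},\, \alpha \neq 0} c_\alpha^{2}, \qquad
    S_i \approx \frac{\sum_{\alpha \in \mathcal{A}_i} c_\alpha^{2}}{\operatorname{Var}[Y]}

where Y is the model output (for example the position of the saltwater interface), \boldsymbol{\xi} is the vector of random inputs, \Psi_\alpha is an orthonormal polynomial basis, and \mathcal{A}_i contains the multi-indices involving only the i-th input. The first-order Sobol' indices S_i thus come almost for free once the PCE coefficients c_\alpha have been computed, which is what makes the surrogate attractive for sensitivity analysis.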

Relevance:

30.00%

Publisher:

Abstract:

The aim of this in vitro study was to assess the influence of the examiners' clinical experience on the reproducibility and accuracy of radiographic examination for occlusal caries detection. Standardized bitewing radiographs were obtained from 166 permanent molars. Radiographic examination was performed by final-year dental students from two universities (A, n = 5; B, n = 5) and by dentists with 5 to 7 years of experience working in two different countries (C, n = 5; D, n = 5). All examinations were repeated after a 1-week interval. The teeth were histologically prepared and assessed for caries extension. For intraexaminer reproducibility, the unweighted kappa values were: A (0.11-0.40), B (0.12-0.33), C (0.47-0.58), and D (0.42-0.71). Interexaminer reproducibility was computed as the mean ± SD of unweighted kappa values: A (0.07 ± 0.05), B (0.12 ± 0.09), C (0.24 ± 0.08), and D (0.33 ± 0.10). Sensitivity, specificity, and accuracy were calculated at the D1 and D3 thresholds and compared using the McNemar test (p = 0.05). D1 sensitivity ranged between 0.29 and 0.75 and specificity between 0.24 and 0.85. D3 specificity was moderate to high (between 0.62 and 0.95) for all groups, with a statistically significant difference between the dentist groups (C and D); sensitivity was low to moderate (between 0.21 and 0.57), with a statistically significant difference for groups B and D. Accuracy was similar for all groups (0.55). Spearman's correlations were: A (0.12), B (0.24), C (0.30), and D (0.38). In conclusion, both the reproducibility of the radiographic examination and its accuracy in detecting occlusal caries were influenced by the examiners' clinical experience, training, and dental education.
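For reference (textbook definitions, not restated in the abstract), the diagnostic indices used here and in several of the following abstracts are computed from the counts of true and false positives and negatives:

    \mathrm{Se} = \frac{TP}{TP + FN}, \quad
    \mathrm{Sp} = \frac{TN}{TN + FP}, \quad
    \mathrm{Acc} = \frac{TP + TN}{TP + TN + FP + FN}, \quad
    \mathrm{PPV} = \frac{TP}{TP + FP}, \quad
    \mathrm{NPV} = \frac{TN}{TN + FN}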

Relevance:

30.00%

Publisher:

Abstract:

We propose a computationally efficient and biomechanically relevant soft-tissue simulation method for cranio-maxillofacial (CMF) surgery. A template-based facial muscle reconstruction was introduced to minimize the effort of preparing a patient-specific model. A transversely isotropic mass-tensor model (MTM) was adopted to capture the directional properties of facial muscles within reasonable computation time. Additionally, sliding contact around the teeth and mucosa was considered for a more realistic simulation. A retrospective validation study against the postoperative scan of a real patient showed considerable improvements in simulation accuracy when the template-based facial muscle anatomy and the sliding contact were incorporated.

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE: The purpose of our study was to retrospectively evaluate the specificity, sensitivity and accuracy of computed tomography (CT), digital radiography (DR) and low-dose linear slit digital radiography (LSDR, Lodox®) in the detection of internal cocaine containers. METHODS: Institutional review board approval was obtained. The study consisted of 83 patients (76 males, 7 females, 16-45 years) suspected of having incorporated cocaine drug containers. All underwent radiological imaging; a total of 135 examinations were performed (CT: 35, DR: 70, LSDR: 30). An overall evaluation of all "drug mules" and a specific evaluation of body packers, pushers and stuffers were performed. The gold standard was stool examination in a dedicated holding cell equipped with a drug toilet. RESULTS: 54 drug mules were identified in this study. Across all drug carriers, CT showed the highest diagnostic accuracy (97.1%), sensitivity (100%) and specificity (94.1%). DR in all cases was 71.4% accurate, 58.3% sensitive and 85.3% specific; LSDR of all patients was 60% accurate, 57.9% sensitive and 63.4% specific. CONCLUSIONS: CT was the most accurate test studied. The detection of internal cocaine drug packs should therefore be performed by CT rather than by conventional X-ray, in order to apply the most sensitive examination in the medico-legal investigation of suspected drug carriers. Nevertheless, the higher radiation dose applied by CT compared with DR or LSDR needs to be considered. Future studies should evaluate low-dose CT protocols in order to reduce this dose.

Relevance:

30.00%

Publisher:

Abstract:

Among various groups of fishes, shifts in peak wavelength sensitivity have been correlated with changes in photic environment. The genus Sebastes is a radiation of marine fish species that inhabit a wide range of depths, from the intertidal zone to over 600 m. We examined 32 species of Sebastes for evidence of adaptive amino acid substitution in the rhodopsin gene. Fourteen amino acid positions were variable among these species, and maximum likelihood analyses identify several of them as targets of positive selection. None of these correspond to previously identified critical amino acid sites, yet they may nonetheless be functionally important; the occurrence of independent parallel changes at certain amino acid positions reinforces this idea. Reconstruction of the habitat depths of ancestral nodes in the phylogeny suggests that shallow habitats have been colonized independently in different lineages. The evolution of rhodopsin appears to be associated with changes in depth, with accelerated evolution in lineages that have undergone large changes in depth.

Relevance:

30.00%

Publisher:

Abstract:

This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate the hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process alone, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test-cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the present part. Transient and steady-state data from a turbocharged multi-cylinder diesel engine have been examined from a model-training perspective, and a single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and on differences in the non-parametric space, primarily driven by the large difference between exhaust and intake manifold pressures (engine ΔP) during transients, it has been recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and of the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations are made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process the transient data to account for transport delays and sensor lags have been developed, and the processed data have been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described: the first is driven by high engine ΔP and low fresh-air flow rates, while the second is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, and uneven EGR distribution across cylinders has been shown to be present but is not accounted for by the ECM. The two modes and the associated phenomena are essential to understanding why transient emission models are calibration dependent and how to choose training data that will result in good model generalization.
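As an illustration of the kind of preprocessing described above (an assumed, simplified approach, not the thesis' actual procedure; signal names and sampling rate are made up), the transport delay between a fast engine signal and a slow analyzer signal can be estimated by cross-correlation and then removed before the signals are paired for model training:

    # Simplified sketch: estimate and remove a transport delay between two
    # uniformly sampled signals so they can be paired for empirical modelling.
    import numpy as np

    def estimate_delay(reference, measured, dt):
        """Delay (seconds) by which `measured` lags behind `reference`."""
        ref = reference - reference.mean()
        mea = measured - measured.mean()
        corr = np.correlate(mea, ref, mode="full")
        lag = int(np.argmax(corr)) - (len(ref) - 1)    # positive => measured lags
        return lag * dt

    def align(measured, delay_s, dt):
        """Shift `measured` earlier by the estimated delay (simple sample shift)."""
        return np.roll(measured, -int(round(delay_s / dt)))

    if __name__ == "__main__":
        dt = 0.1                                       # assumed 10 Hz sampling
        t = np.arange(0.0, 60.0, dt)
        command = (np.sin(0.3 * t) > 0).astype(float)           # synthetic step train
        rng = np.random.default_rng(1)
        analyzer = np.roll(command, int(round(1.5 / dt)))       # 1.5 s transport delay
        analyzer = analyzer + 0.05 * rng.standard_normal(len(t))
        d = estimate_delay(command, analyzer, dt)
        print(f"estimated delay: {d:.2f} s")                    # close to 1.5 s
        aligned = align(analyzer, d, dt)

Sensor lag (a first-order response rather than a pure delay) would additionally require deconvolution or a lag-compensation filter; the abstract states only that methods accounting for both transport delays and sensor lags were developed.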

Relevance:

30.00%

Publisher:

Abstract:

Objectives  To determine the diagnostic accuracy of the World Health Organization (WHO) 2010 and 2006 and the United States Department of Health and Human Services (DHHS) 2008 definitions of immunological failure for identifying virological failure (VF) in children on antiretroviral therapy (ART). Methods  Analysis of data from children (<16 years at ART initiation) at South African ART sites at which CD4 count/percentage and HIV-RNA monitoring are performed 6-monthly. Incomplete virological suppression (IVS) was defined as failure to achieve at least one HIV-RNA measurement ≤400 copies/ml between 6 and 15 months on ART, and viral rebound (VR) as a confirmed HIV-RNA ≥5000 copies/ml in a child on ART for ≥18 months who had achieved suppression during the first year on treatment. Results  Among 3115 children [median (interquartile range) age 48 (20-84) months at ART initiation] on treatment for ≥1 year, the sensitivity of the immunological criteria for IVS was 10%, 6% and 26% for the WHO 2006, WHO 2010 and DHHS 2008 criteria, respectively; the corresponding positive predictive values (PPV) were 31%, 20% and 20%. Diagnostic accuracy for VR was determined in 2513 children with ≥18 months of follow-up and virological suppression during the first year on ART, with a sensitivity of 5% (WHO 2006/2010) and 27% (DHHS 2008); PPV results were 42% (WHO 2010), 43% (WHO 2006) and 20% (DHHS 2008). Conclusion  Current immunological criteria are unable to correctly identify children failing ART virologically. Improved access to viral load testing is needed to reliably identify VF in children.

Relevance:

30.00%

Publisher:

Abstract:

This paper examines the accuracy of software-based on-line energy estimation techniques. It evaluates today's most widespread energy estimation model in order to investigate whether the current methodology of purely software-based energy estimation, running on a sensor node itself, can indeed reliably and accurately determine the node's energy consumption, independent of the particular node instance, the traffic load the node is exposed to, and the MAC protocol the node is running. The paper enhances today's widely used energy estimation model by integrating radio transceiver state switches into the model, and proposes a methodology for finding the optimal estimation model parameters. It shows, by statistical validation with experimental data, that the proposed model enhancement and parameter calibration methodology significantly increase the estimation accuracy.
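A common way to write such a state-based model with explicit switching costs (this is the generic form; the paper's exact formulation and parameter values may differ) is:

    E \approx \sum_{s \in \mathcal{S}} P_s\, t_s \;+\; \sum_{(i \to j)} n_{i \to j}\, E_{i \to j}

where \mathcal{S} is the set of node states (CPU active/idle, radio RX/TX/sleep, ...), t_s the accumulated time in state s, P_s the corresponding power draw, and n_{i→j} and E_{i→j} the number and energy cost of transitions between radio states. The calibration task is then to choose the P_s and E_{i→j} that best match the measured consumption.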

Relevance:

30.00%

Publisher:

Abstract:

Fluorescence microlymphography (FML) is used to visualize the lymphatic capillaries. A maximum spread of the fluorescent dye of ≥12 mm has been suggested as a criterion for the diagnosis of lymphedema; however, data on sensitivity and specificity are lacking. The aim of this study was to investigate the accuracy of FML for diagnosing lymphedema in patients with leg swelling. Patients with lower-extremity swelling were clinically assessed and separated into lymphedema and non-lymphatic edema groups. FML was performed on all affected legs and the maximum spread of the dye in the lymphatic capillaries was measured. Test accuracy and receiver operating characteristic (ROC) analyses were performed to assess possible threshold values predicting lymphedema. Between March 2008 and August 2011, a total of 171 patients (184 legs) with a median age of 43.5 (IQR 24, 54) years were assessed; 94 legs (51.1%) were diagnosed with lymphedema. The sensitivity, specificity, positive and negative likelihood ratios, and positive and negative predictive values were 87%, 64%, 2.45, 0.20, 72% and 83% for the 12-mm cut-off level, and 79%, 83%, 4.72, 0.26, 83% and 79% for the 14-mm cut-off level, respectively. The area under the ROC curve was 0.82 (95% CI: 0.76, 0.88). Sensitivity was higher in secondary than in primary lymphedema (95.0% vs 74.3%, p = 0.045). No major adverse events were observed. In conclusion, FML is a simple and safe technique for detecting lymphedema in patients with leg swelling. A cut-off level of ≥14 mm maximum spread has high sensitivity and high specificity for detecting lymphedema and should be chosen.

Relevance:

30.00%

Publisher:

Abstract:

The accurate co-alignment of the transmitter and receiver of the BepiColombo Laser Altimeter is a challenging task for which an original alignment concept had to be developed. We present here the design, construction and testing of a large collimator facility built to fulfill the tight alignment requirements. We describe in detail the solution found to attenuate the high energy of the instrument's laser transmitter by means of an original beam-splitting pentaprism group. We list the different steps of the calibration of the alignment facility and estimate the errors made at each of these steps. Finally, we demonstrate that the current facility is ready for the alignment of the flight instrument; its angular accuracy is 23 μrad.

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVES: To determine sample sizes in studies on diagnostic accuracy and the proportion of studies that report sample size calculations. DESIGN: Literature survey. DATA SOURCES: All issues of eight leading journals published in 2002. METHODS: Sample sizes, the number of subgroup analyses, and how often studies reported sample size calculations were extracted. RESULTS: 43 of 8999 articles were non-screening studies on diagnostic accuracy. The median sample size was 118 (interquartile range 71-350) and the median prevalence of the target condition was 43% (27-61%). The median number of patients with the target condition, needed to calculate a test's sensitivity, was 49 (28-91); the median number of patients without the target condition, needed to determine a test's specificity, was 76 (27-209). Two of the 43 studies (5%) reported a priori sample size calculations. Twenty articles (47%) reported results for patient subgroups, with the number of subgroups ranging from two to 19 (median four). No studies reported that the sample size was calculated on the basis of preplanned subgroup analyses. CONCLUSION: Few studies on diagnostic accuracy report considerations of sample size. The number of participants in most studies on diagnostic accuracy is probably too small to analyse variability of measures of accuracy across patient subgroups.
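For context (a standard calculation, not quoted from the surveyed articles), the number of diseased subjects needed to estimate sensitivity with absolute precision d at confidence level 1 − α, and the total sample size given an expected prevalence p, are:

    n_{\mathrm{diseased}} = \frac{z_{1-\alpha/2}^{2}\, \mathrm{Se}\,(1-\mathrm{Se})}{d^{2}}, \qquad
    N = \frac{n_{\mathrm{diseased}}}{p}

with the analogous expression (replacing Se by Sp and p by 1 − p) for specificity. For example, estimating an expected sensitivity of 0.8 to within ±0.1 at 95% confidence already requires about 62 diseased patients, more than the median number found in the surveyed studies.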

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: To determine the accuracy of magnetic resonance imaging criteria for the early diagnosis of multiple sclerosis in patients with suspected disease. DESIGN: Systematic review. DATA SOURCES: 12 electronic databases, citation searches, and reference lists of included studies. REVIEW METHODS: Studies on diagnostic accuracy that compared magnetic resonance imaging, or diagnostic criteria incorporating such imaging, with a reference standard for the diagnosis of multiple sclerosis. RESULTS: 29 studies (18 cohort studies, 11 other designs) were included. On average, studies of other designs (mainly diagnostic case-control studies) produced higher estimated diagnostic odds ratios than cohort studies. Among the 15 studies of higher methodological quality (cohort design, clinical follow-up as reference standard), those with longer follow-up produced higher estimates of specificity and lower estimates of sensitivity; only two such studies followed patients for more than 10 years. Even in the presence of many lesions (>10 or >8), magnetic resonance imaging could not accurately rule in multiple sclerosis (likelihood ratio of a positive test result 3.0 and 2.0, respectively). Similarly, the absence of lesions was of limited utility in ruling out a diagnosis of multiple sclerosis (likelihood ratio of a negative test result 0.1 and 0.5). CONCLUSIONS: Many evaluations of the accuracy of magnetic resonance imaging for the early detection of multiple sclerosis have produced inflated estimates of test performance owing to methodological weaknesses. Use of magnetic resonance imaging to confirm multiple sclerosis on the basis of a single attack of neurological dysfunction may lead to over-diagnosis and over-treatment.
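For reference (standard definitions, not restated in the review), the likelihood ratios quoted above are derived from sensitivity and specificity as:

    LR^{+} = \frac{\mathrm{Se}}{1-\mathrm{Sp}}, \qquad LR^{-} = \frac{1-\mathrm{Se}}{\mathrm{Sp}}

and, as a common rule of thumb, LR^{+} values of about 10 or more (or LR^{-} values of about 0.1 or less) are needed before a single test result substantially changes the probability of disease; the positive likelihood ratios of 2.0-3.0 reported here fall well short of that benchmark.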