948 results for parallel robots, cable driven, underactuated, calibration, sensitivity, accuracy


Relevance: 30.00%

Abstract:

The goal of this study is to provide a framework for future researchers to understand and use the FARSITE wildfire-forecasting model with data assimilation. Current wildfire models lack the ability to predict fire-front position accurately faster than real time. Coupling FARSITE with a recursive ensemble filter improves the data-assimilation forecast. The scope includes an explanation of the standalone FARSITE application, technical details on FARSITE integration with a parallel program coupler called OpenPALM, and a demonstration of the FARSITE-Ensemble Kalman Filter software on the FireFlux I experiment by Craig Clements. The results show that the proposed data-driven methodology yields a better fire-front forecast than the standalone FARSITE model.
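
As an illustration of the data-assimilation step described above, here is a minimal sketch of a stochastic ensemble Kalman filter analysis update, assuming a generic state ensemble and a linear observation operator; the variable names and shapes are illustrative and not taken from the FARSITE-OpenPALM coupling:

```python
import numpy as np

def enkf_update(ensemble, observations, H, R, rng):
    """One stochastic EnKF analysis step.

    ensemble:     (n_state, n_members) forecast ensemble (e.g., fire-front states)
    observations: (n_obs,) observed quantities (e.g., measured front positions)
    H:            (n_obs, n_state) linear observation operator
    R:            (n_obs, n_obs) observation-error covariance
    """
    n_members = ensemble.shape[1]
    anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
    PHt = anomalies @ (H @ anomalies).T / (n_members - 1)  # P H^T from the ensemble
    S = H @ PHt + R                                        # innovation covariance
    K = PHt @ np.linalg.inv(S)                             # Kalman gain
    # Perturb the observations so the analysis ensemble keeps the correct spread.
    obs_pert = rng.multivariate_normal(observations, R, size=n_members).T
    return ensemble + K @ (obs_pert - H @ ensemble)
```

In the recursive setting described in the abstract, an update of this kind is applied each time new fire-front observations arrive, with the forecast step provided by the fire-spread model.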

Relevance: 30.00%

Abstract:

Vocal differentiation is widely documented in birds and mammals but has been poorly investigated in other vertebrates, including fish, which represent the oldest extant vertebrate group. Neural circuitry controlling vocal behaviour is thought to have evolved from conserved brain areas that originated in fish, making this taxon key to understanding the evolution and development of the vertebrate vocal-auditory systems. This study examines ontogenetic changes in the vocal repertoire and whether vocal differentiation parallels auditory development in the Lusitanian toadfish Halobatrachus didactylus (Batrachoididae). This species exhibits a complex acoustic repertoire and is vocally active during early development. Vocalisations were recorded during social interactions for four size groups (fry: <2 cm; small juveniles: 2-4 cm; large juveniles: 5-7 cm; adults: >25 cm, standard length). Auditory sensitivity of juveniles and adults was determined from evoked potentials recorded from the inner ear saccule in response to pure tones of 75-945 Hz. We show an ontogenetic expansion of the vocal repertoire from simple broadband-pulsed 'grunts' that later differentiate into four distinct vocalisations, including low-frequency amplitude-modulated 'boatwhistles'. Whereas fry emitted mostly single grunts, large juveniles exhibited vocalisations similar to the adult vocal repertoire. Saccular sensitivity revealed a three-fold enhancement at most frequencies tested from small to large juveniles; however, large juveniles were similar in sensitivity to adults. We provide the first clear evidence of ontogenetic vocal differentiation in fish, as previously described for higher vertebrates. Our results suggest a parallel development between the vocal motor pathway and the peripheral auditory system for acoustic social communication in fish.

Relevance: 30.00%

Abstract:

In Robot-Assisted Rehabilitation (RAR) the accurate estimation of the patient's limb joint angles is critical for assessing therapy efficacy. In RAR, the use of classic motion capture systems (MOCAPs) (e.g., optical and electromagnetic) to estimate the Glenohumeral (GH) joint angles is hindered by the exoskeleton body, which causes occlusions and magnetic disturbances. Moreover, the exoskeleton posture does not accurately reflect limb posture, as their kinematic models differ. To address these limitations in posture estimation, we propose installing the cameras of an optical marker-based MOCAP on the rehabilitation exoskeleton. The GH joint angles are then estimated by combining the estimated marker poses with the exoskeleton Forward Kinematics. Such a hybrid system prevents problems related to marker occlusions, reduced camera detection volume, and imprecise joint angle estimation due to the kinematic mismatch between the patient and exoskeleton models. This paper presents the formulation, simulation, and accuracy quantification of the proposed method with simulated human movements. In addition, a sensitivity analysis of the method's accuracy to marker position estimation errors, due to system calibration errors and marker drifts, has been carried out. The results show that, even with significant errors in the marker position estimation, the method's accuracy is adequate for RAR.
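
The core of the estimation is a chain of rigid-body transforms: the exoskeleton Forward Kinematics locates the on-board camera in the base frame, and the MOCAP measurement locates the marker in the camera frame. A minimal sketch with homogeneous transforms follows; the frame names are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def homogeneous(R, t):
    """Assemble a 4x4 homogeneous transform from a rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def marker_pose_in_base(T_base_link, T_link_cam, T_cam_marker):
    """Chain exoskeleton FK with the camera mounting and the optical measurement.

    T_base_link:  base -> exoskeleton link carrying the camera (from FK)
    T_link_cam:   link -> camera (fixed mounting, from calibration)
    T_cam_marker: camera -> marker (measured by the optical MOCAP)
    """
    return T_base_link @ T_link_cam @ T_cam_marker
```

GH joint angles can then be extracted from the rotation blocks of the marker-to-base transforms for markers attached to the limb, so posture is measured directly rather than inferred from the (mismatched) exoskeleton model.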

Relevance: 30.00%

Abstract:

Biogeochemical-Argo is the extension of the Argo array of profiling floats to include floats that are equipped with biogeochemical sensors for pH, oxygen, nitrate, chlorophyll, suspended particles, and downwelling irradiance. Argo is a highly regarded, international program that measures the changing ocean temperature (heat content) and salinity with profiling floats distributed throughout the ocean. Newly developed sensors now allow profiling floats to also observe biogeochemical properties with sufficient accuracy for climate studies. This extension of Argo will enable an observing system that can determine the seasonal to decadal-scale variability in biological productivity, the supply of essential plant nutrients from deep waters to the sunlit surface layer, ocean acidification, hypoxia, and ocean uptake of CO2. Biogeochemical-Argo will drive a transformative shift in our ability to observe and predict the effects of climate change on ocean metabolism, carbon uptake, and living marine resource management. Presently, vast areas of the open ocean are sampled only once per decade or less, with sampling occurring mainly in summer. Our ability to detect changes in biogeochemical processes that may occur due to the warming and acidification driven by increasing atmospheric CO2, as well as by natural climate variability, is greatly hindered by this undersampling. In close synergy with satellite systems (which are effective at detecting global patterns for a few biogeochemical parameters, but only very close to the sea surface and in the absence of clouds), a global array of biogeochemical sensors would revolutionize our understanding of ocean carbon uptake, productivity, and deoxygenation. The array would reveal the biological, chemical, and physical events that control these processes. Such a system would enable a new generation of global ocean prediction systems in support of carbon cycling, acidification, hypoxia, and harmful algal bloom studies, as well as the management of living marine resources. In order to prepare for a global Biogeochemical-Argo array, several prototype profiling float arrays have been developed at the regional scale by various countries and are now operating. Examples include regional arrays in the Southern Ocean (SOCCOM), the North Atlantic Sub-polar Gyre (remOcean), the Mediterranean Sea (NAOS), the Kuroshio region of the North Pacific (INBOX), and the Indian Ocean (IOBioArgo). For example, the SOCCOM program is deploying 200 profiling floats with biogeochemical sensors throughout the Southern Ocean, including areas covered seasonally with ice. The resulting data, which are publicly available in real time, are being linked with computer models to better understand the role of the Southern Ocean in influencing CO2 uptake, biological productivity, and nutrient supply to distant regions of the world ocean. The success of these regional projects has motivated a planning meeting to discuss the requirements for and applications of a global-scale Biogeochemical-Argo program. The meeting was held 11-13 January 2016 in Villefranche-sur-Mer, France, with attendees from the eight nations now deploying Argo floats with biogeochemical sensors. In preparation, computer simulations and a variety of analyses were conducted to assess the resources required for the transition to a global-scale array.
Based on these analyses and simulations, it was concluded that an array of about 1000 biogeochemical profiling floats would provide the needed resolution to greatly improve our understanding of biogeochemical processes and to enable significant improvement in ecosystem models. With an endurance of four years for a Biogeochemical-Argo float, this system would require the procurement and deployment of 250 new floats per year to maintain a 1000-float array. The lifetime cost for a Biogeochemical-Argo float, including capital expense, calibration, data management, and data transmission, is about $100,000. A global Biogeochemical-Argo system would thus cost about $25,000,000 annually. In the present Argo paradigm, the US provides half of the profiling floats in the array, while the EU, Austral/Asia, and Canada share most of the remaining half. If this approach is adopted, the US cost for the Biogeochemical-Argo system would be ~$12,500,000 annually, with ~$6,250,000 each for the EU and for Austral/Asia and Canada. This includes no direct costs for ship time and presumes that float deployments can be carried out from future research cruises of opportunity, including, for example, the international GO-SHIP program (http://www.go-ship.org). The full-scale implementation of a global Biogeochemical-Argo system with 1000 floats is feasible within a decade. The successful, ongoing pilot projects have provided the foundation and start for such a system.
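
The fleet-maintenance arithmetic above follows directly from the quoted figures; a few lines make the bookkeeping explicit (all numbers are from the abstract):

```python
# Fleet-maintenance arithmetic for the proposed global array.
array_size = 1000           # target number of biogeochemical floats
endurance_years = 4         # expected lifetime of one float
cost_per_float = 100_000    # lifetime cost in USD (capital, calibration, data)

floats_per_year = array_size / endurance_years   # 250 replacement floats per year
annual_cost = floats_per_year * cost_per_float   # $25,000,000 per year
us_share = annual_cost / 2                       # ~$12,500,000 (half the array)
partner_share = annual_cost / 4                  # ~$6,250,000 each for the EU and
                                                 # for Austral/Asia with Canada
print(floats_per_year, annual_cost, us_share, partner_share)
```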

Relevance: 30.00%

Abstract:

Despite the wide swath of applications where multiphase fluid contact lines exist, there is still no consensus on an accurate and general simulation methodology. Most prior numerical work has imposed one of the many dynamic contact-angle theories at solid walls. Such approaches are inherently limited by the accuracy of those theories. In fact, when inertial effects are important, the contact angle may be history dependent and, thus, any single mathematical function is inappropriate. Given these limitations, the present work has two primary goals: 1) create a numerical framework that allows the contact angle to evolve naturally with appropriate contact-line physics, and 2) develop equations and numerical methods such that contact-line simulations may be performed on coarse computational meshes.

Fluid flows affected by contact lines are dominated by capillary stresses and require accurate curvature calculations. The level set method was chosen to track the fluid interfaces because it makes accurate calculation of interface curvature straightforward. Unfortunately, level set reinitialization suffers from an ill-posed mathematical problem at contact lines: a "blind spot" exists. Standard techniques to handle this deficiency are shown to introduce parasitic velocity currents that artificially deform freely floating (non-prescribed) contact angles. As an alternative, a new relaxation-equation reinitialization is proposed to remove these spurious velocity currents, and the concept is further explored with level-set extension velocities.
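
For context, here is a minimal sketch of the standard PDE-based reinitialization the passage refers to (the generic scheme, not the relaxation equation proposed in this work); it drives the level set field toward a signed-distance function while ideally preserving its zero contour:

```python
import numpy as np

def reinitialize_1d(phi, dx, n_steps=50):
    """Generic pseudo-time reinitialization of a 1-D level set field:
        d(phi)/d(tau) = sign(phi_0) * (1 - |grad(phi)|)
    Central differences are used purely for brevity; production level set
    codes use upwind (e.g., Godunov) discretizations.
    """
    dtau = 0.5 * dx                            # CFL-limited pseudo-time step
    eps = dx
    sign0 = phi / np.sqrt(phi**2 + eps**2)     # smoothed sign of the initial field
    for _ in range(n_steps):
        grad = np.gradient(phi, dx)
        phi = phi + dtau * sign0 * (1.0 - np.abs(grad))
    return phi
```

Loosely speaking, the "blind spot" arises because near a contact line part of the information needed to rebuild the distance function would have to come from outside the fluid domain.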

To capture contact-line physics, two classical boundary conditions, the Navier-slip velocity boundary condition and a fixed contact angle, are implemented in direct numerical simulations (DNS). DNS are found to converge only if the slip length is well resolved by the computational mesh. Unfortunately, since the slip length is often very small compared to the fluid structures, these simulations are not computationally feasible for large systems. To address the second goal, a new methodology is proposed which relies on the volumetric-filtered Navier-Stokes equations. Two unclosed terms, an average curvature and a viscous shear term, are proposed to represent the missing microscale physics on a coarse mesh.

All of these components are then combined into a single framework and tested for a water droplet impacting a partially wetting substrate. Very good agreement is found between the experimental measurements and the numerical simulation for the evolution of the contact diameter in time. Such a comparison would not be possible with prior methods, since the Reynolds number Re and capillary number Ca are large. Furthermore, the experimentally approximated slip length ratio is well outside the range currently achievable by DNS. This framework is a promising first step towards simulating complex physics in capillary-dominated flows at a reasonable computational expense.

Relevance: 30.00%

Abstract:

Inter-subject parcellation of functional Magnetic Resonance Imaging (fMRI) data based on a standard General Linear Model (GLM) and spectral clustering was recently proposed as a means to alleviate the issues associated with spatial normalization in fMRI. However, for all its appeal, a GLM-based parcellation approach introduces its own biases, in the form of a priori knowledge about the shape of the Hemodynamic Response Function (HRF) and task-related signal changes, or about the subject's behaviour during the task. In this paper, we introduce a data-driven version of the spectral clustering parcellation, based on Independent Component Analysis (ICA) and Partial Least Squares (PLS) instead of the GLM. First, a number of independent components are automatically selected. Seed voxels are then obtained from the associated ICA maps, and we compute the PLS latent variables between the fMRI signal of the seed voxels (which covers regional variations of the HRF) and the principal components of the signal across all voxels. Finally, we parcellate all subjects' data with a spectral clustering of the PLS latent variables. We present results of the application of the proposed method on both single-subject and multi-subject fMRI datasets. Preliminary experimental results, evaluated with the intra-parcel variance of GLM t-values and PLS-derived t-values, indicate that this data-driven approach offers an improvement in parcellation accuracy over GLM-based techniques.
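
A schematic reading of the pipeline in scikit-learn terms might look as follows. This is a loose sketch under stated assumptions: the component count, seed selection by peak ICA weight, latent-space features, and clustering parameters are all illustrative choices, not the paper's.

```python
import numpy as np
from sklearn.decomposition import FastICA, PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.cluster import SpectralClustering

def parcellate(bold, n_components=20, n_parcels=50, n_latent=10):
    """bold: (n_timepoints, n_voxels) fMRI signal for one subject."""
    # 1) Spatial ICA over voxels; pick one peak (seed) voxel per component map.
    ica = FastICA(n_components=n_components, random_state=0)
    maps = ica.fit_transform(bold.T)           # (n_voxels, n_components)
    seeds = np.abs(maps).argmax(axis=0)        # seed voxel index per component

    # 2) PLS between seed time courses and principal components of all voxels.
    pcs = PCA(n_components=n_latent).fit_transform(bold)  # (n_time, n_latent)
    pls = PLSRegression(n_components=n_latent)
    pls.fit(bold[:, seeds], pcs)
    # Project each voxel's time course onto the PLS latent time courses.
    latent = bold.T @ pls.x_scores_            # (n_voxels, n_latent) features

    # 3) Spectral clustering of voxels in the latent space -> parcel labels.
    return SpectralClustering(n_clusters=n_parcels,
                              affinity="nearest_neighbors",
                              random_state=0).fit_predict(latent)
```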

Relevance: 30.00%

Abstract:

Background/aims: Few studies have validated the performance of guidelines for the prediction of choledocholithiasis (CL). Our objective was to prospectively assess the accuracy of the American Society for Gastrointestinal Endoscopy (ASGE) guidelines for the identification of CL. Methods: A two-year prospective evaluation of patients with suspected CL was performed. We evaluated the ASGE guidelines and their component variables in predicting CL. Results: A total of 256 patients with suspected CL were analyzed. Of the 208 patients with high-probability criteria for CL, 124 (59.6%) were found to have a stone/sludge at endoscopic retrograde cholangiopancreatography (ERCP). Among the 48 patients with intermediate-probability criteria, 21 (43.8%) had a stone/sludge. The performance of the ASGE high- and intermediate-probability criteria in our population had an accuracy of 59.0% (85.5% sensitivity, 24.3% specificity) and 41.0% (14.4% sensitivity, 75.6% specificity), respectively. The mean ERCP delay time was 6.1 days in the CL group and 6.4 days in the group without CL (p = 0.638). The presence of a common bile duct (CBD) diameter > 6 mm (OR 2.21; 95% CI, 1.20-4.10), ascending cholangitis (OR 2.37; 95% CI, 1.01-5.55), and a CBD stone visualized on transabdominal US (OR 3.33; 95% CI, 1.48-7.52) were the strongest predictors of CL. The occurrence of biliary pancreatitis was a strong protective factor for the presence of a retained CBD stone (OR 0.30; 95% CI, 0.17-0.55). Conclusions: Irrespective of a patient's ASGE probability for CL, the application of the current guidelines in our population led to the unnecessary performance of ERCP in nearly half of cases.
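
The reported operating characteristics follow from the counts in the abstract; treating "high probability" as the positive result, a short check reproduces them:

```python
# Reproduce the reported performance of the ASGE high-probability criteria
# from the counts given in the abstract.
high_n, high_stone = 208, 124      # high-probability patients / with stone
inter_n, inter_stone = 48, 21      # intermediate-probability patients / with stone

total = high_n + inter_n           # 256 patients
stone = high_stone + inter_stone   # 145 with stone/sludge at ERCP
no_stone = total - stone           # 111 without

sensitivity = high_stone / stone                           # 124/145 = 85.5%
specificity = (inter_n - inter_stone) / no_stone           # 27/111  = 24.3%
accuracy = (high_stone + inter_n - inter_stone) / total    # 151/256 = 59.0%
print(f"{sensitivity:.1%} {specificity:.1%} {accuracy:.1%}")
```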

Relevance: 30.00%

Abstract:

A rapid and efficient Dispersive Liquid–Liquid Microextraction (DLLME) followed by Laser-Induced Breakdown Spectroscopy (LIBS) detection was evaluated for the simultaneous determination of Cr, Cu, Mn, Ni and Zn in water samples. Metals in the samples were extracted with tetrachloromethane as ammonium pyrrolidinedithiocarbamate (APDC) complexes, using vortex agitation to achieve dispersion of the extractant solvent. Several DLLME experimental factors affecting extraction efficiency were optimized with a multivariate approach. Under optimum DLLME conditions, the DLLME-LIBS method was found to be about 4.0–5.5 times more sensitive than LIBS alone, achieving limits of detection about 3.7–5.6 times lower. To assess the accuracy of the proposed DLLME-LIBS procedure, a certified reference material of estuarine water was analyzed.

Relevance: 30.00%

Abstract:

Background: The use of the sagittal abdominal diameter (SAD) has been proposed for screening cardio-metabolic risk factors; however, its accuracy can be influenced by the choice of threshold values. Aim: To determine the SAD threshold values for cardio-metabolic risk factors in Mexican adults; to assess whether parallel and serial SAD testing can improve waist circumference (WC) sensitivity and specificity; and to analyze the effect of considering SAD along with WC and body mass index (BMI) in detecting cardio-metabolic risk. Methods: This cross-sectional study was conducted during 2012-2014 in Northeast Mexico (n = 269). Data on anthropometric, clinical, and biochemical measurements were collected. Sex-adjusted receiver-operating characteristic (ROC) curves were obtained using hypertension, dysglycemia, dyslipidemia and insulin resistance as individual outcomes and metabolic syndrome as a composite outcome. Age-adjusted odds ratios and 95% confidence intervals (CI) were estimated using logistic regression. Results: The threshold value for SAD with an acceptable combination of sensitivity and specificity was 24.6 cm in men and 22.5 cm in women. Parallel SAD testing improved WC sensitivity, and serial testing improved WC specificity. The co-occurrence of high WC/high SAD increased the risk for insulin resistance 2.4-fold (95% CI: 1.1-5.3), high BMI/high SAD 4.3-fold (95% CI: 1.7-11.9), and SAD alone 2.2-fold (95% CI: 1.2-4.2). Conclusions: The use of SAD together with traditional obesity indices such as WC and BMI has advantages over using either of these indices alone. SAD may be a powerful screening tool in interventions for high-risk individuals.
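
The direction of the parallel/serial effects reported above is what the standard test-combination identities predict. Assuming conditional independence between the two tests (an idealization that real WC and SAD measurements only approximate), the combined operating characteristics are:

```python
def parallel_rule(se_a, sp_a, se_b, sp_b):
    """Positive if EITHER test is positive: sensitivity rises, specificity falls."""
    return 1 - (1 - se_a) * (1 - se_b), sp_a * sp_b

def serial_rule(se_a, sp_a, se_b, sp_b):
    """Positive only if BOTH tests are positive: specificity rises, sensitivity falls."""
    return se_a * se_b, 1 - (1 - sp_a) * (1 - sp_b)

# Illustrative (made-up) values, not the study's estimates:
print(parallel_rule(0.70, 0.80, 0.65, 0.85))  # higher Se, lower Sp than one test
print(serial_rule(0.70, 0.80, 0.65, 0.85))    # lower Se, higher Sp than one test
```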

Relevance: 30.00%

Abstract:

The accuracy of a map is dependent on the reference dataset used in its construction. Classification analyses used in thematic mapping can, for example, be sensitive to a range of sampling and data quality concerns. With particular focus on the latter, the effects of reference data quality on land cover classifications from airborne thematic mapper data are explored. Variations in sampling intensity and effort are highlighted in a dataset that is widely used in mapping and modelling studies; these may need accounting for in analyses. The quality of the labelling in the reference dataset was also a key variable influencing mapping accuracy. Accuracy varied with the amount and nature of mislabelled training cases, with the nature of the effects varying between classifiers. The largest impacts on accuracy occurred when mislabelling involved confusion between similar classes. Accuracy was also typically negatively related to the magnitude of mislabelled cases, and the support vector machine (SVM), which has been claimed to be relatively insensitive to training data error, was the most sensitive of the set of classifiers investigated, with overall classification accuracy declining by 8% (significant at the 95% level of confidence) with the use of a training set containing 20% mislabelled cases.
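
An experiment in the spirit of the study can be sketched in a few lines: inject label noise into the training set and observe the accuracy decline. This is a generic illustration on synthetic data, not the paper's dataset or protocol:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic 4-class problem standing in for a land cover classification task.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

rng = np.random.default_rng(0)
for noise in (0.0, 0.1, 0.2):
    y_noisy = y_tr.copy()
    flip = rng.random(len(y_noisy)) < noise          # cases to mislabel
    y_noisy[flip] = rng.integers(0, 4, flip.sum())   # assign random labels
    acc = SVC(kernel="rbf").fit(X_tr, y_noisy).score(X_te, y_te)
    print(f"mislabelled fraction {noise:.0%}: test accuracy {acc:.3f}")
```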

Relevance: 30.00%

Abstract:

Leishmaniasis, caused by Leishmania infantum, is a vector-borne zoonotic disease that is endemic to the Mediterranean basin. The potential of rabbits and hares to serve as competent reservoirs for the disease has recently been demonstrated, although assessment of the importance of their role in disease dynamics is hampered by the absence of quantitative knowledge on the accuracy of diagnostic techniques in these species. A Bayesian latent-class model was used here to estimate the sensitivity and specificity of the immunofluorescence antibody test (IFAT) in serum and a Leishmania-nested PCR (Ln-PCR) in skin for samples collected from 217 rabbits and 70 hares from two different populations in the region of Madrid, Spain. A two-population model, assuming conditional independence between test results and incorporating prior information on the performance of the tests in other animal species obtained from the literature, was used. Two alternative cut-off values were assumed for the interpretation of the IFAT results: 1/50 for the conservative and 1/25 for the sensitive interpretation. Results suggest that the sensitivity and specificity of the IFAT were around 70–80%, whereas the Ln-PCR was highly specific (96%) but had a limited sensitivity (28.9% applying the conservative interpretation and 21.3% with the sensitive one). Prevalence was higher in the rabbit population (50.5% and 72.6% for the conservative and sensitive interpretations, respectively) than in hares (6.7% and 13.2%). Our results demonstrate that the IFAT may be a useful screening tool for the diagnosis of leishmaniasis in rabbits and hares. These results will help to design and implement surveillance programmes in wild species, with the ultimate objective of early detection and prevention of incursions of the disease into domestic and human populations.
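
At the heart of such latent-class estimation is the identity linking true prevalence and test performance to what is actually observed. A small sketch of that identity and its inversion follows (illustrative numbers, not the paper's data or its full two-population Bayesian model):

```python
def apparent_prevalence(p, se, sp):
    """Probability of a positive test result given true prevalence p."""
    return se * p + (1 - sp) * (1 - p)

def rogan_gladen(p_apparent, se, sp):
    """Invert the identity above to estimate true prevalence from positives."""
    return (p_apparent + sp - 1) / (se + sp - 1)

# Illustrative values in the range reported for the IFAT (~70-80% Se/Sp):
print(apparent_prevalence(0.5, se=0.75, sp=0.75))   # 0.50 observed positive rate
print(rogan_gladen(0.44, se=0.75, sp=0.75))         # back out true prevalence
```

The latent-class model effectively estimates Se, Sp, and the two population prevalences jointly from the cross-classified test results, with priors supplying the information that a single imperfect reference cannot.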

Relevance: 30.00%

Abstract:

In this work we present an important improvement in our model of a biped mechanism that allows the system's feet to be lifted in a stable manner during the execution of trajectories. This improvement allows for simpler trajectory planning and also reduces the losses in the collision between the feet and the ground. In addition, we add to the design phase the study of the displacement of the Zero Moment Point, as well as the variation of the normal component of the ground reaction force during the motion of the system. Considering these magnitudes in the design phase allows us to design the necessary support area of the system, and they are used as a smoothness criterion for the ground contact to facilitate the selection of robot parameters and trajectories.
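
For reference, the Zero Moment Point mentioned above can be computed from the link masses and accelerations; a minimal point-mass sketch (which neglects the links' rotational inertia) is:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def zmp_x(masses, x, z, ddx, ddz):
    """Zero Moment Point (x coordinate) for a set of point masses.

    masses, positions (x, z) and accelerations (ddx, ddz) are per-link arrays;
    the planar, point-mass approximation ignores link rotational inertia.
    """
    masses, x, z, ddx, ddz = map(np.asarray, (masses, x, z, ddx, ddz))
    num = np.sum(masses * (ddz + G) * x) - np.sum(masses * ddx * z)
    den = np.sum(masses * (ddz + G))   # on level ground, the normal reaction force
    return num / den
```

The denominator is the normal component of the ground reaction force, which is why the abstract treats the two magnitudes together: keeping the ZMP inside the support area and keeping the normal force smooth are complementary contact-quality criteria.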

Relevance: 30.00%

Abstract:

This thesis is focused on improving the calibration accuracy of sub-millimeter astronomical observations. The wavelength range covered by observational radio astronomy has been extended to the sub-millimeter and far infrared with the advancement of receiver technology in recent years. Sub-millimeter observations carried out with airborne and ground-based telescopes typically suffer from 10% to 90% attenuation of the astronomical source signals by the terrestrial atmosphere. The amount of attenuation can be derived from the measured brightness of the atmospheric emission. In order to do this, knowledge of the atmospheric temperature and chemical composition, as well as the frequency-dependent optical depth at each point along the line of sight, is required. The altitude-dependent air temperature and composition are estimated using a parametrized static atmospheric model, which is described in Chapter 2, because direct measurements are technically and financially infeasible. The frequency-dependent optical depth of the atmosphere is computed with a radiative transfer model based on the theories of quantum mechanics and, in addition, some empirical formulae. The choice, application, and improvement of third-party radiative transfer models are discussed in Chapter 3. The application of the calibration procedure, which is described in Chapter 4, to the astronomical data observed with the SubMillimeter Array Receiver for Two Frequencies (SMART) and the German REceiver for Astronomy at Terahertz Frequencies (GREAT) is presented in Chapters 5 and 6. The brightnesses of atmospheric emission were fitted consistently to the simultaneous multi-band observation data from GREAT at 1.2-1.4 and 1.8-1.9 THz with a single set of parameters of the static atmospheric model. On the other hand, the cause of the inconsistency between the model parameters fitted from the 490 and 810 GHz data of SMART is found to be the lack of calibration of the effective cold load temperature. Besides the correctness of the atmospheric modeling, the stability of the receiver is also important for achieving optimal calibration accuracy. The stabilities of SMART and GREAT are analyzed with a special calibration procedure, namely the "load calibration". The effects of the drift and fluctuation of the receiver gain and noise temperature on calibration accuracy are discussed in Chapters 5 and 6. Alternative observing strategies are proposed to combat receiver instability. The methods and conclusions presented in this thesis are applicable to the atmospheric calibration of sub-millimeter astronomical observations up to at least 4.7 THz (the H channel frequency of GREAT) for observations carried out from ~4 to 14 km altitude. The procedures for receiver gain calibration and stability tests are applicable to other instruments using the same calibration approach as that for SMART and GREAT. The structure of the high-performance, modular, and extensible calibration program used and further developed for this thesis work is presented in Appendix C.
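
To make the attenuation correction concrete, here is a minimal sketch under the simplest possible assumption of a single isothermal atmospheric layer (a drastic simplification of the radiative-transfer modelling the thesis actually performs):

```python
import numpy as np

def zenith_opacity(t_sky, t_atm, airmass):
    """Zenith optical depth from the measured brightness of atmospheric emission,
    assuming one isothermal layer:  T_sky = T_atm * (1 - exp(-tau * airmass)).
    """
    return -np.log(1.0 - t_sky / t_atm) / airmass

def correct_attenuation(t_measured, tau_zenith, airmass):
    """Scale a measured source antenna temperature back above the atmosphere."""
    return t_measured * np.exp(tau_zenith * airmass)

# Illustrative numbers only:
tau = zenith_opacity(t_sky=180.0, t_atm=260.0, airmass=1.5)
print(tau, correct_attenuation(1.0, tau, 1.5))
```

In practice the frequency-dependent opacity comes from the radiative transfer model constrained by the fitted atmospheric parameters, not from a single-layer fit.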

Relevance: 30.00%

Abstract:

Neuroinflammation constitutes a major player in the etiopathology of neurodegenerative diseases (NDDs), orchestrating several neurotoxic pathways which in concert lead to neurodegeneration. A positive feedback loop occurs between inflammation, microglia activation, and misfolding processes that, alongside excitotoxicity and oxidative events, represent crucial features of this intricate scenario. The multi-layered nature of NDDs requires a deeper investigation of how these vicious cycles work, which could further help in the search for effective treatments. Electrophiles are critically involved in the modulation of a variety of neuroprotective responses. Thus, we envisioned their peculiar ability to switch biological activities on and off as a powerful tool for investigating the neurotoxic scenario driven by inflammation in NDDs. In particular, in this thesis project, we wanted to dissect at a molecular level the functional role of (pro)electrophilic moieties of previously synthesized thioesters of variously substituted trans-cinnamic acids, to identify crucial features which could interfere with amyloid aggregation as well as modulate Nrf2 and/or NF-κB activation. To this aim, we first synthesized new compounds to identify bioactive cores which could specifically modulate the intended target. Then, we systematically modified their structure to reach additional pathogenic pathways which could in tandem contribute to the inflammatory process. In particular, following the investigation of the mechanistic underpinnings involving the catechol feature in amyloid binding through the synthesis of new dihydroxyl derivatives, we incorporated the identified antiaggregating nucleus into constrained frames which could counter neuroinflammation also through the modulation of CB2Rs. In parallel, Nrf2 and/or NF-κB anti-inflammatory structural requirements were combined with the neuroprotective cores of pioglitazone, an antidiabetic drug endowed with MAO-B inhibitory properties, and memantine, which notably counters excitotoxicity. By acting as Swiss army knives, the new set of molecules emerges as a promising tool to deepen our insights into the complex scenario regulating NDDs.

Relevance: 30.00%

Abstract:

Intelligent systems are now inherent to society, supporting a synergistic human-machine collaboration. Beyond economic and climate factors, energy consumption is strongly affected by the performance of computing systems, and poorly functioning software may invalidate any improvement attempt. In addition, data-driven machine learning algorithms are the basis for human-centered applications, with interpretability being one of the most important features of computational systems. Software maintenance is a critical discipline for supporting automatic and life-long system operation. As most software registers its inner events by means of logs, log analysis is an approach to maintaining system operation. Logs are characterized as Big Data assembled in large-flow streams, being unstructured, heterogeneous, imprecise, and uncertain. This thesis addresses fuzzy and neuro-granular methods to provide maintenance solutions applied to anomaly detection (AD) and log parsing (LP), dealing with data uncertainty and identifying ideal time periods for detailed software analyses; LP provides a deeper semantic interpretation of the anomalous occurrences. The solutions evolve over time and are general-purpose, being highly applicable, scalable, and maintainable. Granular classification models, namely the Fuzzy set-Based evolving Model (FBeM), the evolving Granular Neural Network (eGNN), and the evolving Gaussian Fuzzy Classifier (eGFC), are compared on the AD problem. The evolving Log Parsing (eLP) method is proposed to approach automatic parsing applied to system logs. All the methods use recursive mechanisms to create, update, merge, and delete information granules according to the data behavior. For the first time in the evolving intelligent systems literature, the proposed method, eLP, is able to process streams of words and sentences. Regarding AD accuracy, FBeM achieved (85.64±3.69)%; eGNN reached (96.17±0.78)%; eGFC obtained (92.48±1.21)%; and eLP reached (96.05±1.04)%. Besides being competitive, eLP also generates a log grammar and presents a higher level of model interpretability.
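
To ground the log-parsing task the thesis addresses, here is a deliberately simple, generic template-extraction sketch (not the eLP method, which evolves granules over word and sentence streams); it only illustrates what "parsing" means for raw log lines:

```python
import re
from collections import Counter

def template_of(line):
    """Reduce a raw log line to a coarse template by masking variable fields:
    IP-like tokens, hex identifiers and integers become the wildcard <*>.
    """
    line = re.sub(r"\b\d{1,3}(\.\d{1,3}){3}\b", "<*>", line)   # IPv4 addresses
    line = re.sub(r"\b0x[0-9a-fA-F]+\b", "<*>", line)          # hex identifiers
    line = re.sub(r"\b\d+\b", "<*>", line)                     # plain integers
    return line

logs = [
    "connection from 10.0.0.7 port 51234",
    "connection from 10.0.0.9 port 40210",
    "worker 12 finished task 9981",
]
# Two lines collapse to one template; the third forms its own.
print(Counter(template_of(l) for l in logs))
```

An evolving parser such as eLP goes further: it maintains and revises its templates (granules) incrementally as the stream drifts, rather than relying on fixed masking rules.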