971 results for Non-linear Response
Abstract:
A time series is a sequence of observations made over time. Examples in public health include daily ozone concentrations, weekly admissions to an emergency department or annual expenditures on health care in the United States. Time series models are used to describe the dependence of the response at each time on predictor variables including covariates and possibly previous values in the series. Time series methods are necessary to account for the correlation among repeated responses over time. This paper gives an overview of time series ideas and methods used in public health research.
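As a rough illustration of the kind of model the overview describes (a response depending on a covariate and on its own previous values), the following is a minimal sketch, not taken from the paper: a synthetic weekly admissions series with an AR(1) term and one covariate, fitted by ordinary least squares. All variable names and numbers are made up for illustration.

```python
import numpy as np

# Hypothetical illustration: weekly emergency-department admissions regressed on
# a covariate (e.g., mean temperature) and on the previous week's admissions
# (an AR(1) term), fitted by ordinary least squares.
rng = np.random.default_rng(0)
n = 200
temp = rng.normal(15, 8, n)                 # covariate
y = np.empty(n)
y[0] = 100
for t in range(1, n):                       # simulate an AR(1) + covariate process
    y[t] = 30 + 0.6 * y[t - 1] + 0.8 * temp[t] + rng.normal(0, 5)

# Design matrix: intercept, lagged response, covariate
X = np.column_stack([np.ones(n - 1), y[:-1], temp[1:]])
beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
print("intercept, AR(1) coefficient, covariate coefficient:", beta)
```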
Abstract:
The response of atmospheric chemistry and dynamics to volcanic eruptions and to a decrease in solar activity during the Dalton Minimum is investigated with the fully coupled atmosphere–ocean chemistry general circulation model SOCOL-MPIOM (modeling tools for studies of SOlar Climate Ozone Links-Max Planck Institute Ocean Model) covering the time period 1780 to 1840 AD. We carried out several sensitivity ensemble experiments to separate the effects of (i) reduced solar ultraviolet (UV) irradiance, (ii) reduced solar visible and near-infrared irradiance, (iii) enhanced galactic cosmic ray intensity together with weaker solar energetic proton events and auroral electron precipitation, and (iv) volcanic aerosols. The imposed changes in UV irradiance and volcanic aerosols significantly influence stratospheric dynamics in the early 19th century, whereas changes in the visible part of the spectrum and in energetic particles have smaller effects. A reduction of UV irradiance by 15%, the highest currently discussed estimate of UV irradiance change caused by solar activity changes, causes a global ozone decrease below the stratopause of as much as 8% in the midlatitudes at 5 hPa, together with a significant stratospheric cooling of up to 2 °C in the mid-stratosphere and up to 6 °C in the lower mesosphere. Changes in energetic particle precipitation lead only to minor changes in the yearly averaged temperature fields in the stratosphere. Volcanic aerosols heat the tropical lower stratosphere, allowing more water vapour to enter the tropical stratosphere, which, via HOx reactions, decreases upper stratospheric and mesospheric ozone by roughly 4%. Conversely, heterogeneous chemistry on aerosols reduces stratospheric NOx, leading to a 12% ozone increase in the tropics, whereas a decrease in ozone of up to 5% is found over Antarctica in boreal winter. The linear superposition of the different contributions is not equivalent to the response obtained in a simulation in which all forcing factors are applied during the Dalton Minimum (DM); this effect is especially visible for NOx/NOy. Thus, this study also shows the non-linear behaviour of the coupled chemistry–climate system. Finally, we conclude that UV irradiance changes and volcanic eruptions in particular dominate the changes in ozone, temperature and dynamics, while the NOx field is dominated by energetic particle precipitation. Changes in visible radiation have only very minor effects on both stratospheric dynamics and chemistry.
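The abstract's key methodological point is that the sum of the single-forcing responses differs from the all-forcing response. A minimal sketch of how such a non-additivity residual can be quantified is given below; the anomaly values are invented placeholders, not model output from this study.

```python
import numpy as np

# Hypothetical anomalies (%) from single-forcing sensitivity experiments,
# each expressed relative to the control run; values are illustrative only.
response = {
    "uv":        np.array([-6.0, -4.0, -2.0]),
    "visible":   np.array([-0.3, -0.2, -0.1]),
    "particles": np.array([-0.5, -1.0, -0.4]),
    "volcanic":  np.array([ 3.0,  1.0, -2.5]),
}
all_forcings = np.array([-5.0, -5.5, -5.5])   # response with all forcings applied together

linear_sum = sum(response.values())
residual = all_forcings - linear_sum          # non-zero residual indicates non-linear interaction
print("linear sum of single-forcing responses:", linear_sum)
print("non-additivity residual:", residual)
```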
Abstract:
Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20th century trends in surface air temperature and carbon uptake are reasonably well simulated when compared to observed trends. Land carbon fluxes show much more variation between models than ocean carbon fluxes, and recent land fluxes appear to be slightly underestimated. It is possible that recent modelled climate trends or climate–carbon feedbacks are overestimated, resulting in too much land carbon loss, or that carbon uptake due to CO2 and/or nitrogen fertilization is underestimated. Several one-thousand-year-long, idealized 2 × and 4 × CO2 experiments are used to quantify standard model characteristics, including transient and equilibrium climate sensitivities, and climate–carbon feedbacks. The values from EMICs generally fall within the range given by general circulation models. Seven additional historical simulations, each including a single specified forcing, are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The surface air temperature response is the linear sum of the responses to the individual forcings, while the carbon cycle response shows a non-linear interaction between land-use change and CO2 forcings for some models. Finally, the pre-industrial portions of the last millennium simulations are used to assess historical model carbon–climate feedbacks. Given the specified forcing, there is a tendency for the EMICs to underestimate the drop in surface air temperature and CO2 between the Medieval Climate Anomaly and the Little Ice Age estimated from palaeoclimate reconstructions. This in turn could be a result of unforced variability within the climate system, uncertainty in the reconstructions of temperature and CO2, errors in the reconstructions of forcing used to drive the models, or the incomplete representation of certain processes within the models. Given the forcing datasets used in this study, the models calculate significant land-use emissions over the pre-industrial period. This implies that land-use emissions might need to be taken into account when making estimates of climate–carbon feedbacks from palaeoclimate reconstructions.
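The abstract mentions using idealized CO2 experiments to quantify equilibrium climate sensitivity. One common way to estimate it from such a run (not necessarily the method used in this study) is a Gregory-style regression of the top-of-atmosphere imbalance against warming, extrapolated to zero imbalance. The sketch below uses synthetic data standing in for annual means from an abrupt 2×CO2 experiment; the forcing and feedback values are assumptions.

```python
import numpy as np

# Hedged sketch of a Gregory-style regression: estimate equilibrium climate
# sensitivity (ECS) by regressing the net TOA radiative imbalance N against
# surface warming dT and extrapolating to N = 0. Data below are synthetic.
rng = np.random.default_rng(1)
years = 1000
F_2x = 3.7                      # assumed 2xCO2 forcing (W m-2)
lam = 1.2                       # assumed feedback parameter (W m-2 K-1)
dT = (F_2x / lam) * (1 - np.exp(-np.arange(years) / 300.0)) + rng.normal(0, 0.05, years)
N = F_2x - lam * dT + rng.normal(0, 0.2, years)

slope, intercept = np.polyfit(dT, N, 1)   # N ≈ intercept + slope * dT
ecs = -intercept / slope                  # warming at which N reaches zero
print(f"estimated ECS ≈ {ecs:.2f} K (true value {F_2x / lam:.2f} K)")
```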
Abstract:
Understanding natural climate variability and its driving factors is crucial to assessing future climate change. Therefore, comparing proxy-based climate reconstructions with forcing factors as well as comparing these with paleoclimate model simulations is key to gaining insights into the relative roles of internal versus forced variability. A review of the state of modelling of the climate of the last millennium prior to the CMIP5–PMIP3 (Coupled Model Intercomparison Project Phase 5–Paleoclimate Modelling Intercomparison Project Phase 3) coordinated effort is presented and compared to the available temperature reconstructions. Simulations and reconstructions broadly agree on reproducing the major temperature changes and suggest an overall linear response to external forcing on multidecadal or longer timescales. Internal variability is found to have an important influence at hemispheric and global scales. The spatial distribution of simulated temperature changes during the transition from the Medieval Climate Anomaly to the Little Ice Age disagrees with that found in the reconstructions. Thus, either internal variability is a possible major player in shaping temperature changes through the millennium or the model simulations have problems realistically representing the response pattern to external forcing. A last millennium transient climate response (LMTCR) is defined to provide a quantitative framework for analysing the consistency between simulated and reconstructed climate. Beyond an overall agreement between simulated and reconstructed LMTCR ranges, this analysis is able to single out specific discrepancies between some reconstructions and the ensemble of simulations. The disagreement is found in the cases where the reconstructions show reduced covariability with external forcings or when they present high rates of temperature change.
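The exact definition of the last millennium transient climate response (LMTCR) is not given in this abstract. One plausible reading of a forcing-scaled response metric is a regression of multidecadally smoothed temperature anomalies on total external forcing; the sketch below illustrates only that generic idea with synthetic series, and may differ from the paper's actual definition.

```python
import numpy as np

# Hedged sketch: regress smoothed hemispheric temperature anomalies on total
# external forcing to obtain a transient-response-like scaling (K per W m-2).
# Series below are synthetic placeholders, not reconstructions or simulations.
rng = np.random.default_rng(2)
years = 1000
forcing = rng.normal(0, 0.5, years).cumsum() * 0.01      # slowly varying forcing (W m-2)
temp = 0.4 * forcing + rng.normal(0, 0.1, years)          # temperature anomalies (K)

def smooth(x, window=31):
    """Running mean used to isolate multidecadal and longer variability."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

slope, _ = np.polyfit(smooth(forcing), smooth(temp), 1)
print(f"forcing-scaled response ≈ {slope:.2f} K per W m-2")
```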
Abstract:
Global wetlands are believed to be climate sensitive and are the largest natural emitters of methane (CH4). Increased wetland CH4 emissions could act as a positive feedback to future warming. The Wetland and Wetland CH4 Inter-comparison of Models Project (WETCHIMP) investigated our present ability to simulate large-scale wetland characteristics and corresponding CH4 emissions. To ensure inter-comparability, we used a common experimental protocol driving all models with the same climate and carbon dioxide (CO2) forcing datasets. The WETCHIMP experiments were conducted for model equilibrium states as well as transient simulations covering the last century. Sensitivity experiments investigated model response to changes in selected forcing inputs (precipitation, temperature, and atmospheric CO2 concentration). Ten models participated, covering the spectrum from simple to relatively complex, including models tailored for either regional or global simulations. The models also varied in their methods for calculating wetland size and location, with some simulating wetland area prognostically while others rely on remotely sensed inundation datasets, or on an approach intermediate between the two. Four major conclusions emerged from the project. First, the models disagree extensively in their simulations of wetland areal extent and CH4 emissions, in both space and time. Simple metrics of wetland area, such as the latitudinal gradient, show large variability, principally between models that use inundation dataset information and those that independently determine wetland area. Agreement between the models improves for zonally summed CH4 emissions, but large variation between the models remains. For annual global CH4 emissions, the models vary by ±40% of the all-model mean (190 Tg CH4 yr−1). Second, all models show a strong positive response to increased atmospheric CO2 concentrations (857 ppm) in both CH4 emissions and wetland area. In response to increasing global temperatures (+3.4 °C, applied globally and spatially uniformly), the models on average decreased wetland area and CH4 fluxes, primarily in the tropics, but the magnitude and sign of the response varied greatly. Models were least sensitive to increased global precipitation (+3.9%, applied globally and spatially uniformly), with a consistent small positive response in CH4 fluxes and wetland area. Results from the 20th century transient simulation show that interactions between climate forcings could have strong non-linear effects. Third, we presently do not have wetland methane observation datasets adequate to evaluate model fluxes at a spatial scale comparable to model grid cells (commonly 0.5°). This limitation severely restricts our ability to model global wetland CH4 emissions with confidence. Our simulated wetland extents are also difficult to evaluate because of extensive disagreements between wetland mapping and remotely sensed inundation datasets. Fourth, the large range in predicted CH4 emission rates leads to the conclusion that there is both substantial parameter and structural uncertainty in large-scale CH4 emission models, even after uncertainties in wetland areas are accounted for.
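The ±40% spread around the all-model mean is a simple summary statistic. The sketch below shows one way such a figure can be computed; the ten emission totals are invented placeholders, not the participating models' actual results.

```python
import numpy as np

# Illustrative only: hypothetical annual global wetland CH4 emissions (Tg CH4/yr)
# from ten models, used to show how a "±x% of the all-model mean" spread
# statistic can be computed.
emissions = np.array([115, 140, 160, 175, 190, 195, 205, 220, 245, 255], dtype=float)

mean = emissions.mean()
spread_pct = (emissions.max() - emissions.min()) / 2 / mean * 100
print(f"all-model mean: {mean:.0f} Tg CH4/yr, spread: ±{spread_pct:.0f}% of the mean")
```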
Abstract:
Because they alter the flow and sediment regimes of rivers, dams are often regarded as the dominant form of human impact on fluvial systems. Dams can decrease the flux of water and sediments, leading to channel changes such as upstream aggradation and downstream degradation. The opposite effects occur when dams are removed. Channel degradation often requires further intervention in the form of river bed and bank protection works. The situation becomes more complex in river systems impacted by a series of dams, owing to feedback processes between the different system compartments. A number of studies have recently investigated geomorphic systems using connectivity approaches to improve the understanding of geomorphic system response to change. This paper presents a case study investigating the impact of dam construction, dam removal and dam-related river bed and bank protection measures on the sediment connectivity and channel morphology of the Fugnitz and the Kaja Rivers using a combination of DEM analyses, field surveys and landscape evolution modelling. For both river systems the results revealed low sediment connectivity, accompanied by a fine river bed sediment facies, in river sections upstream of active dams and of removed dams with protection measures. Conversely, high sediment connectivity, accompanied by a coarse river bed sediment facies, was observed in river sections located downstream either of active dams or of removed dams with upstream protection. In terms of channel changes, significant channel degradation was observed at locations downstream of active dams and of removed dams. Channel bed and bank protection measures prevent erosion and channel slope recovery after dam removal. Landscape evolution modelling revealed a complex geomorphic response to dam construction and dam removal, as sediment output rates, and therefore geomorphic processes, were shown to behave in a non-linear manner. These insights are deemed to have major implications for river management and conservation, as the quality and state of riverine habitats are determined by channel morphology and river bed sediment composition.
Whence a healthy mind: Correlation of physical fitness and academic performance among schoolchildren
Abstract:
Background. Public schools are a key forum in the fight for child health because of the opportunities they present for physical activity and fitness surveillance. However, because schools are evaluated and funded on the basis of standardized academic performance rather than physical activity, empirical research evaluating the connections between fitness and academic performance is needed to justify curriculum allocations to physical activity. Methods. Analyses were based on a convenience sample of 315,092 individually matched standardized academic (TAKS™) and fitness (FITNESSGRAM®) test records collected by 13 Texas school districts under state mandates. We categorized each fitness result into quintiles by age and gender and used a mixed effects regression model to compare the academic performance of the top and bottom fitness groups for each fitness test and grade level combination. Results. All fitness variables except BMI showed significant, positive associations with academic performance after sociodemographic covariate adjustments, with effect sizes ranging from 0.07 (95% CI: 0.05, 0.08) for girls' trunk lift-TAKS reading to 0.34 (0.32, 0.35) for boys' cardiovascular-TAKS math. Cardiovascular fitness showed the largest inter-quintile difference in TAKS score (32-75 points), followed by curl-ups. After an additional adjustment for BMI and curl-ups, cardiovascular associations peaked in 8th-9th grades (maximum inter-quintile difference 142 TAKS points; effect size 0.75 (0.69, 0.82) for 8th grade girls' math) and showed dose-response characteristics across quintiles (p<0.001 for both genders and outcomes). BMI analysis demonstrated a limited, non-linear association with academic performance after adjustment for sociodemographic, cardiovascular fitness and curl-up variables. Low-BMI Hispanic high school boys showed significantly lower TAKS scores than the moderate (but not high) BMI group. High-BMI non-Hispanic white high school girls showed significantly lower scores than the moderate (but not low) BMI group. Conclusions. In this study, fitness was strongly and significantly related to academic performance. Cardiovascular fitness showed a distinct dose-response association with academic performance independent of other sociodemographic and fitness variables. The association peaked in late middle to early high school. The independent association of BMI with academic performance was found in only two sub-groups and was non-linear, with both low and high BMI posing risk relative to moderate BMI but not to each other. In light of our findings, we recommend that policymakers consider PE mandates for middle and high school and require linkage of academic and fitness records to facilitate longitudinal surveillance. School administrators should consider increasing PE time in pursuit of higher academic test scores, and PE practitioners should emphasize cardiovascular fitness over BMI reduction.
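The analysis design described here (within age-gender quintiles of fitness, then a mixed effects regression on academic scores) can be sketched as follows. This is a hedged illustration of the general approach, not the study's code; the file name and column names (matched_fitness_taks.csv, taks_math, cardio, district, etc.) are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hedged sketch of the analysis design: rank a fitness result into quintiles
# within age-gender strata, then model academic scores with a mixed effects
# regression using school district as a random intercept.
df = pd.read_csv("matched_fitness_taks.csv")        # hypothetical matched file

df["cardio_q"] = (
    df.groupby(["age", "gender"])["cardio"]
      .transform(lambda x: pd.qcut(x, 5, labels=False, duplicates="drop"))
)

model = smf.mixedlm(
    "taks_math ~ C(cardio_q) + C(gender) + C(ethnicity) + econ_disadv",
    data=df,
    groups=df["district"],                           # random intercept per district
)
result = model.fit()
print(result.summary())
```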
Abstract:
With continual improvements in brachytherapy source designs and techniques, a method of 3D dosimetry for treatment dose verification would better ensure accurate patient radiotherapy. This study first aimed to evaluate the 3D dose distributions of the low-dose-rate (LDR) Amersham 6711 Oncoseed™ using PRESAGE® dosimeters, to establish PRESAGE® as a suitable brachytherapy dosimeter. The new AgX100 125I seed model (Theragenics Corporation) was then characterized using PRESAGE® following the TG-43 protocol. PRESAGE® dosimeters are solid, polyurethane-based 3D dosimeters doped with radiochromic leuco dyes that produce a linear optical density response to radiation dose. For this project, the radiochromic response in PRESAGE® was captured using optical-CT scanning (632 nm) and the final 3D dose matrix was reconstructed in MATLAB. An Amersham 6711 seed with an air-kerma strength of approximately 9 U was used to irradiate two dosimeters to 2 Gy and 11 Gy at 1 cm to evaluate dose rates in the r=1 cm to r=5 cm region. The dosimetry parameters were compared to the values published in the updated AAPM Report No. 51 (TG-43U1). An AgX100 seed with an air-kerma strength of about 6 U was used to irradiate two dosimeters to 3.6 Gy and 12.5 Gy at 1 cm. The dosimetry parameters for the AgX100 were compared to values measured in previous Monte Carlo and experimental studies. In general, the measured dose rate constant, anisotropy function, and radial dose function for the Amersham 6711 agreed with consensus values to better than 5% in the r=1 to r=3 cm region. The dose rates and radial dose functions measured for the AgX100 agreed with the MCNPX and TLD-measured values within 3% in the r=1 to r=3 cm region. The measured anisotropy function in PRESAGE® showed relative differences of up to 9% from the MCNPX-calculated values. It was determined that the post-irradiation optical density change over several days was non-linear in different dose regions, and therefore the dose values in the r=4 to r=5 cm regions had higher uncertainty due to this effect. This study demonstrated that, within a radial distance of 3 cm, brachytherapy dosimetry in PRESAGE® can be accurate to within 5% as long as irradiation times are within 48 hours.
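For readers unfamiliar with TG-43, the radial dose function referred to above is, under the point-source approximation, the measured dose rate corrected for inverse-square geometry and normalised at 1 cm. The sketch below illustrates that step only; the dose-rate values are illustrative placeholders, not measurements from this study.

```python
import numpy as np

# Minimal TG-43-style sketch under the point-source approximation: the radial
# dose function g(r) is the dose rate divided by the geometry factor 1/r^2,
# normalised at r0 = 1 cm. Dose-rate values below are illustrative only.
r = np.array([1.0, 2.0, 3.0, 4.0, 5.0])                   # radial distance (cm)
dose_rate = np.array([1.00, 0.22, 0.085, 0.042, 0.023])   # relative dose rate, illustrative

G = 1.0 / r**2                                            # point-source geometry function
r0 = 1.0
g_r = (dose_rate / G) / (dose_rate[0] / (1.0 / r0**2))    # normalise at r0 = 1 cm
print(dict(zip(r, np.round(g_r, 3))))
```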
Abstract:
Conservative procedures in low-dose risk assessment are used to set safety standards for known or suspected carcinogens. However, the assumptions upon which the methods are based and the effects of these methods are not well understood. To minimize the number of false negatives and to reduce the cost of bioassays, animals are given very high doses of potential carcinogens. Results must then be extrapolated to much smaller doses to set safety standards for risks such as one per million. There are a number of competing methods that add a conservative safety factor into these calculations. A method of quantifying the conservatism of these methods was described and tested on eight procedures used in setting low-dose safety standards. The results using these procedures were compared by computer simulation and by the use of data from a large-scale animal study. The method consisted of determining a "true safe dose" (tsd) according to an assumed underlying model. If one assumes that Y = the probability of cancer = P(d), a known mathematical function of the dose, then by setting Y to some predetermined acceptable risk, one can solve for d, the model's "true safe dose". Simulations were generated, assuming a binomial distribution, for an artificial bioassay. The eight procedures were then used to determine a "virtual safe dose" (vsd) that estimates the tsd, assuming a risk of one per million. A ratio R = (tsd - vsd)/vsd was calculated for each "experiment" (simulation). The mean R over 500 simulations and the probability that R < 0 were used to measure the over- and under-conservatism of each procedure. The eight procedures included Weil's method, Hoel's method, the Mantel-Bryan method, the improved Mantel-Bryan, Gross's method, fitting a one-hit model, Crump's procedure, and applying Rai and Van Ryzin's method to a Weibull model. None of the procedures performed uniformly well for all types of dose-response curves. When the data were linear, the one-hit model, Hoel's method, or the Gross-Mantel method worked reasonably well. However, when the data were non-linear, these same methods were overly conservative. Crump's procedure and the Weibull model performed better in these situations.
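A minimal worked example of the conservatism ratio described above, assuming the "truth" follows a one-hit model: solve P(d) equal to the acceptable risk for the true safe dose, then compare a procedure's virtual safe dose against it. The slope and the vsd value below are placeholders, not results from the study.

```python
import numpy as np

# Sketch of the conservatism ratio R = (tsd - vsd) / vsd, assuming the truth
# follows a one-hit model P(d) = 1 - exp(-b * d). The vsd is a placeholder
# standing in for the output of one of the eight low-dose procedures.
b = 2.0                                   # assumed one-hit slope (per unit dose)
acceptable_risk = 1e-6

# Solve P(d) = acceptable_risk for d to get the "true safe dose".
tsd = -np.log(1.0 - acceptable_risk) / b

vsd = 2.5e-7                              # placeholder virtual safe dose from a procedure
R = (tsd - vsd) / vsd                     # R > 0: the procedure is conservative (vsd < tsd)
print(f"tsd = {tsd:.3e}, vsd = {vsd:.3e}, R = {R:.2f}")
```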
Abstract:
2-Chloro-9-(2-deoxy-2-fluoro-β-D-arabinofuranosyl)adenine (Cl-F-ara-A) is a new deoxyadenosine analogue which is resistant to phosphorolytic cleavage and deamination, and exhibits therapeutic activity against both leukemia and solid tumors in experimental systems. To characterize its mechanism of cytotoxicity, the present study investigated the cellular pharmacology and the biochemical and molecular mechanisms of action of Cl-F-ara-A, from entry of the drug into the cell, chemical conversion to active metabolites, and action on different cellular enzymes, to the final programmed cell death response to drug treatment. Cl-F-ara-A exhibited potent inhibitory action on DNA synthesis in a concentration-dependent and irreversible manner. The mono-, di-, and triphosphates of Cl-F-ara-A accumulated in cells, and their elimination was non-linear with a prolonged terminal phase, which resulted in prolonged dNTP depression. Ribonucleotide reductase activity was inversely correlated with the cellular Cl-F-ara-ATP level, and the inhibition of the reductase was saturated at higher cellular Cl-F-ara-ATP concentrations. The sustained inhibition of ribonucleotide reductase and the consequent depletion of deoxynucleotide triphosphate pools result in a cellular Cl-F-ara-ATP to dATP ratio which favors analogue incorporation into DNA. Incubation of CCRF-CEM cells with Cl-F-ara-A resulted in the incorporation of Cl-F-ara-AMP into DNA. A much smaller amount was associated with RNA, suggesting that Cl-F-ara-A is a more DNA-directed compound. The site of Cl-F-ara-AMP incorporation in DNA was related to the ratio of the cellular concentrations of the analogue triphosphate and the natural substrate dATP. Clonogenicity assays showed a strong inverse correlation between cell survival and Cl-F-ara-AMP incorporation into DNA, suggesting that the incorporation of Cl-F-ara-A monophosphate into DNA is critical for the cytotoxicity of Cl-F-ara-A. Cl-F-ara-ATP competed with dATP for incorporation into the A-site of the extending DNA strand catalyzed by both DNA polymerase α and ε. The incorporation of Cl-F-ara-AMP into DNA resulted in termination of DNA strand elongation, with the most pronounced effect observed at Cl-F-ara-ATP:dATP ratios >1. The presence of Cl-F-ara-AMP at the 3′-terminus of DNA also resulted in an increased incidence of nucleotide misincorporation at the following nucleotide position. The DNA termination and the nucleotide misincorporation induced by the incorporation of Cl-F-ara-AMP into DNA may contribute to the cytotoxicity of Cl-F-ara-A.
Abstract:
We present Plio-Pleistocene records of sediment color, %CaCO3, foraminifer fragmentation, benthic carbon isotopes (δ13C) and radiogenic isotopes (Sr, Nd, Pb) of the terrigenous component from IODP Site U1313, a reoccupation of benchmark subtropical North Atlantic Ocean DSDP Site 607. We show that (inter)glacial cycles in sediment color and %CaCO3 pre-date major northern hemisphere glaciation and are unambiguously and consistently correlated to benthic oxygen isotopes back to 3.3 million years ago (Ma), and intermittently so probably back to the Miocene/Pliocene boundary. We show these lithological cycles to be driven by enhanced glacial fluxes of terrigenous material (eolian dust), not carbonate dissolution (the classic interpretation). Our radiogenic isotope data indicate a North American source for this dust (~3.3-2.4 Ma), in keeping with the interpreted source of terrestrial plant wax-derived biomarkers deposited at Site U1313. Yet our data indicate a mid-latitude provenance regardless of (inter)glacial state, a finding that is inconsistent with the biomarker-inferred importance of glaciogenic mechanisms of dust production and transport. Moreover, we find that the relation between the biomarker and lithogenic components of dust accumulation is distinctly non-linear. Both records show a jump in glacial rates of accumulation from Marine Isotope Stage (MIS) G6 (2.72 Ma) onwards, but the amplitude of this signal is about 3-8 times greater for biomarkers than for dust and is particularly extreme during MIS 100 (2.52 Ma). We conclude that North America shifted abruptly to a distinctly more arid glacial regime from MIS G6, but major shifts in glacial North American vegetation biomes and regional wind fields (exacerbated by the growth of a large Laurentide Ice Sheet during MIS 100) likely explain amplification of this signal in the biomarker records. Our findings are consistent with wetter-than-modern reconstructions of North American continental climate under the warm, high-CO2 conditions of the Early Pliocene but contrast with most model predictions for the response of the hydrological cycle to anthropogenic warming over the coming 50 years (poleward expansion of the subtropical dry zones).
Abstract:
Anthropogenic CO2 emissions have exacerbated two environmental stressors, global climate warming and ocean acidification (OA), that have serious implications for marine ecosystems. Coral reefs are vulnerable to climate change yet few studies have explored the potential for interactive effects of warming temperature and OA on an important coral reef calcifier, crustose coralline algae (CCA). Coralline algae serve many important ecosystem functions on coral reefs and are one of the most sensitive organisms to ocean acidification. We investigated the effects of elevated pCO2 and temperature on calcification of Hydrolithon onkodes, an important species of reef-building coralline algae, and the subsequent effects on susceptibility to grazing by sea urchins. H. onkodes was exposed to a fully factorial combination of pCO2 (420, 530, 830 µatm) and temperature (26, 29 °C) treatments, and calcification was measured by the change in buoyant weight after 21 days of treatment exposure. Temperature and pCO2 had a significant interactive effect on net calcification of H. onkodes that was driven by the increased calcification response to moderately elevated pCO2. We demonstrate that the CCA calcification response was variable and non-linear, and that there was a trend for highest calcification at ambient temperature. H. onkodes then was exposed to grazing by the sea urchin Echinothrix diadema, and grazing was quantified by the change in CCA buoyant weight from grazing trials. E. diadema removed 60% more CaCO3 from H. onkodes grown at high temperature and high pCO2 than at ambient temperature and low pCO2. The increased susceptibility to grazing in the high pCO2 treatment is among the first evidence indicating the potential for cascading effects of OA and temperature on coral reef organisms and their ecological interactions.
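The fully factorial pCO2 × temperature design described above is typically analysed with a two-way model including an interaction term. The sketch below shows that generic analysis, not the study's own script; the file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hedged sketch of a two-way factorial analysis: net calcification (change in
# buoyant weight) modelled against pCO2 and temperature treatments and their
# interaction. File and column names are hypothetical placeholders.
df = pd.read_csv("hydrolithon_calcification.csv")    # columns: pco2, temp, calcification

model = smf.ols("calcification ~ C(pco2) * C(temp)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)         # interaction row tests pCO2 x temperature
print(anova_table)
```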
Abstract:
A large number of reinforced concrete (RC) frame structures built in earthquake-prone areas such as Haiti are vulnerable to strong ground motions. Structures in developing countries need low-cost seismic retrofit solutions to reduce their vulnerability. This paper investigates the feasibility of using masonry infill walls to reduce deformations and damage caused by strong ground motions in brittle and weak RC frames designed only for gravity loads. A numerical experiment was conducted in which several idealized prototypes representing RC frame structures of school buildings damaged during the Port-au-Prince earthquake (Haiti, 2010) were strengthened by adding elements representing masonry infill walls arranged in different configurations. Each configuration was characterized by the ratio Rm of the area of walls in the direction of the ground motion (in plan) installed in each story to the total floor area. The numerical representations of these idealized RC frame structures with different values of Rm were (hypothetically) subjected to three major earthquakes with peak ground accelerations of approximately 0.5g. The results of the non-linear dynamic response analyses were summarized in tentative relationships between Rm and four parameters commonly used to characterize the seismic response of structures: interstory drift, Park and Ang indexes of damage, and total amount of energy dissipated by the main frame. It was found that Rm=4% is a reasonable minimum design value for seismic retrofitting purposes in cases in which available resources are not sufficient to afford conventional retrofit measures.
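Two of the quantities used to summarise these analyses are straightforward to compute. The sketch below evaluates the wall-area ratio Rm and a Park-Ang damage index from illustrative numbers; the Park-Ang form used here is the commonly cited one and may differ in detail from the paper's implementation, and all values are made up.

```python
# Illustrative computation of the wall-area ratio Rm and a Park-Ang damage index.
# Park-Ang form assumed here: DI = d_max/d_u + beta * E_h / (F_y * d_u).
# All numbers are invented for illustration, not taken from the study.

# Wall-area ratio: infill wall area in the ground-motion direction / total floor area
wall_area_per_story = 12.0      # m^2 of infill in the direction considered
floor_area = 300.0              # m^2 per story
Rm = wall_area_per_story / floor_area
print(f"Rm = {Rm:.1%}")         # e.g. 4% was found to be a reasonable minimum

# Park-Ang damage index for one member (illustrative values)
d_max = 0.035                   # peak displacement demand (m)
d_u = 0.060                     # ultimate displacement capacity (m)
F_y = 250.0                     # yield strength (kN)
E_h = 9.0                       # hysteretic energy dissipated (kN*m)
beta = 0.1                      # calibration parameter
DI = d_max / d_u + beta * E_h / (F_y * d_u)
print(f"Park-Ang DI = {DI:.2f}")  # DI near 1 indicates collapse-level damage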
Abstract:
When we interact with the environment in daily life (using a toothbrush, opening doors, using a cell phone, etc.) or in professional situations (medical interventions, manufacturing processes, etc.), we typically perform dexterous manipulations that involve multiple fingers of both hands. Multi-finger haptic methods can therefore provide a more natural and realistic human-machine interface and enhance immersion when interacting with simulated or remote environments. Most commercial devices allow haptic interaction with only one contact point, which may be sufficient for exploration or palpation tasks but is not enough to perform advanced object manipulations such as grasping. In this thesis, I investigate the mechanical design, control and applications of a modular haptic device that can provide force feedback to the index, middle and thumb fingers of the user. The mechanical design is optimized with a multi-objective function to achieve low inertia, a large workspace, high manipulability, and force feedback of up to 3 N within the workspace; the bandwidth and rigidity of the device are assessed through simulation and real experimentation. One of the most important areas when designing haptic devices is the end-effector, since it is the part in contact with the user. This thesis describes the design and evaluation of a thimble-like, lightweight, user-adaptable and cost-effective end-effector that incorporates four contact force sensors, allowing estimation of the normal and tangential forces applied by a user during manipulation of virtual and real objects. The design of a real-time, modular control architecture for multi-finger haptic interaction is also described. Requirements for the control of multi-finger haptic devices are explored: a large number of instrumentation and control signals must be acquired, processed and exchanged over the network, and mathematical computations such as the direct and inverse kinematics, the Jacobian, and grasp detection algorithms must run in real time to assure the high fidelity required for haptic interaction. The hardware control architecture comprises an FPGA for the low-level controller and a real-time controller that manages the more complex calculations (Jacobian, kinematics, etc.); this provides a compact and scalable solution with the required computational capability while guaranteeing a 1 kHz control loop. Set-ups for dexterous virtual and remote manipulation are described. Moreover, a new algorithm, the iterative kinematic decoupling method, was implemented to solve the inverse kinematics of a robotic manipulator and compared with other current methods. To understand the importance of multi-modal interaction including haptics, a subject study was carried out, in collaboration with neuroscientists from the Technion Israel Institute of Technology, to identify the sensory stimuli that correlate with faster response times and enhanced accuracy. Comparing grasping response times for unimodal (auditory, visual and haptic) events with those for bimodal and trimodal combinations shows that, in grasping tasks, the synchronized motion of the fingers that generates the grasping response relies mainly on haptic cues. This processing-speed advantage of haptic cues suggests that multimodal haptic virtual environments are superior in generating motor contingencies and enhance the plausibility of events. Systems that include haptic perception thus give users more time in the cognitive stages to fill in missing information creatively and form a richer experience. A major application of haptic devices is the design of new simulators to train manual skills in the medical sector. In collaboration with physical therapists from Griffith University in Australia, a simulator for hand rehabilitation exercises was developed. The non-linear stiffness properties of the metacarpophalangeal joint of the index finger were estimated using the designed end-effector, and these parameters were implemented in a scenario that simulates the behavior of the human hand and allows haptic interaction through the designed device. The potential applications of this simulator are related to the training and education of physical therapy students. Finally, new methods are presented to simultaneously control the position and orientation of a robotic manipulator and the grasp of a robotic hand when interacting with large real environments. The reachable workspace is extended by automatically switching between rate and position control modes, and the user's hand gesture is recognized from the relative movements of the index, middle and thumb fingers during the early stages of the approach-to-the-object phase and mapped to the robotic hand actuators. These methods were validated in dexterous manipulation of objects with a robotic manipulator and different robotic hands, in a research collaboration with the Harvard BioRobotics Laboratory; the experiments show that the overall task time is reduced and that the methods allow full dexterity and correct completion of dexterous manipulations.
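The workspace-extension idea mentioned above (automatic switching between position and rate control) can be sketched in a few lines. This is a minimal, generic illustration; the thresholds, gains and one-dimensional treatment are assumptions and do not reproduce the thesis implementation.

```python
import numpy as np

# Minimal sketch of hybrid position/rate control for workspace extension:
# inside a central region the slave target tracks the (scaled) master position;
# near the workspace boundary the command switches to rate control so the
# operator can keep drifting the slave frame outward.
WORKSPACE_RADIUS = 0.10      # m, usable master workspace (assumed)
RATE_ZONE = 0.08             # m, beyond this radius switch to rate control (assumed)
POSITION_SCALE = 2.0         # master-to-slave position scaling (assumed)
RATE_GAIN = 0.5              # (m/s of slave drift) per metre of master offset (assumed)

def slave_target(master_pos, slave_offset, dt):
    """Return the new slave target and updated drift offset for one control step."""
    if abs(master_pos) < RATE_ZONE:
        # Position mode: direct scaled mapping plus the accumulated offset
        return POSITION_SCALE * master_pos + slave_offset, slave_offset
    # Rate mode: master deflection beyond the zone commands a drift velocity
    drift = RATE_GAIN * (master_pos - np.sign(master_pos) * RATE_ZONE)
    slave_offset += drift * dt
    return POSITION_SCALE * master_pos + slave_offset, slave_offset

offset = 0.0
for step, m in enumerate([0.02, 0.05, 0.09, 0.095, 0.06]):   # sample master positions (m)
    target, offset = slave_target(m, offset, dt=0.001)        # 1 kHz control loop
    print(f"step {step}: master={m:.3f} m -> slave target={target:.4f} m")
```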
Abstract:
This article presents a procedure to predict the flutter speed based on real-time tuning of a quasi-non-linear aeroelastic model. A two-dimensional non-linear (freeplay) aeroelastic model is implemented in MATLAB/Simulink under incompressible aerodynamic conditions, and a comparison with real compressible conditions is provided. Once the numerical validation is accomplished, a parametric aeroelastic model is built in order to describe the proposed procedure and to help reduce the number of flight hours needed to expand the flutter envelope.
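For orientation, a flutter (or divergence) speed for the linear counterpart of such a two-dimensional pitch-plunge model can be estimated by scanning airspeed and checking when an eigenvalue of the aeroelastic state matrix gains a positive real part. The sketch below uses quasi-steady aerodynamics and illustrative parameters; it does not reproduce the paper's freeplay non-linearity or its real-time tuning procedure.

```python
import numpy as np

# Hedged sketch: instability onset speed of a linear pitch-plunge section model
# with quasi-steady aerodynamics, found by scanning airspeed until any
# eigenvalue of the state matrix has a positive real part. Parameters are
# illustrative per-unit-span values, not the paper's model data.
rho, chord, CLa, e = 1.225, 0.5, 2 * np.pi, 0.125   # air density, chord, lift slope, AC-EA offset
m, Sa, Ia = 15.0, 0.3, 0.3                          # mass, static unbalance, pitch inertia
kh, ka, ch, ca = 1.0e4, 500.0, 5.0, 0.5             # structural stiffness and damping

Ms = np.array([[m, Sa], [Sa, Ia]])

def max_real_eig(U):
    q = 0.5 * rho * U**2
    qS = q * chord * CLa                            # quasi-steady lift-curve term
    C = np.array([[ch + qS / U, 0.0],
                  [-qS * e / U, ca]])
    K = np.array([[kh, qS],
                  [0.0, ka - qS * e]])
    Minv = np.linalg.inv(Ms)
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-Minv @ K, -Minv @ C]])
    return np.max(np.linalg.eigvals(A).real)

for U in np.arange(1.0, 100.0, 0.5):
    if max_real_eig(U) > 0.0:
        print(f"instability onset near U = {U:.1f} m/s")
        break
```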