898 results for Time and hardware redundancy
DESIGN AND IMPLEMENTATION OF DYNAMIC-PROGRAMMING-BASED DISCRETE POWER LEVEL SMART HOME SCHEDULING USING FPGA
Abstract:
With the development and capabilities of the Smart Home system, people today are entering an era in which household appliances are no longer just controlled by people, but also operated by a Smart System. This results in a more efficient, convenient, comfortable, and environmentally friendly living environment. A critical part of the Smart Home system is Home Automation, which means that there is a Micro-Controller Unit (MCU) to control all the household appliances and schedule their operating times. This reduces electricity bills by shifting power consumption from on-peak hours to off-peak hours, taking advantage of differing hourly prices. In this paper, we propose an algorithm for scheduling multi-user power consumption and implement it on an FPGA board, using the FPGA as the MCU. This algorithm for scheduling tasks with discrete power levels is based on dynamic programming and finds a scheduling solution close to the optimal one. We chose an FPGA as our system’s controller because an FPGA has low complexity, parallel processing capability, and a large number of I/O interfaces for further development, and is programmable in both software and hardware. In conclusion, the algorithm runs quickly on the FPGA board, and the solutions it obtains are good enough for consumers.
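To make the scheduling idea concrete, the following is a minimal sketch of dynamic programming over discrete power levels (an illustration under simplifying assumptions, not the paper's multi-user algorithm): a single task must consume a given amount of energy over the horizon, choosing one discrete power level per hour, and the objective is to minimize total cost under hourly prices.

def schedule(prices, levels, demand):
    """Minimize the cost of consuming `demand` energy units over len(prices)
    hours, picking one discrete power level per hour.
    Returns (min_cost, plan) or None if infeasible."""
    H = len(prices)
    INF = float("inf")
    # cost[h][e]: minimum cost to consume exactly e units in the first h hours.
    cost = [[INF] * (demand + 1) for _ in range(H + 1)]
    choice = [[0] * (demand + 1) for _ in range(H + 1)]
    cost[0][0] = 0.0
    for h in range(H):
        for e in range(demand + 1):
            if cost[h][e] == INF:
                continue
            for p in levels:  # discrete power levels, e.g. [0, 1, 2]
                if e + p > demand:
                    continue
                c = cost[h][e] + p * prices[h]
                if c < cost[h + 1][e + p]:
                    cost[h + 1][e + p] = c
                    choice[h + 1][e + p] = p
    if cost[H][demand] == INF:
        return None
    plan, e = [], demand  # walk the choice table backwards to recover the plan
    for h in range(H, 0, -1):
        plan.append(choice[h][e])
        e -= choice[h][e]
    plan.reverse()
    return cost[H][demand], plan

For example, schedule([0.30, 0.10, 0.10, 0.25], [0, 1, 2], 4) returns a cost of 0.4 with the plan [0, 2, 2, 0], concentrating consumption in the two cheap hours. The table has O(H × demand × |levels|) entries to fill, which is what makes this kind of approach attractive for a resource-constrained MCU.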
Abstract:
Quantitative meta-analyses of randomized clinical trials investigating the specific therapeutic efficacy of homeopathic remedies have yielded statistically significant differences compared to placebo. Since the remedies used contained mostly only very low concentrations of pharmacologically active compounds, these effects cannot be accounted for within the framework of current pharmacology. Theories to explain clinical effects of homeopathic remedies are partially based upon changes in diluent structure. To investigate the latter, we measured for the first time high-field (600/500 MHz) 1H T1 and T2 nuclear magnetic resonance relaxation times of H2O in homeopathic preparations, with concurrent contamination control by inductively coupled plasma mass spectrometry (ICP-MS). Homeopathic preparations of quartz (10c–30c, n = 21, corresponding to iterative dilutions of 100^-10–100^-30), sulfur (13x–30x, n = 18, 10^-13–10^-30), and copper sulfate (11c–30c, n = 20, 100^-11–100^-30) were compared to n = 10 independent controls each (analogously agitated dilution medium) in randomized and blinded experiments. In none of the samples did the concentration of any element analyzed by ICP-MS exceed 10 ppb. In the first measurement series (600 MHz), there was a significant increase in T1 for all samples as a function of time, and there were no significant differences between homeopathic potencies and controls. In the second measurement series (500 MHz), 1 year after preparation, we observed statistically significantly increased T1 relaxation times for homeopathic sulfur preparations compared to controls. Fifteen out of 18 correlations between sample triplicates were higher for controls than for homeopathic preparations. No conclusive explanation for these phenomena can be given at present. Possible hypotheses involve differential leaching from the measurement vessel walls or a change in water molecule dynamics, i.e., in rotational correlation time and/or diffusion. Homeopathic preparations thus may exhibit specific physicochemical properties that need to be determined in detail in future investigations.
Abstract:
There is a growing interest in simulating natural phenomena in computer graphics applications. Animating natural scenes in real time is one of the most challenging problems due to the inherent complexity of their structure, formed by millions of geometric entities, and the interactions that happen within. An example of a natural scenario needed for games or simulation programs is a forest. Forests are difficult to render because of the huge number of geometric entities and the large amount of detail to be represented. Moreover, the interactions between the objects (grass, leaves) and external forces such as wind are complex to model. In this paper we concentrate on the rendering of falling leaves at low cost. We present a technique that exploits graphics hardware in order to render thousands of leaves with different falling paths in real time and with low memory requirements.
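As an illustration of the low-memory idea, the following is a minimal CPU-side sketch (an assumption for illustration, not the paper's GPU technique): each leaf is reduced to a handful of random path parameters, so thousands of distinct trajectories can be evaluated in closed form at any time without storing per-frame state, which is the property that makes a GPU implementation attractive.

import math
import random

class Leaf:
    """A falling leaf described only by a few path parameters (hypothetical model)."""
    def __init__(self, rng):
        self.x0 = rng.uniform(-10.0, 10.0)       # release position (m)
        self.z0 = rng.uniform(-10.0, 10.0)
        self.h = rng.uniform(8.0, 15.0)          # release height (m)
        self.v = rng.uniform(0.4, 1.0)           # fall speed (m/s)
        self.amp = rng.uniform(0.2, 1.2)         # sway amplitude (m)
        self.freq = rng.uniform(0.5, 2.0)        # sway frequency (Hz)
        self.phase = rng.uniform(0.0, 2.0 * math.pi)

    def position(self, t):
        """Closed-form pose at time t; no per-frame state is stored."""
        y = max(0.0, self.h - self.v * t)        # descend, then rest on the ground
        sway = self.amp * math.sin(2.0 * math.pi * self.freq * t + self.phase)
        return (self.x0 + sway, y, self.z0)

rng = random.Random(42)
leaves = [Leaf(rng) for _ in range(10000)]       # thousands of distinct paths
print([leaf.position(3.0) for leaf in leaves[:3]])

Each leaf costs only seven floats of storage, so ten thousand leaves fit in well under a megabyte; on graphics hardware the same parameters could be evaluated per instance in a vertex shader.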
Abstract:
BACKGROUND Anesthetics and neuraxial anesthesia commonly result in vasodilation/hypotension. Norepinephrine counteracts this effect and thus allows for decreased intraoperative hydration. The authors investigated whether this approach could result in a reduced postoperative complication rate. METHODS In this single-center, double-blind, randomized, superiority trial, 166 patients undergoing radical cystectomy and urinary diversion were equally allocated to receive 1 ml·kg⁻¹·h⁻¹ of balanced Ringer's solution until the end of cystectomy and then 3 ml·kg⁻¹·h⁻¹ until the end of surgery, combined with a preemptive norepinephrine infusion at an initial rate of 2 µg·kg⁻¹·h⁻¹ (low-volume group; n = 83), or 6 ml·kg⁻¹·h⁻¹ of balanced Ringer's solution throughout surgery (control group; n = 83). The primary outcome was the in-hospital complication rate. Secondary outcomes were hospitalization time and 90-day mortality. RESULTS In-hospital complications occurred in 43 of 83 patients (52%) in the low-volume group and in 61 of 83 (73%) in the control group (relative risk, 0.70; 95% CI, 0.55-0.88; P = 0.006). The rates of gastrointestinal and cardiac complications were lower in the low-volume group than in the control group (5 [6%] vs. 31 [37%]; relative risk, 0.16; 95% CI, 0.07-0.39; P < 0.0001 and 17 [20%] vs. 39 [48%]; relative risk, 0.43; 95% CI, 0.26-0.60; P = 0.0003, respectively). The median hospitalization time was 15 days [range, 11 to 27 days] in the low-volume group and 17 days [range, 11 to 95 days] in the control group (P = 0.02). The 90-day mortality was 0% in the low-volume group and 4.8% in the control group (P = 0.12). CONCLUSION Restrictive-deferred hydration combined with preemptive norepinephrine infusion during radical cystectomy and urinary diversion significantly reduced the postoperative complication rate and hospitalization time.
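As an arithmetic check on the headline figure, the relative risk for the primary outcome follows directly from the reported counts:

RR = (43/83) / (61/83) = 0.518 / 0.735 ≈ 0.70

which matches the reported relative risk of 0.70 (95% CI, 0.55-0.88).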
Abstract:
AIM: The aim of this research is to assess the associations between subjective pubertal timing (SPT) and the onset of health-compromising behaviours among girls reporting an on-time objective pubertal timing (OPT). METHODS: Data were drawn from the Swiss SMASH 2002 survey, a self-administered questionnaire study conducted among a nationally representative sample of 7548 adolescents aged 16-20 years. From the 3658 girls in the initial sample, we selected only those (n = 1003) who provided information about SPT and who reported the average age at menarche, namely 13 years, considering this an on-time OPT. Bivariate and logistic analyses were conducted to compare the early, on-time and late SPT groups in terms of the onset of health-compromising behaviours. RESULTS: A perception of pubertal precocity was associated with sexual intercourse before age 16 [adjusted odds ratio (AOR): 2.10 (1.30-3.37)] and early use of illegal drugs other than cannabis [AOR: 2.55 (1.30-5.02)]. Conversely, girls perceiving their puberty as late were less likely to report intercourse before age 16 [AOR: 0.30 (0.12-0.75)]. CONCLUSION: Faced with an adolescent girl perceiving her puberty as early, the practitioner should investigate the existence of health-compromising behaviours even if her puberty is or was objectively on-time.
Abstract:
PURPOSE To compare time-efficiency in the production of implant crowns using a digital workflow versus the conventional pathway. MATERIALS AND METHODS This prospective clinical study used a crossover design that included 20 study participants receiving single-tooth replacements in posterior sites. Each patient received a customized titanium abutment plus a computer-aided design/computer-assisted manufacture (CAD/CAM) zirconia suprastructure (test group, using the digital workflow) and a standardized titanium abutment plus a porcelain-fused-to-metal crown (control group, using the conventional pathway). The start of the implant prosthetic treatment was established as the baseline. Time-efficiency analysis was defined as the primary outcome and was measured for every single clinical and laboratory work step in minutes. Statistical analysis was performed with the Wilcoxon rank sum test. RESULTS All crowns could be provided within two clinical appointments, independent of the manufacturing process. The mean total production time, as the sum of clinical plus laboratory work steps, was significantly different: the mean ± standard deviation (SD) time was 185.4 ± 17.9 minutes for the digital workflow and 223.0 ± 26.2 minutes for the conventional pathway (P = .0001). Therefore, digital processing for the overall treatment was 16% faster. Detailed analysis of the clinical treatment revealed a significantly reduced mean ± SD chair time of 27.3 ± 3.4 minutes for the test group compared with 33.2 ± 4.9 minutes for the control group (P = .0001). Similar results were found for the mean laboratory work time, which was significantly decreased: 158.1 ± 17.2 minutes for the test group vs 189.8 ± 25.3 minutes for the control group (P = .0001). CONCLUSION Only a few studies have investigated efficiency parameters of digital workflows compared with conventional pathways in implant dental medicine. This investigation shows that the digital workflow seems to be more time-efficient than the established conventional production pathway for fixed implant-supported crowns. Both clinical chair time and laboratory manufacturing steps could be effectively shortened with the digital process of intraoral scanning plus CAD/CAM technology.
Abstract:
The objectives of this research were (1) to study the effect of contact pressure, compression time, and liquid (moisture content of the fabric) on the transfer by sliding contact of non-fixed surface contamination to protective clothing constructed from uncoated, woven fabrics, (2) to study the effect of contact pressure, compression time, and liquid content on the subsequent penetration through the fabric, and (3) to determine if varying the type of contaminant changes the effect of contact pressure, compression time, and liquid content on the transfer by sliding contact and penetration of non-fixed surface contamination. It was found that the combined influence of the liquid (moisture content of the fabric), load (contact pressure), compression time, and their interactions significantly influenced the penetration of all three test agents, sucrose-14C, triolein-3H, and starch-14C, through 100% cotton fabric. The combined influence of the statistically significant main effects and their interactions increased the penetration of triolein-3H by 32,548%, sucrose-14C by 7,006%, and starch-14C by 1,900%.
Abstract:
Background. The high prevalence of obesity among children has spurred creation of a list of possible causative factors, including the advertising of foods of minimal nutritional value, a decrease in physical activity, and increased media use. Few studies show prevalence rates of these factors among large cohorts of children. Methods. Using data from the 2004-2005 School Physical Activity and Nutrition project (SPAN), a secondary analysis of 7907 4th-grade children (mean age 9.74 years) was conducted. In addition, a comic-book-based intervention that addressed advertised food consumption, physical activity, and media use was developed and evaluated using a pre-post test design among 4th-grade children in an urban school district. Results. Among a cohort of 4th-grade children across the state of Texas, children who had more than 2 hours of video game or computer time the previous day were more than twice as likely to drink soda and eat candy or pastries. In addition, children who watched more than 2 hours of TV the previous day were more than three times as likely to consume chips, punch, soda, candy, frozen desserts, or pastries (AOR 3.41, 95% CI: 1.58, 7.37). The comic-book-based intervention held great promise and acceptance among 4th-grade children. Outcome evaluation showed that while results moved in a positive direction, they were not statistically significant. Conclusion. Statistically significant associations were found between screen time and eating various types of advertised food. The comic-book intervention was widely accepted by the children exposed to it, and pre-post surveys indicated that they moved constructs in a positive direction. Further research is needed to look at more specific ways in which children are exposed to TV, and at the relationship of TV viewing time with their consumption of advertised foods. In addition, researchers should look at comic-book interventions more closely and attempt to utilize them in studies with a longer follow-up time.
Abstract:
Recent data have shown that the percentage of time spent preparing food has decreased during the past few years, and little information is known about how much time people spend grocery shopping. Food that is pre-prepared is often higher in calories and fat compared to foods prepared at home from scratch. It has been suggested that, because of the higher energy and total fat levels, increased consumption of pre-prepared foods compared to home-cooked meals can lead to weight gain, which in turn can lead to obesity. Nevertheless, to date no study has examined this relationship. The purpose of this study is to determine (i) the association between adult body mass index (BMI) and the time spent preparing meals, and (ii) the association between adult BMI and the time spent shopping for food. Data on food habits and body size were collected with a self-report survey of ethnically diverse adults between the ages of 17 and 70 at a large university. The survey was used to recruit people to participate in nutrition or appetite studies. Among other data, the survey collected demographic data (gender, race/ethnicity), minutes per week spent preparing meals, and minutes per week spent grocery shopping. Height and weight were self-reported and used to calculate BMI. The study population consisted of 689 subjects, of which 276 were male and 413 were female. The mean age was 23.5 years, with a median age of 21 years. The fraction of subjects with a BMI less than 24.9 was 65%, between 25 and 29.9 was 26%, and 30 or greater was 9%. Analysis of variance was used to examine associations between food preparation time and BMI. The results of the study showed that there was no statistically significant association between adult healthy weight, overweight, or obesity and either food preparation time or grocery shopping time. Of those in the sample who reported preparing food, the mean food preparation times per week for the healthy weight, overweight, and obese groups were 12.8 minutes, 12.3 minutes, and 11.6 minutes, respectively. Similarly, the mean weekly grocery shopping times for the healthy, overweight, and obese groups were 60.3 minutes per week (8.6 min/day), 61.4 minutes (8.8 min/day), and 57.3 minutes (8.2 min/day), respectively. Since this study was conducted on a university campus, it is assumed that most of the sample were students, and a percentage might have been utilizing meal plans on campus and thus would have reported little meal preparation or grocery shopping time. Further research should examine the relationships between meal preparation time and time spent shopping for food in a sample that is more representative of the general public. In addition, most people spent very little time preparing food, and thus health promotion programs for this population need to focus on strategies for preparing quick meals or eating in restaurants/cafeterias.
Abstract:
Considering the broader context of school reform that is seeking education strategies that might deliver substantial impact, this article examines four questions related to the policy and practice of expanding learning time: (a) why do educators find the standard American school calendar insufficient to meet students’ educational needs, especially those of disadvantaged students? (b) how do educators implement a longer day and/or year, addressing concerns about both educational quality and costs? (c) what does research report about outcomes of expanding time in schools? and (d) what are the future prospects for increasing the number of expanded-time schools? The paper examines these questions by considering research, policy, and practice at the national level and, throughout, by drawing upon additional evidence from Massachusetts, one of the leading states in the expanded-time movement. In considering the latter two questions, the article explores the knowns and unknowns related to expanded learning time and offers suggestions for further research.
Abstract:
The combined impacts of future scenarios of ocean acidification and global warming on the larvae of a cold-eurythermal spider crab, Hyas araneus L., were investigated in one of its southernmost populations (living around Helgoland, southern North Sea, 54°N) and one of its northernmost populations (Svalbard, North Atlantic, 79°N). Larvae were exposed at temperatures of 3, 9 and 15°C to present-day normocapnia (380 ppm CO2) and to CO2 conditions expected for the near or medium-term future (710 ppm by 2100 and 3000 ppm CO2 by 2300 and beyond). Larval development time and biochemical composition were studied in the larval stages Zoea I, Zoea II, and Megalopa. Permanent differences in instar duration between the two populations were detected in all stages, likely as a result of evolutionary temperature adaptation. With the exception of Zoea II at 3°C, development in all instars from Svalbard was delayed compared to those from Helgoland, under all CO2 conditions. Most prominently, development was much longer, and fewer specimens metamorphosed to the first crab instar, in the Megalopa from Svalbard than from Helgoland. Enhanced CO2 levels (710 and particularly 3000 ppm) caused extended duration of larval development and reduced larval growth (measured as dry mass) and fitness (a decreasing C/N ratio, a proxy of lipid content). Such effects were strongest in the zoeal stages in Svalbard larvae, and during the Megalopa instar in Helgoland larvae.
Abstract:
Distributed real-time embedded systems are becoming increasingly important to society. More demands will be made on them and greater reliance will be placed on the delivery of their services. A relevant subset of them is high-integrity or hard real-time systems, where failure can cause loss of life, environmental harm, or significant financial loss. Additionally, the evolution of communication networks and paradigms, together with the need for greater processing power and fault tolerance, has motivated the interconnection of electronic devices; many of these communication links can transfer data at high speed. The concept of distributed systems emerged to describe systems whose different parts are executed on several nodes that interact with each other via a communication network. Java’s popularity, facilities and platform independence have made it an interesting language for the real-time and embedded community. This was the motivation for the development of the RTSJ (Real-Time Specification for Java), a language extension intended to allow the development of real-time systems. The use of Java in the development of high-integrity systems requires strict development and testing techniques. However, the RTSJ includes a number of language features that are forbidden in such systems. In the context of the HIJA project, the HRTJ (Hard Real-Time Java) profile was developed to define a robust subset of the language that is amenable to static analysis for high-integrity system certification. Currently, a specification is being developed under the Java Community Process (JSR-302); its purpose is to define the capabilities needed to create safety-critical applications with Java technology, called Safety Critical Java (SCJ). However, neither the RTSJ nor its profiles provide facilities to develop distributed real-time applications. This is an important issue, as most current and future systems will be distributed. The Distributed RTSJ (DRTSJ) Expert Group was created under the Java Community Process (JSR-50) in order to define appropriate abstractions to overcome this problem; currently there is no formal specification. The aim of this thesis is to develop a communication middleware that is suitable for the development of distributed hard real-time systems in Java, based on the integration between the RMI (Remote Method Invocation) model and the HRTJ profile. It has been designed and implemented keeping in mind the main requirements, such as predictability and reliability in timing behavior and resource usage. The design starts with the definition of a computational model which identifies, among other things, the communication model, the most appropriate underlying network protocols, the analysis model, and a subset of Java for hard real-time systems. In the design, remote references are the basic means for building distributed applications; they are associated with all the non-functional parameters and resources needed to implement synchronous or asynchronous remote invocations with real-time attributes. The proposed middleware separates resource allocation from the execution itself by defining two phases and a specific threading mechanism that guarantees a suitable timing behavior. It also includes mechanisms to monitor the functional and the timing behavior. It provides independence from the network protocol by defining a network interface and modules. The JRMP protocol was modified to include two phases, non-functional parameters, and message size optimizations.
Although serialization is one of the fundamental operations to ensure proper data transmission, current implementations are not suitable for hard real-time systems and there are no alternatives. This thesis proposes a predictable serialization that introduces a new compiler to generate optimized code according to the computational model. The proposed solution has the advantage of allowing us to schedule the communications and to adjust the memory usage at compilation time. In order to validate the design and the implementation, a demanding validation process was carried out, with emphasis on the functional behavior, the memory usage, the processor usage (the end-to-end response time and the response time in each functional block), and the network usage (actual consumption compared with the calculated consumption). The results obtained in an industrial application developed by Thales Avionics (a Flight Management System) and in exhaustive tests show that the design and the prototype are reliable for industrial applications with strict timing requirements.
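The separation between resource allocation and execution can be illustrated with a short sketch. This is a deliberately simplified illustration in Python of the two-phase idea, not the thesis' Java/RMI code, and all names in it are hypothetical: resources are reserved when the remote reference is created, so the invocation itself performs no allocation and its timing remains analyzable.

class RemoteReference:
    """Hypothetical remote reference carrying non-functional parameters."""
    def __init__(self, endpoint, deadline_ms, max_message_bytes):
        # Phase 1 (binding time): reserve buffers and network resources
        # against the declared non-functional parameters.
        self.endpoint = endpoint
        self.deadline_ms = deadline_ms
        self.buffer = bytearray(max_message_bytes)   # preallocated, fixed size

    def invoke(self, method, payload):
        # Phase 2 (run time): no allocation happens here, which keeps the
        # timing behavior predictable; oversize messages are rejected rather
        # than grown dynamically.
        if len(payload) > len(self.buffer):
            raise ValueError("message exceeds reserved buffer")
        self.buffer[: len(payload)] = payload
        # ... marshal from self.buffer and send over self.endpoint ...
        return "%s sent to %s within a %d ms budget" % (
            method, self.endpoint, self.deadline_ms)

ref = RemoteReference("node-1:2049", deadline_ms=5, max_message_bytes=1024)
print(ref.invoke("getPosition", b"\x00\x01"))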
Abstract:
ABSTRACT When we interact with the environment in our daily life (using a toothbrush, opening doors, using cell-phones, etc.), or in professional situations (medical interventions, manufacturing processes, etc.), we typically perform dexterous manipulations that involve multiple fingers and the palms of both hands. Therefore, multi-finger haptic methods can provide a realistic and natural human-machine interface to enhance immersion when interacting with simulated or remote environments. Most commercial devices allow haptic interaction with only one contact point, which may be sufficient for some exploration or palpation tasks but is not enough to perform advanced object manipulations such as grasping. In this thesis, I investigate the mechanical design, control and applications of a modular haptic device that can provide force feedback to the index, thumb and middle fingers of the user. The designed mechanical device is optimized with a multi-objective design function to achieve low inertia, a large workspace, high manipulability, and force feedback of up to 3 N within the workspace; the bandwidth and rigidity of the device are assessed through simulation and real experimentation. One of the most important areas when designing haptic devices is the end-effector, since it is in contact with the user. In this thesis, the design and evaluation of a thimble-like, lightweight, user-adaptable, and cost-effective end-effector that incorporates four contact force sensors is described. This design allows estimation of the forces applied by a user during manipulation of virtual and real objects. The design of a real-time, modular control architecture for multi-finger haptic interaction is described, and the requirements for control of multi-finger haptic devices are explored: a large number of signals have to be acquired, processed and sent over the network, and mathematical computations (the device's direct and inverse kinematics, the Jacobian, grasp detection algorithms, etc.)
have to be calculated in real time to assure the required high fidelity for the haptic interaction. The hardware control architecture has different modules and consists of an FPGA for the low-level controller and an RT controller for managing all the complex calculations (Jacobian, kinematics, etc.); this provides a compact and scalable solution for the required high computation capabilities, assuring a correct 1 kHz frequency for the control loop. A set-up for dexterous virtual and real manipulation is described. Moreover, a new algorithm named the iterative kinematic decoupling method was implemented to solve the inverse kinematics of a robotic manipulator. In order to understand the importance of multi-modal interaction including haptics, a subject study was carried out to look for sensory stimuli that correlate with fast response times and enhanced accuracy. This experiment was carried out in collaboration with neuroscientists from the Technion Israel Institute of Technology. By comparing the grasping response times in unimodal (auditory, visual, and haptic) events with the response times in events with bimodal and trimodal combinations, it is concluded that in grasping tasks the synchronized motion of the fingers to generate the grasping response relies on haptic cues. This processing-speed advantage of haptic cues suggests that multimodal haptic virtual environments are superior in generating motor contingencies, enhancing the plausibility of events. Applications that include haptics provide users with more time at the cognitive stages to fill in missing information creatively and form a richer experience. A major application of haptic devices is the design of new simulators to train manual skills for the medical sector. In collaboration with physical therapists from Griffith University in Australia, we developed a simulator to allow hand rehabilitation manipulations. First, the non-linear stiffness properties of the metacarpophalangeal joint of the index finger were estimated by using the designed end-effector; these parameters are implemented in a scenario that simulates the behavior of the human hand and that allows haptic interaction through the designed haptic device. The potential application of this work is related to educational and medical training purposes. In this thesis, new methods to simultaneously control the position and orientation of a robotic manipulator and the grasp of a robotic hand when interacting with large real environments are also studied. The reachable workspace is extended by automatically switching between rate and position control modes. Moreover, the human hand gesture is recognized by reading the relative movements of the index, thumb and middle fingers of the user during the early stages of the approximation-to-the-object phase and is then mapped to the robotic hand actuators. These methods are validated in dexterous manipulation of objects with a robotic manipulator and different robotic hands. This work is the result of a research collaboration with researchers from the Harvard BioRobotics Laboratory. The developed experiments show that the overall task time is reduced and that the developed methods allow for full dexterity and correct completion of dexterous manipulations.
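The workspace-extension idea lends itself to a compact sketch. The following Python fragment illustrates the general position/rate switching technique named in the abstract, not the thesis' implementation; the thresholds and gains are hypothetical values.

import math

WORKSPACE_RADIUS = 0.10  # haptic device workspace radius in meters (hypothetical)
RATE_ZONE = 0.08         # beyond this radius, blend into rate control (hypothetical)
RATE_GAIN = 2.0          # maps boundary penetration to drift velocity (hypothetical)

offset = [0.0, 0.0, 0.0]  # accumulated drift of the device-to-robot mapping

def robot_target(device_pos, dt):
    """Map the device position to a robot target, extending the workspace."""
    r = math.sqrt(sum(c * c for c in device_pos))
    if r >= RATE_ZONE:
        # Rate mode: penetration past the inner boundary drives a velocity,
        # so the robot keeps moving while the device stays inside its limits.
        scale = RATE_GAIN * (r - RATE_ZONE) / r
        for i in range(3):
            offset[i] += scale * device_pos[i] * dt
    # Position mode (always active): the robot tracks the device position
    # plus the accumulated drift.
    return tuple(d + o for d, o in zip(device_pos, offset))

# Holding the device near the workspace edge for one second drifts the
# mapping outward, letting the robot reach beyond the device's range.
for _ in range(1000):
    target = robot_target((0.09, 0.0, 0.0), dt=0.001)  # 1 kHz control loop
print(target)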
Abstract:
A cell’s ability to effectively communicate with a neighboring cell is essential for tissue function and ultimately for the organism to which it belongs. One important mode of intercellular communication is the release of soluble cyto- and chemokines. Once secreted, these signaling molecules diffuse through the surrounding medium and eventually bind to neighboring cells’ receptors, whereby the signal is received. This mode of communication is governed both by physicochemical transport processes and by cellular secretion rates, which in turn are determined by genetic and biochemical processes. The characteristics of transport processes have been known for some time, and information on the genetic and biochemical determinants of cellular function is rapidly growing. Simultaneous quantitative analysis of the two is required to systematically evaluate the nature and limitations of intercellular signaling. The present study uses a solitary cell model to estimate effective communication distances over which a single cell can meaningfully propagate a soluble signal. The analysis reveals that: (i) this process is governed by a single key dimensionless group that is a ratio of biological parameters and physicochemical determinants; (ii) this ratio has a maximal value; (iii) for realistic values of the parameters contained in this dimensionless group, it is estimated that the domain within which a single cell can effectively communicate is ≈250 μm in size; and (iv) communication within this domain takes place in 10–30 minutes. These results have fundamental implications for the interpretation of organ physiology and for engineering tissue function ex vivo.
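For orientation, the quoted scales are consistent with a textbook steady-state diffusion estimate (a reconstruction under assumed parameter values, not necessarily the paper's exact formulation). A cell secreting at rate F into a medium with diffusivity D sets up the steady-state concentration field

c(r) = F / (4·π·D·r)

so the signal stays above a receptor detection threshold c_min out to r_max = F / (4·π·D·c_min), a ratio of biological parameters (F, c_min) to a physicochemical determinant (D). The associated diffusion time over r ≈ 250 μm, taking D ≈ 10^-6 cm²/s for a protein in aqueous medium, is t ≈ r²/D ≈ (2.5×10^-2 cm)² / (10^-6 cm²/s) ≈ 625 s, i.e., on the order of 10 minutes, consistent with the 10–30 minute window reported above.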
Abstract:
High-quality software, delivered on time and on budget, constitutes a critical part of most products and services in modern society. Our government has invested billions of dollars to develop software assets, often redeveloping the same capability many times. Recognizing the waste involved in redeveloping these assets, in 1992 the Department of Defense issued the Software Reuse Initiative. The vision of the Software Reuse Initiative was "To drive the DoD software community from its current 're-invent the software' cycle to a process-driven, domain-specific, architecture-centric, library-based way of constructing software." Twenty years after this initiative was issued, there is evidence of the vision beginning to be realized in nonembedded systems. However, virtually every large embedded system undertaken has incurred large cost and schedule overruns. Investigations into the root cause of these overruns implicate reuse. Why are we seeing improvements in the outcomes of these large-scale nonembedded systems and worse outcomes in embedded systems? This question is the foundation for this research. The experiences of the aerospace industry have led to a number of questions about reuse and how the industry is employing reuse in embedded systems. For example, does reuse in embedded systems yield the same outcomes as in nonembedded systems? Are the outcomes positive? If the outcomes are different, it may indicate that embedded systems should not use data from nonembedded systems for estimation. Are embedded systems using the same development approaches as nonembedded systems? Does the development approach make a difference? If embedded systems develop software differently from nonembedded systems, it may mean that the same processes do not apply to both types of systems. What about the reuse of different artifacts? Perhaps there are certain artifacts that, when reused, contribute more or are more difficult to use in embedded systems. Finally, what are the success factors and obstacles to reuse? Are they the same in embedded systems as in nonembedded systems? The research in this dissertation comprises a series of empirical studies using professionals in the aerospace and defense industry as its subjects. The main focus has been to investigate the reuse practices of embedded systems professionals and nonembedded systems professionals and to compare the methods and artifacts used against the outcomes. The research has followed a combined qualitative and quantitative design approach. The qualitative data were collected by surveying software and systems engineers, interviewing senior developers, and reading numerous documents and other studies. Quantitative data were derived by converting survey and interview respondents' answers into codes that could be counted and measured. From the search of the existing empirical literature, we learned that reuse in embedded systems is in fact significantly different from reuse in nonembedded systems, particularly in effort under a model-based development approach, and in quality where the development approach was not specified. The questionnaire showed differences between the development approaches used in embedded projects and nonembedded projects; in particular, embedded systems were significantly more likely to use a heritage/legacy development approach. There was also a difference in the artifacts used, with embedded systems more likely to reuse hardware, test products, and test clusters.
Nearly all the projects reported reusing code, but the questionnaire showed that the reuse of code brought mixed results. One of the differences expressed by the respondents to the questionnaire was the difficulty of reusing code in embedded systems when the platform changed. The semistructured interviews were performed to explain why the phenomena seen in the review of literature and in the questionnaire were observed. We asked respected industry professionals, such as senior fellows, fellows and distinguished members of technical staff, about their experiences with reuse. We learned that many embedded systems used heritage/legacy development approaches because their systems had been around for many years, before models and modeling tools became available. We learned that reuse of code is beneficial primarily when the code does not require modification, but, especially in embedded systems, once it has to be changed, reuse of code yields few benefits. Finally, while platform independence is a goal for many in nonembedded systems, it is certainly not a goal for embedded systems professionals, and in many cases it is a detriment. However, both embedded and nonembedded systems professionals endorsed the idea of platform standardization. Finally, we conclude that while reuse in embedded systems and nonembedded systems differs today, the two are converging. As heritage embedded systems are phased out, models become more robust, and platforms are standardized, reuse in embedded systems will become more like reuse in nonembedded systems.