854 results for GLUCOSE MONITORING-SYSTEM
Abstract:
Background: To determine the characteristics of clinical care offered to type 1 diabetic patients across the four distinct regions of Brazil, which present geographic and contrasting socioeconomic differences. Glycemic control, the prevalence of cardiovascular risk factors, screening for chronic complications and the frequency with which the recommended treatment goals were met, according to the American Diabetes Association guidelines, were evaluated. Methods: This was a cross-sectional, multicenter study conducted from December 2008 to December 2010 in 28 secondary and tertiary care public clinics in 20 Brazilian cities in the north/northeast, mid-west, southeast and south regions. The data were obtained from 3,591 patients (56.0% female and 57.1% Caucasian) aged 21.2 ± 11.7 years with a disease duration of 9.6 ± 8.1 years (<1 to 50 years). Results: Overall, 18.4% of patients had HbA1c levels <7.0%, and 47.5% had HbA1c levels ≥9%. HbA1c levels were associated with lower economic status, female gender, age and the daily frequency of self-blood glucose monitoring (SBGM), but not with insulin regimen or geographic region. Hypertension was more frequent in the mid-west (32%) and north/northeast (25%) than in the southeast (19%) and south (17%) regions (p<0.001). More patients from the southeast region achieved LDL cholesterol goals and were treated with statins (p<0.001). Fewer patients from the north/northeast and mid-west regions were screened for retinopathy and nephropathy than patients from the south and southeast. Patients from the south/southeast regions had more intensive insulin regimens than patients from the north/northeast and mid-west regions (p<0.001). The most common insulin therapy combination was intermediate-acting with regular human insulin, mainly in the north/northeast region (p<0.001). The combination of insulin glargine with lispro or glulisine was more frequently used in the mid-west region (p<0.001).
Patients from the north/northeast region were younger, more often non-Caucasian and of lower economic status, used continuous subcutaneous insulin infusion less, performed SBGM less frequently and were less often overweight/obese (p<0.001). Conclusions: A majority of patients, mainly in the north/northeast and mid-west regions, did not meet metabolic control goals and were not screened for diabetes-related chronic complications. These results should guide governmental health policy decisions, specific to each geographic region, to improve diabetes care and decrease the negative impact diabetes has on the public health system.
Abstract:
Between 1992 and 1995, about 3.5 million hadronic Z decays were collected by the DELPHI detector at CERN. These data were used to measure the production and lifetime of the beauty strange baryon Ξb in the inclusive decay channel Ξb → Ξ-ℓ-X. The Ξ- baryon was reconstructed through the decay Ξ- → Λπ-, using a constrained-fit method for cascade decays. An iterative discriminant analysis was used for the Ξb selection. A search for the Ξb baryon was also performed using an alternative method of reconstructing the Ξ- baryon. A measurement of the production of the charmed strange baryon Ξc in the decay channel Ξc → Ξ-π+ using the same data is also presented. Finally, the radiation monitoring system of the Silicon Microstrip Tracker in the DØ detector is studied and used to estimate the radiation dose received by the silicon detector during normal running conditions of the Tevatron accelerator.
Abstract:
In this work, we provide a passive location monitoring system for IEEE 802.15.4 signal emitters. The system adopts software-defined radio techniques to passively overhear IEEE 802.15.4 packets and to extract power information from baseband signals. In our system, we provide a new ranging model based on nonlinear regression. After obtaining distance information, a Weighted Centroid (WC) algorithm is adopted to locate users. In WC, each weight is inversely proportional to the nth power of the propagation distance, and the degree n is obtained from some initial measurements. We evaluate our system in a 16 m × 18 m area with complex indoor propagation conditions and achieve a median error of 2.1 m with only 4 anchor nodes.
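The weighted-centroid step described above can be sketched as follows; the anchor layout, ranged distances and the value of n are illustrative assumptions, not the paper's measured parameters.

```python
import math

def weighted_centroid(anchors, distances, n=2.0):
    """Estimate a 2-D position as a weighted centroid of anchor nodes.

    Each weight is inversely proportional to the n-th power of the
    estimated propagation distance, as in the WC algorithm above.
    anchors   : list of (x, y) anchor coordinates
    distances : ranged distances to each anchor (same order)
    n         : degree obtained from initial measurements (assumed here)
    """
    weights = [1.0 / (d ** n) for d in distances]
    total = sum(weights)
    x = sum(w * ax for w, (ax, ay) in zip(weights, anchors)) / total
    y = sum(w * ay for w, (ax, ay) in zip(weights, anchors)) / total
    return x, y

# Example with 4 anchor nodes at the corners of a 16 m x 18 m area;
# the distances would come from the nonlinear-regression ranging model.
anchors = [(0, 0), (16, 0), (0, 18), (16, 18)]
distances = [5.0, 12.0, 14.0, 18.0]
print(weighted_centroid(anchors, distances))
```

With equal distances to all anchors the estimate degenerates to the plain centroid of the area, which is a useful sanity check on the weighting.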
Abstract:
PLACENTAL GLUCOSE TRANSPORTER (GLUT)-1 REGULATION IN PREECLAMPSIA Camilla Marini a,b, Benjamin P. Lüscher a,b, Marianne Jörger-Messerli a,b, Ruth Sager a,b, Xiao Huang c, Jürg Gertsch c, Matthias A. Hediger c, Christiane Albrecht c, Marc U. Baumann a,c, Daniel V. Surbek a,c a Department of Obstetrics and Gynecology, University Hospital of Bern, Bern, Switzerland; b Department of Clinical Research, University of Bern, Bern, Switzerland; c Institute for Biochemistry and Molecular Medicine, University of Bern, Bern, Switzerland Objectives: Glucose is a primary energy source for the fetus. The absence of significant gluconeogenesis in the fetus means that fetal uptake of this vital nutrient depends on maternal supply and subsequent transplacental transport. Altered expression and/or function of placental transporters may affect the intrauterine environment and could compromise fetal and maternal well-being. We speculated that pre-eclampsia (PE) impairs the placental glucose transport system. Methods: Placentae were obtained after elective caesarean sections following normal and pre-eclamptic pregnancies. Syncytial basal membrane (BM) and apical microvillous membrane (MVM) fractions were prepared using differential ultracentrifugation and magnesium precipitation. Protein expression was assessed by western blot analysis. mRNA levels in whole villous tissue lysate were quantified by real-time PCR. To assess glucose transport activity, a radiolabeled substrate uptake assay and a transepithelial transport model using primary cytotrophoblasts were established. Results: GLUT1 mRNA expression was not changed in PE compared to control, whereas protein expression was significantly down-regulated. Glucose uptake into syncytial microvesicles was reduced in PE compared to control.
In a transepithelial transport model, phloretin-mediated inhibition of GLUT1 at the apical side of primary cytotrophoblasts showed a 44% reduction of transepithelial glucose transport at the IC50. Conclusions: GLUT1 is down-regulated at the protein and functional levels in PE compared to control. That inhibition of apical GLUT1 alters glucose transport activity indicates that transplacental glucose transport might be regulated on the apical side of the syncytiotrophoblast. These results may help to better understand the regulation of the GLUT1 transporter and, in the future, to develop preventive strategies to modulate fetal programming and thereby reduce the incidence of disease for both the mother and her child later in life.
Abstract:
Placental Glucose Transporter (GLUT1) Expression in Pre-Eclampsia. INTRODUCTION: Glucose is the most important substrate for fetal growth. Indeed, there is no significant de novo glucose synthesis in the fetus, and fetal uptake of glucose relies on maternal supply and transplacental transport. Therefore, a defective placental transporter system may affect the intrauterine environment, compromising fetal as well as maternal well-being. Along these lines, we speculated that the placental glucose transport system could be impaired in pre-eclampsia (PE). METHODS: Placentae were obtained after elective caesarean sections following normal and pre-eclamptic pregnancies. Syncytial basal membrane (BM) and apical microvillous membrane (MVM) fractions were prepared using differential ultracentrifugation and magnesium precipitation. Protein expression was assessed by western blot. mRNA levels were quantified by quantitative real-time PCR. A radiolabeled substrate uptake assay was established to assess glucose transport activity. FACS analysis was performed to check the shape of the MVM vesicles. Statistical analysis was performed using a one-way ANOVA test. RESULTS: GLUT1 protein levels were down-regulated (by 70%; P<0.01) in pre-eclamptic placentae compared to control placentae. These data are in line with the reduced glucose uptake in MVM prepared from pre-eclamptic placentae. Of note, GLUT1 mRNA levels did not differ between placentae affected by PE and normal placentae, suggesting that GLUT1 levels are post-transcriptionally regulated. FACS analysis of MVM vesicles from both normal and pre-eclamptic placentae showed equal heterogeneity in the complexes formed. This excluded the possibility that the altered glucose uptake observed in pre-eclamptic MVM was caused by a different shape of these vesicles. CONCLUSIONS: Protein and functional studies of GLUT1 in MVM suggest that glucose transport between mother and fetus might be defective in pre-eclampsia.
To further investigate this important biological aspect, we will increase the number of samples obtained from patients and use primary cells to study the transepithelial transport system in vitro.
Abstract:
The potential effects of climate change on natural risks are widely discussed, but formulating strategies to adapt risk management practice to climate change requires knowledge of the related risks for people and economic values. The main goals of this work were (1) the development of a method for analysing and comparing risks induced by different natural hazard types, (2) highlighting the most relevant natural hazard processes and related damages, (3) the development of an information system for monitoring the temporal development of natural hazard risk and (4) the visualisation of the resulting information for the wider public. A comparative exposure analysis provides the basis for pointing out the hot spots of natural hazard risk in the province of Carinthia, Austria. An analysis of flood risks in all municipalities provides the basis for setting priorities in the planning of flood protection measures. The methods form the basis for a monitoring system that periodically observes the temporal development of natural hazard risks. This makes it possible, firstly, to identify situations in which natural hazard risks are rising and, secondly, to single out the factors most responsible for the increase.
Abstract:
Smartphone app for carbohydrate estimation. New technologies such as blood glucose sensors and modern insulin pumps have substantially shaped the therapy of type 1 diabetes (T1D) in recent years. Owing to their rapid technical development, smartphones offer a further platform for applications supporting T1D therapy. GoCARB is a system developed for carbohydrate estimation for people with T1D. For end users, the basis is a smartphone with a camera. The calculation requires two photographs of a meal arranged on a plate, taken with the smartphone from different angles, as well as a reference card placed next to the plate. The carbohydrate estimation is based on a computer-vision program that recognizes the meal items by their color and texture. The volume of the meal is determined by means of a three-dimensionally computed model. By recognizing the type of food and its volume, GoCARB can calculate the carbohydrate content with the help of nutrition tables. An image database of more than 5,000 meals was created and used for the development of the system. Summary: The GoCARB system is currently undergoing clinical evaluation and is not yet available to patients.
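The final computation step, from recognized food type and estimated volume to carbohydrate grams via nutrition tables, can be sketched roughly as below; the food names, densities and carbohydrate values are hypothetical placeholders, not GoCARB's actual database.

```python
# Hypothetical nutrition table: food -> (density g/cm^3, carbs g per 100 g).
# All numbers are illustrative assumptions, not GoCARB's data.
NUTRITION = {
    "rice":     (0.80, 28.0),
    "potatoes": (0.65, 17.0),
    "pasta":    (0.60, 25.0),
}

def carbs_grams(food, volume_cm3):
    """Carbohydrate content of a recognized food item of a given volume."""
    density, carbs_per_100g = NUTRITION[food]
    weight_g = density * volume_cm3        # volume -> weight via density
    return weight_g * carbs_per_100g / 100.0

# 300 cm^3 of rice recognized on the plate:
print(round(carbs_grams("rice", 300.0), 1))  # -> 67.2
```

The same lookup would be repeated per recognized item on the plate and summed for the meal total.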
Abstract:
Following the measurement procedures recommended by the ICNIRP, this communication proposes a measurement procedure based on the maximum peak values of the equivalent plane-wave power density. This procedure has been included in a project under development in Leganés, Spain, which plans to deploy a real-time RF monitoring system to provide the city with a useful tool for adapting environmental EM conditions to the newly approved regulations. A first stage consisting of 105 measurement points has been completed, and all measured values are below the regulatory threshold.
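The quantity the procedure relies on, the equivalent plane-wave power density, follows directly from the measured field strength and the impedance of free space; a minimal sketch (the 41 V/m example is a generic ICNIRP-style reference level, not data from the Leganés project):

```python
# Equivalent plane-wave power density: S_eq = E^2 / Z0,
# with Z0 ~= 377 ohms the impedance of free space.
Z0 = 376.73  # ohms

def plane_wave_power_density(e_field_v_per_m):
    """Return S_eq in W/m^2 for an RMS field strength in V/m."""
    return e_field_v_per_m ** 2 / Z0

# Example: a field strength around 41 V/m corresponds to roughly 4.5 W/m^2.
print(round(plane_wave_power_density(41.0), 2))
```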
Abstract:
Geodetic volcano monitoring in Tenerife has mainly focused on the Las Cañadas Caldera, where a geodetic micronetwork and a levelling profile are located. A sensitivity test of this geodetic network showed that it should be extended to cover the whole island for volcano monitoring purposes. Furthermore, InSAR made it possible to detect two unexpected movements that were beyond the scope of the traditional geodetic network. These two facts prompted us to design a GPS network covering the whole of Tenerife, which was observed in August 2000. The results obtained were accurate to one centimetre and confirm one of the deformations, although they were not definitive enough to confirm the second one. Furthermore, new cases of possible subsidence have been detected in areas where InSAR could not be used to measure deformation due to low coherence. A first modelling attempt has been made using a very simple model, and its results seem to indicate that the deformation observed and the groundwater level variation in the island may be related. Future observations will be necessary for further validation, to study the time evolution of the displacements, to carry out interpretation work using different types of data (gravity, gases, etc.) and to develop models that represent the island more closely. The results obtained are important because they might affect geodetic volcano monitoring on the island, which will only be really useful if it is capable of distinguishing between displacements that might be linked to volcanic activity and those produced by other causes. One important result of this work is that a new geodetic monitoring system based on two complementary techniques, InSAR and GPS, has been set up on Tenerife. This is the first time that the whole surface of any of the volcanic Canary Islands has been covered with a single network for this purpose.
This research has demonstrated the need for similar studies in the Canary Islands, at least on the islands that pose a greater risk of volcanic reactivation, such as Lanzarote and La Palma, where InSAR techniques have already been used.
Abstract:
Here, an inertial sensor-based monitoring system for measuring and analyzing upper limb movements is presented. The final goal is the integration of this motion-tracking device within a portable rehabilitation system for brain injury patients. A set of four inertial sensors mounted on a special garment worn by the patient provides the quaternions representing the orientation in space of the patient's upper limb. A kinematic model is built to estimate 3D upper limb motion for accurate therapeutic evaluation. The human upper limb is represented as a kinematic chain of rigid bodies with three joints and six degrees of freedom. The system has been validated by co-registration of movements with a commercial optoelectronic tracking system. Results obtained at the Institut Guttmann Neurorehabilitation Hospital show a high correlation between the signals provided by the two devices.
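As a rough illustration of how segment orientations can be derived from such sensor quaternions, the sketch below composes joint rotations along a kinematic chain with the Hamilton product; it is a generic example under simplified assumptions, not the authors' implementation.

```python
import math

def q_mul(a, b):
    """Hamilton product of quaternions in (w, x, y, z) convention."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def chain_orientation(relative_quats):
    """Compose joint-to-joint rotations from proximal to distal segment."""
    q = (1.0, 0.0, 0.0, 0.0)  # identity: global reference frame
    for rq in relative_quats:
        q = q_mul(q, rq)
    return q

# Two successive 90-degree rotations about z (e.g. shoulder then elbow)
# compose to ~180 degrees about z, i.e. approximately (0, 0, 0, 1).
half = math.sqrt(0.5)
q90z = (half, 0.0, 0.0, half)
print(chain_orientation([q90z, q90z]))
```

A real system would additionally calibrate sensor-to-segment alignment and use the composed orientations to place each rigid body of the six-degree-of-freedom chain.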
Abstract:
Temperature is a first-class design concern in modern integrated circuits. The important increase in power densities associated with recent technology generations has led to the appearance of thermal gradients and hot spots during run-time operation. Temperature impacts several circuit parameters such as speed, cooling budgets, reliability, power consumption, etc. In order to fight these negative effects, dynamic thermal management (DTM) techniques adapt the behavior of the chip relying on the information of a monitoring system that provides run-time thermal information of the die surface. The field of on-chip temperature monitoring has drawn the attention of the scientific community in recent years and is the object of study of this thesis. This thesis approaches on-chip temperature monitoring from different perspectives and levels, providing solutions to some of the most important issues. The physical and circuit levels are covered with the design and characterization of two novel temperature sensors specially tailored for DTM purposes. The first sensor is based on a mechanism that obtains a pulse whose width varies with the temperature dependence of the leakage currents. In a nutshell, a circuit node is charged and subsequently left floating so that it discharges through the subthreshold currents of a transistor; the time the node takes to discharge is the width of the pulse. Since the width of the pulse displays an exponential dependence on the temperature, the conversion into a digital word is realized by means of a logarithmic counter that performs both the time-to-digital conversion and the linearization of the output. The structure resulting from this combination of elements is implemented in a 0.35 µm technology and is characterized by a very reduced area, 10,250 nm², and power consumption, 1.05-65.5 nW at 5 samples/s; these figures outperformed all previous works at the time of first publication and, at the time of publication of this thesis, still outperform all previous implementations in the same technology node. Concerning accuracy, the sensor exhibits good linearity; even without calibration it displays a 3σ error of 1.97 °C, appropriate for DTM applications. The sensor is completely compatible with standard CMOS processes; this fact, along with its tiny area and power overhead, makes it especially suitable for integration in a DTM monitoring system with a collection of on-chip monitors distributed across the chip. The exacerbated process fluctuations of recent technology nodes jeopardize the linearity characteristics of the first sensor. In order to overcome these problems, a new temperature-inferring technique is proposed. It also relies on the thermal dependencies of leakage currents used to discharge a floating node, but now the result comes from the ratio of two different measurements, in one of which a characteristic of the discharging transistor (the gate voltage) is altered. This ratio proves to be very robust against process variations and displays a more than sufficient linearity with temperature: a 3σ error of 1.17 °C considering process variations and performing two-point calibration. The implementation of the sensing part based on this new technique raises several design issues, such as the generation of a voltage reference independent of process variations, which are analyzed in depth in the thesis. For the time-to-digital conversion, the same digitization structure as in the first sensor is employed. A completely new standard-cell library targeting low area and power overhead was built from scratch to implement the digitization part. Putting all the pieces together, a complete sensor system is achieved that is characterized by an ultra-low energy per conversion of 48-640 pJ and an area of 0.0016 mm²; this figure outperforms all previous works. To prove this statement, a thorough comparison with over 40 sensor proposals from the scientific literature is performed. Moving up to the system level, the third contribution is centered on the modeling of a monitoring system consisting of a set of thermal sensors distributed across the chip. All previous works from the literature target maximizing the accuracy of the system with the minimum number of monitors. In contrast, new quality metrics are introduced apart from just the number of sensors: power consumption, sampling frequency, interconnection costs and the possibility of choosing among different types of monitors are also considered. The model feeds a simulated annealing algorithm that receives the thermal information of a system, its physical properties, area, power and interconnection constraints, and a collection of monitor types; the algorithm yields the selected type of monitor, the number of monitors, their positions and the optimum sampling rate. To prove its validity, the algorithm is tested on the Alpha 21364 processor under several constraint configurations. Compared with previous works in the literature, the modeling presented here is the most complete. Finally, the last contribution targets the networking level: given an allocated set of temperature monitors, the problem of connecting them in an area- and power-efficient way is addressed. The first proposal in this area is the introduction of a new level in the interconnection hierarchy, the threshing level, between the monitors and the traditional peripheral buses; it applies data selectivity to reduce the amount of information sent to the central controller. The idea behind this new level is that in this kind of network most data are useless, because from the controller's viewpoint only a small amount of data (normally just the extreme values) is of interest. To cover the new level, a single-wire monitoring network based on a time-domain signaling scheme is proposed; this scheme significantly reduces both the switching activity over the wire and the power consumption of the network. Coding the information in the time domain also means that the monitors' data arrive at the controller already ordered from the maximum to the minimum value. If this signaling is applied to sensors that perform time-to-digital conversion, the digitization resources can be shared in both time and space, yielding important savings in area and power. Finally, two prototypes of complete monitoring systems are presented that significantly outperform previous works in terms of area and, especially, power consumption.
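The placement search of the third contribution can be illustrated with a toy simulated-annealing sketch. The grid size, hot-spot positions and purely distance-based cost below are invented for illustration; the thesis model additionally weighs power consumption, sampling rate, monitor type and interconnection constraints.

```python
import math, random

random.seed(1)

GRID = 8                              # toy 8x8 die grid
HOTSPOTS = [(1, 1), (6, 2), (4, 6)]   # assumed hot-spot cells

def cost(monitors):
    """Sum over hot spots of the distance to the nearest monitor."""
    return sum(min(math.dist(h, m) for m in monitors) for h in HOTSPOTS)

def anneal(k, steps=2000, t0=2.0):
    """Simulated annealing over k monitor positions on the grid."""
    state = [(random.randrange(GRID), random.randrange(GRID)) for _ in range(k)]
    best, best_c = list(state), cost(state)
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-3          # cooling schedule
        cand = list(state)
        cand[random.randrange(k)] = (random.randrange(GRID),
                                     random.randrange(GRID))
        dc = cost(cand) - cost(state)
        # Accept improvements always, degradations with Boltzmann probability.
        if dc < 0 or random.random() < math.exp(-dc / t):
            state = cand
            if cost(state) < best_c:
                best, best_c = list(state), cost(state)
    return best, best_c

monitors, c = anneal(k=3)
print(monitors, round(c, 2))
```

With as many monitors as hot spots, the search should drive the cost toward zero; a richer cost function would trade accuracy against the other quality metrics named above.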
Abstract:
Tool wear detection is a key issue for tool condition monitoring. The maximization of useful tool life is frequently related to the optimization of machining processes. This paper presents two model-based approaches for tool wear monitoring on the basis of neuro-fuzzy techniques. The use of a neuro-fuzzy hybridization to design a tool wear monitoring system aims to exploit the synergy of neural networks and fuzzy logic by combining human reasoning with learning and a connectionist structure. The turning process, a well-known machining process, is selected for this case study. A four-input (time, cutting forces, vibrations and acoustic emission signals), single-output (tool wear rate) model is designed and implemented on the basis of three neuro-fuzzy approaches (inductive, transductive and evolving neuro-fuzzy systems). The tool wear model is then used for monitoring the turning process. The comparative study demonstrates that the transductive neuro-fuzzy model provides better error-based performance indices for detecting tool wear than either the inductive or the evolving neuro-fuzzy model.
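As a rough illustration of the kind of fuzzy inference involved, the sketch below evaluates a zero-order Takagi-Sugeno system over the four inputs; the rule centers, widths and outputs are invented placeholders, whereas the paper learns such parameters from data with inductive, transductive or evolving neuro-fuzzy methods.

```python
import math

def gauss(x, c, s):
    """Gaussian membership degree of x for a fuzzy set centered at c."""
    return math.exp(-((x - c) ** 2) / (2 * s ** 2))

# Each rule: (centers for the 4 normalized inputs
# [time, force, vibration, acoustic emission], sigma, wear-rate output).
RULES = [
    ((0.1, 0.1, 0.1, 0.1), 0.3, 0.05),   # fresh tool -> low wear rate
    ((0.5, 0.5, 0.5, 0.5), 0.3, 0.30),   # steady cutting -> moderate
    ((0.9, 0.9, 0.9, 0.9), 0.3, 0.80),   # end of life -> high wear rate
]

def wear_rate(inputs):
    """Firing-strength-weighted average of the rule outputs."""
    num = den = 0.0
    for centers, sigma, out in RULES:
        w = 1.0
        for x, c in zip(inputs, centers):
            w *= gauss(x, c, sigma)      # AND via product t-norm
        num += w * out
        den += w
    return num / den

print(round(wear_rate((0.1, 0.1, 0.1, 0.1)), 3))
print(round(wear_rate((0.9, 0.9, 0.9, 0.9)), 3))
```

Neuro-fuzzy training would adjust the centers, widths and outputs by gradient descent or instance-based adaptation rather than fixing them by hand.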
Abstract:
Upper limb function impairment is one of the most common sequelae of central nervous system injury, especially in stroke patients and when spinal cord injury produces tetraplegia. Conventional assessment methods cannot provide objective evaluation of patient performance and the effectiveness of therapies. The most common assessment tools are based on rating scales, which are inefficient when measuring small changes and can yield subjective bias. In this study, we designed an inertial sensor-based monitoring system composed of five sensors to measure and analyze the complex movements of the upper limbs, which are common in activities of daily living. We developed a kinematic model with nine degrees of freedom to analyze upper limb and head movements in three dimensions. This system was then validated using a commercial optoelectronic system. These findings suggest that an inertial sensor-based motion tracking system can be used in patients with upper limb impairment through data integration with a virtual reality-based neurorehabilitation system.
Abstract:
In this paper, the model of an Innovative Monitoring Network involving properly connected nodes is proposed, to develop an Information and Communication Technology (ICT) solution for the preventive maintenance of historical centres based on early warnings. It is well known that the protection of historical centres generally proceeds from large-scale monitoring to local monitoring, and this could be supported by a single ICT solution. In more detail, the model of a virtually organized monitoring system could enable the implementation of automated analyses by presenting various alert levels. An adequate ICT tool would make it possible to define a monitoring network for shared processing of data and results. Thus, a possible retrofit solution could be planned for pilot cases shared among the nodes of the network on the basis of a suitable procedure using a retrofit catalogue. The final objective is to provide a model of an innovative tool to identify hazards, damages and possible retrofit solutions for historical centres, ensuring easy early-warning support for stakeholders. The action could proactively target the needs and requirements of users, such as decision makers responsible for damage mitigation and the safeguarding of cultural heritage assets.
Abstract:
This paper presents an overview of the role and responsibilities of the system administrator, focusing on the need to monitor the technological infrastructure. IT infrastructure monitoring is nowadays, without a doubt, one of the key points in business support. Large enterprises are no longer the only ones to need such monitoring tools; small and medium-sized enterprises, whose IT environments are also of increasing complexity, need them too. This follows directly from the way business is supported on IT platforms that serve people and processes. When a system that is vital to the organization fails, at either the hardware or the software level, it compromises operating capacity and, consequently, business continuity. With this in mind, it is extremely important to adopt monitoring systems that, proactively or reactively, reduce the overall downtime caused by failures. A monitoring system is the way to ensure confidence in all components and the operational readiness of the IT infrastructure.
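A minimal sketch of the kind of reactive check such a monitoring system performs, assuming a plain TCP probe; the service names and ports below are placeholders, not part of the paper.

```python
import socket, time

def check_tcp(host, port, timeout=2.0):
    """Probe a TCP service; return (is_up, latency_seconds_or_None)."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True, time.monotonic() - start
    except OSError:
        return False, None

def run_checks(services):
    """Check every vital system and report UP/DOWN per service."""
    report = {}
    for name, (host, port) in services.items():
        up, _latency = check_tcp(host, port)
        report[name] = "UP" if up else "DOWN"
    return report

# Placeholder inventory of vital systems:
services = {"intranet": ("127.0.0.1", 80), "db": ("127.0.0.1", 5432)}
print(run_checks(services))
```

A proactive variant would run such checks on a schedule and alert on trends (rising latency, repeated timeouts) before a hard failure interrupts the business.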