58 results for based inspection and conditional monitoring
Abstract:
INTRODUCTION There are limited data on paediatric HIV care and treatment programmes in low-resource settings. METHODS A standardized survey was completed by International epidemiologic Databases to Evaluate AIDS paediatric cohort sites in the regions of Asia-Pacific (AP), Central Africa (CA), East Africa (EA), Southern Africa (SA) and West Africa (WA) to understand operational resource availability and paediatric management practices. Data were collected through January 2010 using a secure, web-based software program (REDCap). RESULTS A total of 64,552 children were under care at 63 clinics (AP, N=10; CA, N=4; EA, N=29; SA, N=10; WA, N=10). Most were in urban settings (N=41, 65%) and received funding from governments (N=51, 81%), PEPFAR (N=34, 54%), and/or the Global Fund (N=15, 24%). The majority were combined adult-paediatric clinics (N=36, 57%). Prevention of mother-to-child transmission was integrated at 35 (56%) sites; 89% (N=56) had access to DNA PCR for infant diagnosis. African (N=40/53) but not Asian sites recommended exclusive breastfeeding until 4-6 months of age. Regular laboratory monitoring included CD4 (N=60, 95%) and viral load (N=24, 38%). Although 42 (67%) sites had the ability to conduct acid-fast bacilli (AFB) smears, only 23 (37%) could conduct AFB cultures and 18 (29%) could conduct tuberculosis drug susceptibility testing. Loss to follow-up was defined as >3 months of lost contact for 25 (40%) sites, >6 months for 27 (43%) sites and >12 months for 6 (10%) sites. Telephone calls (N=52, 83%) and outreach worker home visits to trace children lost to follow-up (N=45, 71%) were common. CONCLUSIONS In general, there was a high level of patient and laboratory monitoring within this multiregional paediatric cohort consortium, which will facilitate detailed observational research studies. Practices will continue to be monitored as the WHO/UNAIDS Treatment 2.0 framework is implemented.
Abstract:
BACKGROUND Driving a car is a complex instrumental activity of daily living, and driving performance is very sensitive to cognitive impairment. The assessment of driving-relevant cognition in older drivers is challenging and requires reliable and valid tests with good sensitivity and specificity to predict safe driving. Driving simulators can be used to test fitness to drive. Several studies have found strong correlations between driving simulator performance and on-the-road driving. However, access to driving simulators is restricted to specialists, and simulators are too expensive, large, and complex to allow easy access for older drivers or the physicians advising them. An easily accessible, Web-based, cognitive screening test could offer a solution to this problem. The World Wide Web allows easy dissemination of the test software and implementation of the scoring algorithm on a central server, allowing generation of a dynamically growing database of normative values and ensuring that all users have access to the same up-to-date normative values. OBJECTIVE In this pilot study, we present the novel Web-based Bern Cognitive Screening Test (wBCST) and investigate whether it can predict poor simulated driving performance in healthy and cognitively impaired participants. METHODS wBCST performance and simulated driving performance were analyzed in 26 healthy younger and 44 healthy older participants, as well as in 10 older participants with cognitive impairment. Correlations between the two tests were calculated. In addition, simulated driving performance was used to group the participants into good performers (n=70) and poor performers (n=10). A receiver-operating characteristic analysis was performed to determine the sensitivity and specificity of the wBCST in predicting simulated driving performance.
RESULTS The mean wBCST score of the participants with poor simulated driving performance was reduced by 52% compared to participants with good simulated driving performance (P<.001). The area under the receiver-operating characteristic curve was 0.80, with a 95% confidence interval of 0.68-0.92. CONCLUSIONS When selecting a 75% test score as the cutoff, the novel test has 83% sensitivity, 70% specificity, and 81% efficiency, which are good values for a screening test. Overall, in this pilot study, the novel Web-based computer test appears to be a promising tool for supporting clinicians in fitness-to-drive assessments of older drivers. The Web-based distribution and scoring on a central computer will facilitate further evaluation of the novel test setup. We expect that in the near future, Web-based computer tests will become a valid and reliable tool for clinicians, for example, when assessing fitness to drive in older drivers.
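As a rough illustration of how cutoff-based screening metrics like those above are derived, the sketch below computes sensitivity, specificity, and efficiency for a score threshold. The function name and toy data are ours, not from the study, whose raw scores are not reproduced here.

```python
def screening_metrics(scores, impaired, cutoff):
    """Classify each participant as a 'poor driver' when their test
    score falls below the cutoff, then compare that prediction with
    the reference grouping (impaired=True means poor simulated driving)."""
    tp = sum(s < cutoff and y for s, y in zip(scores, impaired))
    fn = sum(s >= cutoff and y for s, y in zip(scores, impaired))
    tn = sum(s >= cutoff and not y for s, y in zip(scores, impaired))
    fp = sum(s < cutoff and not y for s, y in zip(scores, impaired))
    sensitivity = tp / (tp + fn)          # poor drivers correctly flagged
    specificity = tn / (tn + fp)          # good drivers correctly cleared
    efficiency = (tp + tn) / len(scores)  # overall fraction classified correctly
    return sensitivity, specificity, efficiency
```

Sweeping the cutoff over the observed score range and plotting sensitivity against 1 - specificity yields the receiver-operating characteristic curve the abstract summarizes by its area.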
Abstract:
Psychological models of mental disorders guide research into psychological and environmental factors that elicit and maintain mental disorders as well as interventions to reduce them. This paper addresses four areas. (1) Psychological models of mental disorders have become increasingly transdiagnostic, focusing on core cognitive endophenotypes of psychopathology from an integrative cognitive psychology perspective rather than offering explanations for unitary mental disorders. It is argued that psychological interventions for mental disorders will increasingly target specific cognitive dysfunctions rather than symptom-based mental disorders as a result. (2) Psychotherapy research still lacks a comprehensive conceptual framework that brings together the wide variety of findings, models and perspectives. In an analysis of the state of the art in psychotherapy treatment research, “component analyses” aimed at optimal identification of core ingredients and mechanisms of change are highlighted as the core need for improving the efficacy and effectiveness of psychotherapy and its translation to routine care. (3) In order to provide more effective psychological interventions to children and adolescents, there is a need to develop new and/or improved psychotherapeutic interventions on the basis of developmental psychopathology research, taking into account knowledge of mediators and moderators. Developmental neuroscience research might be instrumental in uncovering associated aberrant brain processes in children and adolescents with mental health problems and in better examining the mechanisms of their correction by means of psychotherapy and psychological interventions. (4) Psychotherapy research needs to broaden in terms of adoption of large-scale public health strategies and treatments that can be applied to more patients in a simpler and cost-effective way. Increased research on efficacy and moderators of Internet-based treatments and e-mental health tools (e.g. 
to support “real time” clinical decision-making to prevent treatment failure or relapse) might be one promising way forward.
Abstract:
Wireless mobile sensor networks are enlarging the Internet of Things (IoT) portfolio with a huge number of multimedia services for smart cities. Safety and environmental monitoring multimedia applications will be part of Smart IoT systems, which aim to reduce emergency response time while also predicting hazardous events. In these mobile and dynamic (possibly disaster) scenarios, a predefined end-to-end path is not a reliable solution; opportunistic routing instead allows routing decisions to be made in a completely distributed manner, using a hop-by-hop route decision based on protocol-specific characteristics. This enables the transmission of video flows of a monitored area/object with Quality of Experience (QoE) support to users, headquarters or IoT platforms. However, existing approaches rely on a single metric for the candidate selection rule, such as link quality or geographic information, which causes a high packet loss rate and reduces the video perception from the human standpoint. This article proposes a cross-layer Link quality and Geographical-aware Opportunistic routing protocol (LinGO), designed for video dissemination in mobile multimedia IoT environments. LinGO improves routing decisions by using multiple metrics, including link quality, geographic location, and energy. Simulation results show the benefits of LinGO compared with well-known routing solutions for video transmission with QoE support in mobile scenarios.
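The multi-metric candidate selection described above can be sketched as a weighted score over normalized metrics. The weights, function names and tuple layout below are illustrative assumptions, not LinGO's actual selection formula.

```python
def forwarding_score(link_quality, geo_progress, energy,
                     w_lq=0.4, w_geo=0.4, w_e=0.2):
    """Combine three metrics, each normalized to [0, 1], into a single
    forwarding score. Weights are illustrative, not taken from LinGO."""
    return w_lq * link_quality + w_geo * geo_progress + w_e * energy

def pick_forwarder(candidates):
    """candidates: list of (node_id, link_quality, geo_progress, energy)
    tuples for the neighbours that overheard the packet.
    Return the id of the neighbour with the highest combined score."""
    best = max(candidates, key=lambda c: forwarding_score(c[1], c[2], c[3]))
    return best[0]
```

A single-metric rule would rank only one of these columns; combining them is what lets a candidate with a slightly weaker link but better geographic progress and residual energy win the selection.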
Abstract:
High-resolution quantitative computed tomography (HRQCT)-based analysis of spinal bone density and microstructure, finite element analysis (FEA), and DXA were used to investigate the vertebral bone status of men with glucocorticoid-induced osteoporosis (GIO). DXA of L1–L3 and total hip, QCT of L1–L3, and HRQCT of T12 were available for 73 men (54.6±14.0 years) with GIO. Prevalent vertebral fracture status was evaluated on radiographs using a semi-quantitative (SQ) score (normal=0 to severe fracture=3) and the spinal deformity index (SDI) score (sum of SQ scores of T4 to L4 vertebrae). Thirty-one (42.4%) subjects had prevalent vertebral fractures. Cortical BMD (Ct.BMD) and thickness (Ct.Th), trabecular BMD (Tb.BMD), apparent trabecular bone volume fraction (app.BV/TV), and apparent trabecular separation (app.Tb.Sp) were analyzed by HRQCT. Stiffness and strength of T12 were computed by HRQCT-based nonlinear FEA for axial compression, anterior bending and axial torsion. In logistic regressions adjusted for age, glucocorticoid dose and osteoporosis treatment, Tb.BMD was most closely associated with vertebral fracture status (standardized odds ratio [sOR]: Tb.BMD T12: 4.05 [95% CI: 1.8–9.0], Tb.BMD L1–L3: 3.95 [1.8–8.9]). Strength divided by cross-sectional area for axial compression showed the most significant association with spine fracture status among FEA variables (2.56 [1.29–5.07]). SDI was best predicted by a microstructural model using Ct.Th and app.Tb.Sp (r2=0.57, p<0.001). Spinal or hip DXA measurements did not show significant associations with fracture status or severity. In this cross-sectional study of males with GIO, QCT and HRQCT-based measurements and FEA variables were superior to DXA in discriminating between patients of differing prevalent vertebral fracture status. A microstructural model combining aspects of cortical and trabecular bone reflected fracture severity most accurately.
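A standardized odds ratio like those reported above (e.g. 4.05 for Tb.BMD) expresses the change in fracture odds per one standard deviation of the predictor, which makes predictors on different scales comparable. A minimal sketch, assuming a per-unit logistic regression coefficient `beta` and predictor standard deviation `sd` (both hypothetical here):

```python
import math

def standardized_or(beta, sd):
    """Odds ratio per one-standard-deviation increase of a predictor:
    exp(beta * sd), where beta is the per-unit log-odds coefficient
    from a logistic regression and sd is the predictor's standard
    deviation in the study sample."""
    return math.exp(beta * sd)
```

Because each sOR is scaled by its own predictor's variability, a Tb.BMD sOR of ~4 and an FEA-strength sOR of ~2.6 can be compared directly even though BMD and strength are measured in different units.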
Abstract:
Mapping ecosystem services (ES) and their trade-offs is a key requirement for informed decision making in land use planning and the management of natural resources that aims to increase the sustainability of landscapes. Negotiating the purposes of landscapes and the services they should provide is difficult, as an increasing number of stakeholders, active at different levels and with a variety of interests, are present on any particular landscape. Traditionally, land cover data is the basis for mapping and spatial monitoring of ecosystem services. In complex landscapes, however, it is questionable whether land cover per se, as a spatial base unit, is suitable for monitoring and management at the meso-scale. Often the characteristics of a landscape are defined by the prevalence, composition and specific spatial and temporal patterns of different land cover types. The spatial delineation of shifting cultivation agriculture is a prominent example of a land use system whose different land use intensities require alternative methodologies that go beyond the common remote sensing approaches of pixel-based land cover analysis, owing to the spatial and temporal dynamics of rotating cultivated and fallow fields. Against this background, we advocate that adopting a landscape perspective on spatial planning and decision making offers new space for negotiation and collaboration, taking into account the needs of local resource users and of the global community. For this purpose we introduce landscape mosaics, defined as a new spatial unit describing generalized land use types. Landscape mosaics have allowed us to chart different land use systems and land use intensities, and have permitted us to delineate changes in these land use systems based on changes in external claims on these landscapes. 
The underlying idea behind the landscape mosaics is to use land cover data, typically derived from remote sensing data, and to analyse and classify spatial patterns of this land cover data using a moving window approach. We developed the landscape mosaics approach in tropical, forest-dominated landscapes, particularly shifting cultivation areas, and present examples of our work from northern Laos, eastern Madagascar and Yunnan Province in China.
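A minimal sketch of the moving window idea: slide a window over a gridded land cover map and label each cell by the pattern of classes around it. The dominant-class rule and window size used here are our illustrative assumptions; the actual mosaic classification derives generalized land use types from composition and pattern, not just the majority class.

```python
from collections import Counter

def landscape_mosaics(grid, radius=1):
    """For each cell of a land cover grid (list of lists of class
    labels), inspect the (2*radius+1)^2 window centred on it (clipped
    at the edges) and assign the most frequent class in that window
    as the cell's mosaic label."""
    rows, cols = len(grid), len(grid[0])
    mosaic = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            window = [grid[i][j]
                      for i in range(max(0, r - radius), min(rows, r + radius + 1))
                      for j in range(max(0, c - radius), min(cols, c + radius + 1))]
            mosaic[r][c] = Counter(window).most_common(1)[0][0]
    return mosaic
```

Because the label depends on the neighbourhood rather than the single pixel, a scattered mix of cropped and fallow pixels can be mapped as one "shifting cultivation" mosaic unit instead of noisy per-pixel classes.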
Abstract:
BACKGROUND The number of older adults in the global population is increasing. This demographic shift leads to an increasing prevalence of age-associated disorders, such as Alzheimer's disease and other types of dementia. With the progression of the disease, the risk for institutional care increases, which contrasts with the desire of most patients to stay in their home environment. Despite doctors' and caregivers' awareness of the patient's cognitive status, they are often uncertain about its consequences for activities of daily living (ADL). To provide effective care, they need to know how patients cope with ADL, in particular, the estimation of risks associated with the cognitive decline. The occurrence, performance, and duration of different ADL are important indicators of functional ability. The patient's ability to cope with these activities is traditionally assessed with questionnaires, which have disadvantages (e.g., lack of reliability and sensitivity). Several groups have proposed sensor-based systems to recognize and quantify these activities in the patient's home. Combined with Web technology, these systems can inform caregivers about their patients in real-time (e.g., via smartphone). OBJECTIVE We hypothesize that a non-intrusive system, which does not use body-mounted sensors, video-based imaging, or microphone recordings, would be better suited for use in dementia patients. Since it does not require the patient's attention and compliance, such a system might be well accepted by patients. We present a passive, Web-based, non-intrusive, assistive technology system that recognizes and classifies ADL. METHODS The components of this novel assistive technology system were wireless sensors distributed in every room of the participant's home and a central computer unit (CCU). The environmental data were acquired for 20 days (per participant) and then stored and processed on the CCU. In consultation with medical experts, eight ADL were classified. 
RESULTS In this study, 10 healthy participants (6 women, 4 men; mean age 48.8 years; SD 20.0 years; age range 28-79 years) were included. For explorative purposes, one female Alzheimer patient (Montreal Cognitive Assessment score=23, Timed Up and Go=19.8 seconds, Trail Making Test A=84.3 seconds, Trail Making Test B=146 seconds) was measured in parallel with the healthy subjects. In total, 1317 ADL were performed by the participants, 1211 ADL were classified correctly, and 106 ADL were missed. This led to an overall sensitivity of 91.27% and a specificity of 92.52%. Each subject performed an average of 134.8 ADL (SD 75). CONCLUSIONS The non-intrusive wireless sensor system can acquire environmental data essential for the classification of activities of daily living. By analyzing retrieved data, it is possible to distinguish and assign data patterns to subjects' specific activities and to identify eight different activities in daily living. The Web-based technology allows the system to improve care and provides valuable information about the patient in real-time.
Abstract:
This study focuses on relations between 7- and 9-year-old children’s and adults’ metacognitive monitoring and control processes. In addition to explicit confidence judgments (CJ), data on participants’ control behavior during learning and recall, as well as implicit CJs, were collected with an eye-tracking device (Tobii 1750). Results revealed developmental progression in the accuracy of both implicit and explicit monitoring across age groups. In addition, the efficiency of learning and recall strategies increases with age, as older participants allocate more fixation time to critical information and less time to peripheral or potentially interfering information. Correlational analyses of recall performance, metacognitive monitoring, and control indicate significant interrelations among all of these measures, with varying patterns of correlations within age groups. Results are discussed with regard to the intricate relationship between monitoring and recall and their relation to performance.
Abstract:
BACKGROUND The cost-effectiveness of routine viral load (VL) monitoring of HIV-infected patients on antiretroviral therapy (ART) depends on various factors that differ between settings and across time. Low-cost point-of-care (POC) tests for VL are in development and may make routine VL monitoring affordable in resource-limited settings. We developed a software tool to study the cost-effectiveness of switching to second-line ART with different monitoring strategies, and focused on POC-VL monitoring. METHODS We used a mathematical model to simulate cohorts of patients from start of ART until death. We modeled 13 strategies (no 2nd-line, clinical, CD4 (with or without targeted VL), POC-VL, and laboratory-based VL monitoring, with different frequencies). We included a scenario with identical failure rates across strategies, and one in which routine VL monitoring reduces the risk of failure. We compared lifetime costs and averted disability-adjusted life-years (DALYs). We calculated incremental cost-effectiveness ratios (ICER). We developed an Excel tool to update the results of the model for varying unit costs and cohort characteristics, and conducted several sensitivity analyses varying the input costs. RESULTS Introducing 2nd-line ART had an ICER of US$1651-1766/DALY averted. Compared with clinical monitoring, the ICER of CD4 monitoring was US$1896-US$5488/DALY averted and VL monitoring US$951-US$5813/DALY averted. We found no difference between POC- and laboratory-based VL monitoring, except for the highest measurement frequency (every 6 months), where laboratory-based testing was more effective. Targeted VL monitoring was on the cost-effectiveness frontier only if the difference between 1st- and 2nd-line costs remained large, and if we assumed that routine VL monitoring does not prevent failure. CONCLUSION Compared with the less expensive strategies, the cost-effectiveness of routine VL monitoring essentially depends on the cost of 2nd-line ART. 
Our Excel tool is useful for determining optimal monitoring strategies for specific settings, with specific sex- and age-distributions and unit costs.
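The ICERs quoted above follow the standard definition: the extra lifetime cost of a strategy divided by the extra DALYs it averts, relative to the next-cheapest comparator. A minimal sketch with illustrative numbers (not the study's inputs):

```python
def icer(cost_new, cost_old, dalys_averted_new, dalys_averted_old):
    """Incremental cost-effectiveness ratio: additional cost of the new
    strategy per additional DALY averted, versus the comparator.
    All four inputs here are hypothetical, for illustration only."""
    return (cost_new - cost_old) / (dalys_averted_new - dalys_averted_old)
```

On a cost-effectiveness frontier, a strategy is compared against the next-cheapest non-dominated strategy, which is why the reported ICER ranges shift depending on whether clinical, CD4 or VL monitoring serves as the comparator.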
Abstract:
Until today, most documentation of forensically relevant medical findings has been limited to traditional 2D photography, 2D conventional radiographs, sketches and verbal description. The classic documentation still has limitations in forensic science, especially when a 3D documentation is necessary. The goal of this paper is to demonstrate new technology approaches based on real 3D geometric data. This paper presents approaches to a 3D geometric documentation of injuries on the body surface and internal injuries in living and deceased cases. Using modern imaging methods such as photogrammetry, optical surface scanning and radiological CT/MRI scanning in combination, it could be demonstrated that a real, full 3D data based individual documentation of the body surface and internal structures is possible in a non-invasive and non-destructive manner. Using data merging/fusing and animation possibilities, it is possible to answer reconstructive questions about the dynamic development of patterned injuries (morphologic imprints) and to evaluate whether they can be matched or linked to suspected injury-causing instruments. For the first time, to our knowledge, the method of optical and radiological 3D scanning was used to document the forensically relevant injuries of a human body in combination with vehicle damage. Through this complementary documentation approach, individual forensic analysis and animation based on real data were possible, linking body injuries to vehicle deformations or damage. These data allow conclusions to be drawn for automobile accident research, optimization of vehicle safety (pedestrian and passenger) and further development of crash dummies. Real 3D data based documentation opens a new horizon for scientific reconstruction and animation, bringing added value and a real quality improvement to forensic science.
Abstract:
This chapter aims to overcome the gap existing between case study research, which typically provides qualitative and process-based insights, and national or global inventories that typically offer spatially explicit and quantitative analysis of broader patterns, and thus to present adequate evidence for policymaking regarding large-scale land acquisitions. Therefore, the chapter links spatial patterns of land acquisitions to underlying implementation processes of land allocation. Methodologically linking the described patterns and processes proved difficult, but we have identified indicators that could be added to inventories and monitoring systems to make linkage possible. Combining complementary approaches in this way may help to determine where policy space exists for more sustainable governance of land acquisitions, both geographically and with regard to processes of agrarian transitions. Our spatial analysis revealed two general patterns: (i) relatively large forestry-related acquisitions that target forested landscapes and often interfere with semi-subsistence farming systems; and (ii) smaller agriculture-related acquisitions that often target existing cropland and also interfere with semi-subsistence systems. Furthermore, our meta-analysis of land acquisition implementation processes shows that authoritarian, top-down processes dominate. Initially, the demands of powerful regional and domestic investors tend to override socio-ecological variables, local actors’ interests, and land governance mechanisms. As available land grows scarce, however, and local actors gain experience dealing with land acquisitions, it appears that land investments begin to fail or give way to more inclusive, bottom-up investment models.
Abstract:
Soils are fundamental to ensuring water, energy and food security. Within the context of sustainable food production, it is important to share knowledge on existing and emerging technologies that support land and soil monitoring. Technologies such as remote sensing, mobile soil testing, and digital soil mapping have the potential to identify degraded and non-/little-responsive soils, and may also provide a basis for programmes targeting the protection and rehabilitation of soils. In the absence of such information, crop production assessments are often not based on the spatio-temporal variability in soil characteristics. In addition, uncertainties in soil information systems are notable and build up when predictions are used for monitoring soil properties or biophysical modelling. Consequently, interpretations of model-based results have to be made cautiously. As such they provide a scientific, but not always manageable, basis for farmers and/or policymakers. In general, the key incentives for stakeholders to aim for sustainable management of soils and more resilient food systems are complex at farm as well as higher levels. The same is true of drivers of soil degradation. The decision-making process aimed at sustainable soil management, be that at farm or higher level, also involves other goals and objectives valued by stakeholders, e.g. land governance, improved environmental quality, climate change adaptation and mitigation, etc. In this dialogue session we will share ideas on recent developments in the discourse on soils, their functions and the role of soil and land information in enhancing food system resilience.
Abstract:
Environmental quality monitoring of water resources is challenged with providing the basis for safeguarding the environment against adverse biological effects of anthropogenic chemical contamination from diffuse and point sources. While current regulatory efforts focus on monitoring and assessing a few legacy chemicals, many more anthropogenic chemicals can be detected simultaneously in our aquatic resources. However, exposure to chemical mixtures does not necessarily translate into adverse biological effects, nor does it clearly show whether mitigation measures are needed. Thus, the question of which mixtures are present, and which have associated combined effects, becomes central for defining adequate monitoring and assessment strategies. Here we describe the vision of the international, EU-funded project SOLUTIONS, in which three routes are explored to link the occurrence of chemical mixtures at specific sites to the assessment of adverse biological combination effects. First of all, multi-residue target and non-target screening techniques covering a broader range of anticipated chemicals co-occurring in the environment are being developed. By improving sensitivity and detection limits for known bioactive compounds of concern, new analytical chemistry data for multiple components can be obtained and used to characterise priority mixtures. This information on chemical occurrence will be used to predict mixture toxicity and to derive combined effect estimates suitable for advancing environmental quality standards. Secondly, bioanalytical tools will be explored to provide aggregate bioactivity measures integrating all components that produce common (adverse) outcomes, even for mixtures of varying compositions. 
The ambition is to provide comprehensive arrays of effect-based tools and trait-based field observations that link multiple chemical exposures to various environmental protection goals more directly and to provide improved in situ observations for impact assessment of mixtures. Thirdly, effect-directed analysis (EDA) will be applied to identify major drivers of mixture toxicity. Refinements of EDA include the use of statistical approaches with monitoring information for guidance of experimental EDA studies. These three approaches will be explored using case studies at the Danube and Rhine river basins as well as rivers of the Iberian Peninsula. The synthesis of findings will be organised to provide guidance for future solution-oriented environmental monitoring and explore more systematic ways to assess mixture exposures and combination effects in future water quality monitoring.