Abstract:
Advances in laboratory techniques have led to a rapidly increasing use of biomarkers in epidemiological studies. Biomarkers of internal dose, early biological change, susceptibility and clinical outcomes are used as proxies for investigating interactions between external and/or endogenous agents and body components or processes. The need for improved reporting of scientific research led to influential statements of recommendations such as the STrengthening the Reporting of OBservational studies in Epidemiology (STROBE) statement. The STROBE initiative, established in 2004, aimed to provide guidance on how to report observational research. Its guidelines provide a user-friendly checklist of 22 items to be reported in epidemiological studies, with items specific to the three main study designs: cohort studies, case-control studies and cross-sectional studies. The present STrengthening the Reporting of OBservational studies in Epidemiology - Molecular Epidemiology (STROBE-ME) initiative builds on the STROBE statement, implementing nine existing STROBE items and adding 17 items to the 22-item STROBE checklist. The additions relate to the use of biomarkers in epidemiological studies and concern the collection, handling and storage of biological samples; laboratory methods, validity and reliability of biomarkers; specificities of study design; and ethical considerations. The STROBE-ME recommendations are intended to complement the STROBE recommendations.
Abstract:
Non-invasive documentation methods such as surface scanning and radiological imaging are gaining in importance in the forensic field. These three-dimensional technologies provide digital 3D data, which are processed and handled in the computer. However, the sense of touch is lost in the virtual approach. A haptic device enables the use of the sense of touch to handle and feel digital 3D data. The multifunctional application of a haptic device for forensic approaches is evaluated and illustrated in three different cases: the non-invasive representation of bone fractures of the lower extremities caused by traffic accidents; the comparison of bone injuries with the presumed injury-inflicting instrument; and, in a gunshot case, the identification of the gun by the muzzle imprint and the reconstruction of the holding position of the gun. The 3D models of the bones are generated from Computed Tomography (CT) images. The 3D models of the exterior injuries, the injury-inflicting tools and the bone injuries, where a higher resolution is necessary, are created by optical surface scanning. The haptic device is used in combination with the software FreeForm Modelling Plus to touch the surface of the 3D models and feel minute injuries and tool surfaces, to reposition displaced bone parts, and to compare an injury-causing instrument with an injury. Repositioning 3D models in a reconstruction is easier, faster and more precise when the sense of touch and user-friendly movement in 3D space are available. For representation purposes, the fracture lines of bones are coloured. This work demonstrates that the haptic device is a suitable and efficient tool in forensic science, offering a new way of handling digital data in virtual 3D space.
Abstract:
This study focuses on a specific engine, i.e., a dual-spool, separate-flow turbofan engine with an Interstage Turbine Burner (ITB). This conventional turbofan engine has been modified to include a secondary isobaric burner, the ITB, in a transition duct between the high-pressure turbine and the low-pressure turbine. The preliminary design phase for this modified engine starts with an aerothermodynamic cycle analysis consisting of parametric (i.e., on-design) and performance (i.e., off-design) cycle analyses. In the parametric analysis, the modified engine's performance parameters are evaluated and compared with the baseline engine in terms of design limitations (maximum turbine inlet temperature), flight conditions (such as flight Mach number, ambient temperature and pressure), and design choices (such as compressor pressure ratio, fan pressure ratio, fan bypass ratio, etc.). A turbine cooling model is also included to account for the effect of cooling air on engine performance. The results from the on-design analysis confirmed the advantage of using an ITB, i.e., higher specific thrust with small increases in thrust-specific fuel consumption, less cooling air, and less NOx production, provided that the main burner exit temperature and ITB exit temperature are properly specified. It is also important to identify the critical ITB temperature, beyond which the ITB is turned off and offers no advantage. Encouraged by the results of the parametric cycle analysis, a detailed performance cycle analysis of the same engine was conducted for steady-state engine performance prediction. The results from the off-design cycle analysis show that the ITB engine at full throttle setting has enhanced performance over the baseline engine. Furthermore, an ITB engine operating at partial throttle settings exhibits higher thrust at lower specific fuel consumption and improved thermal efficiency over the baseline engine. A mission analysis is also presented to predict fuel consumption in certain mission phases. Excel macro code written in Visual Basic for Applications and Excel neuron cells are combined to enable Excel to perform these cycle analyses. These user-friendly programs compute and plot the data sequentially without forcing users to open other post-processing programs.
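The parametric (on-design) analysis described above follows standard cycle relations. As a rough illustration only, here is a minimal Python sketch of an ideal single-spool turbojet on-design calculation (calorically perfect gas, ideal components, no bypass or ITB terms; all numeric values are illustrative assumptions, not the paper's). An ITB would enter as a second burner temperature ratio between the high- and low-pressure turbine terms.

```python
import math

# Illustrative on-design (parametric) cycle sketch for an ideal turbojet
# core -- not the paper's dual-spool ITB turbofan model.
gamma, cp, h_PR = 1.4, 1004.0, 42.8e6      # gas properties (J/kg-K) and fuel heating value (J/kg)
T0, M0 = 216.7, 0.8                        # assumed ambient temperature (K) and flight Mach number
Tt4, pi_c = 1600.0, 20.0                   # assumed turbine inlet temperature (K), compressor ratio

R = cp * (gamma - 1) / gamma
a0 = math.sqrt(gamma * R * T0)             # ambient speed of sound
tau_r = 1 + 0.5 * (gamma - 1) * M0**2      # ram temperature ratio
tau_c = pi_c ** ((gamma - 1) / gamma)      # ideal compressor temperature ratio
tau_l = Tt4 / T0                           # enthalpy ratio (tau_lambda)
tau_t = 1 - tau_r * (tau_c - 1) / tau_l    # turbine ratio from the compressor power balance
V9_a0 = math.sqrt(2 / (gamma - 1) * tau_l / (tau_r * tau_c)
                  * (tau_r * tau_c * tau_t - 1))
F_mdot = a0 * (V9_a0 - M0)                 # specific thrust, N per kg/s of air
f = cp * T0 * (tau_l - tau_r * tau_c) / h_PR   # fuel-air ratio from the burner energy balance
print(f"specific thrust {F_mdot:.0f} N/(kg/s), TSFC {f / F_mdot * 1e6:.1f} mg/(N*s)")
```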
Abstract:
Routine bridge inspections require labor-intensive and highly subjective visual interpretation to determine bridge deck surface condition. Light Detection and Ranging (LiDAR), a relatively new class of survey instrument, has become a popular and increasingly used technology for providing as-built and inventory data in civil applications. While an increasing number of private and governmental agencies possess terrestrial and mobile LiDAR systems, understanding of the technology's capabilities and potential applications continues to evolve. LiDAR is a line-of-sight instrument; as such, care must be taken when establishing scan locations and resolution so that data are captured at a resolution adequate for defining features that contribute to the analysis of bridge deck surface condition. Information such as the location, area, and volume of spalling on deck surfaces, undersides, and support columns can be derived from properly collected LiDAR point clouds. These point clouds contain information that can provide quantitative surface condition data, resulting in more accurate structural health monitoring. LiDAR scans were collected at three study bridges, each displaying a varying degree of degradation. A variety of commercially available analysis tools and an independently developed algorithm written in ArcGIS Python (ArcPy) were used to locate and quantify surface defects, reporting the location, volume, and area of spalls. The results were displayed visually and numerically in a user-friendly, web-based decision support tool integrating prior bridge condition metrics for comparison. LiDAR data processing procedures, along with the strengths and limitations of point clouds for defining features useful for assessing bridge deck condition, are discussed. Point cloud density and incidence angle are two attributes that must be managed carefully to ensure the data collected are of high quality and useful for bridge condition evaluation. When collected properly, LiDAR data can be analyzed to provide a useful data set from which to derive bridge deck condition information.
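The abstract does not reproduce the ArcPy algorithm itself, but a common way to quantify spalls from a deck point cloud is to fit a reference surface and flag below-surface deviations. The sketch below (plain NumPy rather than ArcPy; the depth threshold and grid cell size are assumed values) illustrates the idea:

```python
import numpy as np

def detect_spalls(points, depth_thresh=0.01, cell=0.05):
    """Flag spall candidates in a deck point cloud (N x 3, meters).

    Fits a least-squares reference plane z = ax + by + c to the deck,
    treats points more than depth_thresh below the plane as spall
    returns, and estimates spall area/volume on a square grid.
    """
    X = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(X, points[:, 2], rcond=None)
    residual = points[:, 2] - X @ coeffs           # signed deviation from plane
    spall = residual < -depth_thresh               # below-surface returns
    if not spall.any():
        return 0.0, 0.0
    # Bin spall points into grid cells; each occupied cell contributes
    # cell**2 of area and cell**2 * mean depth of volume.
    ij = np.floor(points[spall, :2] / cell).astype(int)
    depths = -residual[spall]
    area = volume = 0.0
    for key in {tuple(k) for k in ij}:
        mask = (ij == key).all(axis=1)
        area += cell ** 2
        volume += cell ** 2 * depths[mask].mean()
    return area, volume

# Hypothetical usage: a synthetic 1 m x 1 m deck patch with a 3 cm deep dip
rng = np.random.default_rng(0)
xy = rng.uniform(0, 1, (5000, 2))
z = np.where((abs(xy[:, 0] - 0.5) < 0.1) & (abs(xy[:, 1] - 0.5) < 0.1), -0.03, 0.0)
area, vol = detect_spalls(np.c_[xy, z])
print(f"spall area ~{area:.3f} m^2, volume ~{vol:.5f} m^3")
```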
Abstract:
The identification and accurate location of centers of brain activity are vital both in neurosurgery and in brain research. This study aimed to provide a non-invasive, non-contact, accurate, rapid and user-friendly means of producing functional images intraoperatively. To this end, a full-field Laser Doppler imager was developed and integrated within the surgical microscope, and perfusion images of the cortical surface were acquired during awake surgery whilst the patient performed a predetermined task. The regions of brain activity showed a clear signal (10-20% with respect to the baseline) related to the stimulation protocol, which led to intraoperative functional brain maps of strong statistical significance that correlate well with preoperative fMRI and intraoperative cortical electro-stimulation. These initial results, achieved with a prototype device and wavelet-based regressor analysis (the hemodynamic response function being derived from MRI applications), demonstrate the feasibility of LDI as an appropriate technique for intraoperative functional brain imaging.
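The paper's analysis is wavelet-based; as a simplified stand-in, the following sketch shows an ordinary least-squares version of regressor analysis: convolve the stimulation protocol with a canonical double-gamma hemodynamic response function and compute a per-pixel t-statistic. All shapes, the sampling rate, and the HRF constants are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
from scipy.stats import gamma

def activation_tmap(signals, stim, dt=0.1):
    """Least-squares activation map for perfusion time courses.

    signals : (n_pixels, n_samples) perfusion traces
    stim    : (n_samples,) 0/1 boxcar of the task protocol
    Returns one t-statistic per pixel for the HRF-convolved regressor.
    """
    t = np.arange(0, 30, dt)                       # 30 s HRF support, seconds
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6   # canonical double-gamma HRF
    reg = np.convolve(stim, hrf)[: stim.size]      # expected response shape
    X = np.c_[reg, np.ones(stim.size)]             # regressor + baseline column
    beta, res, *_ = np.linalg.lstsq(X, signals.T, rcond=None)
    dof = stim.size - X.shape[1]
    sigma2 = res / dof                             # residual variance per pixel
    var_b = np.linalg.inv(X.T @ X)[0, 0] * sigma2  # variance of the task beta
    return beta[0] / np.sqrt(var_b)
```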
Abstract:
For smart city applications, a key requirement is to disseminate data collected from both scalar and multimedia wireless sensor networks to thousands of end-users. Furthermore, the information must be delivered to non-specialist users in a simple, intuitive and transparent manner. In this context, we present Sensor4Cities, a user-friendly tool that enables data dissemination to large audiences by using social networks and/or web pages. The user can request and receive monitored information through social networks, e.g., Twitter and Facebook, chosen for their popularity, user-friendly interfaces and easy dissemination. Additionally, the user can collect or share information from smart city services by using web pages, which also include a mobile version for smartphones. Finally, the tool can be configured to periodically monitor environmental conditions, specific behaviors or abnormal events, and to notify users asynchronously. Sensor4Cities improves data delivery for individuals or groups of users of smart city applications and encourages the development of new user-friendly services.
Abstract:
The ever-increasing popularity of apps stems from their ability to provide highly customized services to the user. The flip side is that, in order to provide such services, apps need access to very sensitive private information about the user. This leads to malicious apps that collect personal user information in the background and exploit it in various ways. Studies have shown that current app vetting processes, which are mainly restricted to install-time verification mechanisms, are incapable of detecting and preventing such attacks. We argue that the missing fundamental aspect here is a comprehensive and usable mobile privacy solution: one that not only protects the user's location information but also other equally sensitive user data, such as the user's contacts and documents, and one that is usable by the average user who does not understand or care about low-level technical details. To bridge this gap, we propose privacy metrics that quantify low-level app accesses in terms of privacy impact and transform them into high-level, user-understandable ratings. We also provide the design and architecture of our Privacy Panel app, which presents the computed ratings in a graphical, user-friendly format and allows the user to define policies based on them. Finally, experimental results are given to validate the scalability of the proposed solution.
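The abstract does not define the metrics themselves, so the following sketch only illustrates the general shape of such a mapping: weight each access category by an assumed sensitivity, dampen raw counts logarithmically, and collapse the result onto a coarse rating scale. All weights, the normalization cap, and the 1-5 scale are hypothetical, not the paper's calibrated values.

```python
import math

# Illustrative sensitivity weights per permission group (assumed values).
WEIGHTS = {"location": 0.9, "contacts": 0.8, "documents": 0.7, "camera": 0.6}

def privacy_rating(access_counts, max_rating=5):
    """Map low-level access counts to a coarse user-facing rating.

    access_counts : dict like {"location": 12, "contacts": 3}
    Returns max_rating (best) down to 1 (worst): heavier and more
    sensitive access lowers the rating. Log damping keeps a few
    extra accesses from dominating the score.
    """
    impact = sum(WEIGHTS.get(k, 0.5) * math.log1p(n)
                 for k, n in access_counts.items())
    score = 1 - min(impact / 10.0, 1.0)   # soft cap onto [0, 1], then invert
    return 1 + round(score * (max_rating - 1))

print(privacy_rating({"location": 12, "contacts": 3}))  # -> 4
```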
Abstract:
GuideView is a system designed for structured, multi-modal delivery of clinical guidelines. Clinical instructions are presented simultaneously in voice, text, pictures, video or animations. Users navigate using mouse clicks and voice commands. An evaluation study performed at a medical simulation laboratory found that voice and video instructions were rated highly.
Abstract:
OBJECTIVE: Interruptions are known to have a negative impact on activity performance. Understanding of how an interruption contributes to human error is limited because there is no standard method for analyzing and classifying interruptions. Qualitative data are typically analyzed by either a deductive or an inductive method; both have limitations. In this paper, a hybrid method was developed that integrates deductive and inductive methods for the categorization of activities and interruptions recorded during an ethnographic study of physicians and registered nurses in a Level One Trauma Center. Understanding the effects of interruptions is important for designing and evaluating informatics tools in particular, as well as for improving healthcare quality and patient safety in general. METHOD: The hybrid method was developed using a deductive a priori classification framework with the provision of adding new categories discovered inductively in the data. The inductive process used line-by-line coding and constant comparison, as described in Grounded Theory. RESULTS: The categories of activities and interruptions were organized into a three-tiered hierarchy of activity. Validity and reliability of the categories were tested by categorizing a medical error case external to the study. No new categories of interruptions were identified during analysis of the medical error case. CONCLUSIONS: Findings from this study provide evidence that the hybrid model of categorization is more complete than either a deductive or an inductive method alone. The hybrid method developed in this study provides methodical support for understanding, analyzing, and managing interruptions and workflow.
Abstract:
People often use tools to search for information. In order to improve the quality of an information search, it is important to understand how internal information, which is stored in the user's mind, and external information, represented by the interface of tools, interact with each other. How information is distributed between internal and external representations significantly affects information search performance. However, few studies have examined the relationship between types of interface and types of search task in the context of information search. For a distributed information search task, how data are distributed, represented, and formatted significantly affects user search performance in terms of response time and accuracy. Guided by UFuRT (User, Function, Representation, Task), a human-centered process, I propose a search model and a task taxonomy. The model defines its relationship with other existing information models; the taxonomy clarifies the legitimate operations for each type of search task on relational data. Based on the model and taxonomy, I have also developed interface prototypes for the search tasks of relational data, which were used in the experiments. The experiments described in this study are of a within-subject design with a sample of 24 participants recruited from the graduate schools located in the Texas Medical Center. Participants performed one-dimensional nominal search tasks over nominal, ordinal, and ratio displays, and searched one-dimensional nominal, ordinal, interval, and ratio tasks over table and graph displays. Participants also performed the same task and display combinations for two-dimensional searches. Distributed cognition theory has been adopted as a theoretical framework for analyzing and predicting the search performance of relational data. It has been shown that the representation dimensions and data scales, as well as the search task types, are the main factors in determining search efficiency and effectiveness. In particular, the more external representations are used, the better the search task performance; the results suggest that ideal search performance occurs when the question type and the corresponding data scale representation match. The implications of the study lie in contributing to the effective design of search interfaces for relational data, especially laboratory results, which are often used in healthcare activities.
Abstract:
In this paper, we present the Cellular Dynamic Simulator (CDS) for simulating diffusion and chemical reactions within crowded molecular environments. CDS is based on a novel event-driven algorithm specifically designed for precise calculation of the timing of collisions, reactions and other events for each individual molecule in the environment. Generic mesh-based compartments allow the creation or importation of very simple or highly detailed cellular structures in a 3D environment. Multiple levels of compartments and static obstacles can be used to create a dense environment that mimics cellular boundaries and the intracellular space. The CDS algorithm takes into account volume exclusion and molecular crowding, which may impact signaling cascades in small sub-cellular compartments such as dendritic spines. With the CDS, we can simulate simple enzyme reactions, aggregation, and channel transport, as well as highly complicated chemical reaction networks of both freely diffusing and membrane-bound multi-protein complexes. Components of the CDS are defined generically, so the simulator can be applied to a wide range of environments in terms of scale and level of detail. Through an initialization GUI, a simple simulation environment can be created and populated within minutes, yet the tool is powerful enough to design complex 3D cellular architecture. The initialization tool allows visual confirmation of the environment construction prior to execution by the simulator. This paper describes the CDS algorithm and its implementation, provides an overview of the available features, and highlights their utility in demonstrations.
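The CDS collision-time calculation is not reproduced in the abstract; the sketch below shows only the event-queue skeleton that event-driven simulators of this kind share: tentative per-molecule event times kept in a min-heap and executed in strict time order, here for a toy first-order A → B conversion (the rate, molecule count, and horizon are arbitrary).

```python
import heapq
import random

def event_driven_decay(n_molecules, k, t_end, seed=0):
    """Minimal event-driven scheduler: each molecule of species A
    independently converts A -> B with first-order rate k; events
    fire in strict time order off a min-heap.
    """
    rng = random.Random(seed)
    queue = []  # (event_time, molecule_id) min-heap
    for i in range(n_molecules):
        heapq.heappush(queue, (rng.expovariate(k), i))
    converted = []
    while queue and queue[0][0] <= t_end:
        t, i = heapq.heappop(queue)   # always the earliest pending event
        converted.append((t, i))      # execute: molecule i becomes B
    return converted

events = event_driven_decay(1000, k=0.5, t_end=2.0)
print(len(events), "of 1000 molecules converted by t=2.0")
# Expected fraction ~ 1 - exp(-0.5 * 2) ~ 0.63
```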
Abstract:
The recognition of the importance of mRNA turnover in regulating eukaryotic gene expression has mandated the development of reliable, rigorous, and "user-friendly" methods to accurately measure changes in mRNA stability in mammalian cells. Frequently, mRNA stability is studied indirectly by analyzing the steady-state level of mRNA in the cytoplasm; in this case, changes in mRNA abundance are assumed to reflect only mRNA degradation, an assumption that is not always correct. Although direct measurements of mRNA decay rate can be performed with kinetic labeling techniques and transcriptional inhibitors, these techniques often introduce significant changes in cell physiology. Furthermore, many critical mechanistic issues, such as deadenylation kinetics, decay intermediates, and precursor-product relationships, cannot be readily addressed by these methods. In light of these concerns, we have previously reported transcriptional pulsing methods based on the c-fos serum-inducible promoter and the tetracycline-regulated (Tet-off) promoter systems to better explain mechanisms of mRNA turnover in mammalian cells. In this chapter, we describe and discuss in detail different protocols that use these two transcriptional pulsing methods. The information described here also provides guidelines to help develop optimal protocols for studying mammalian mRNA turnover in different cell types under a wide range of physiologic conditions.
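Once a transcriptional pulse has been chased over time, the decay rate is commonly summarized as a half-life under a first-order model. A minimal sketch of that fit (synthetic data, not a protocol from the chapter):

```python
import numpy as np

def mrna_half_life(times, abundance):
    """Estimate mRNA half-life from a decay time course.

    Assumes simple first-order decay, m(t) = m0 * exp(-k t), so
    ln m(t) is linear in t; fits k by least squares and returns
    t1/2 = ln(2) / k. Illustrative only -- real chase data often
    show a deadenylation lag before decay of the mRNA body.
    """
    slope, _ = np.polyfit(times, np.log(abundance), 1)
    return np.log(2) / -slope

t = np.array([0, 30, 60, 120, 240])     # minutes after the pulse
m = 100 * np.exp(-0.0116 * t)           # synthetic decay, t1/2 ~ 60 min
print(round(mrna_half_life(t, m)))      # -> 60
```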
Abstract:
Background. There are two child-specific classification systems for long bone fractures: the AO classification of pediatric long-bone fractures (PCCF) and the LiLa classification of pediatric fractures of long bones (LiLa classification). Neither is yet as widely established as the adult AO classification of long bone fractures. Methods. Over a period of 12 months, all long bone fractures in children were documented and classified according to the LiLa classification by experts and non-experts. Intraobserver and interobserver reliability were calculated according to Cohen (kappa). Results. A total of 408 fractures were classified. Intraobserver reliability showed almost perfect agreement for location in the skeletal and bone segment (κ = 0.91-0.95) as well as for fracture morphology (joint/shaft fracture) (κ = 0.87-0.93). Owing to differing judgments of fracture displacement in the second classification round, intraobserver reliability for the whole classification showed only moderate agreement (κ = 0.53-0.58). Interobserver reliability showed moderate agreement (κ = 0.55), often due to the low quality of the X-rays. Further differences arose from difficulties in assigning the precise transition from metaphysis to diaphysis. Conclusions. The LiLa classification is suitable and, in most cases, user-friendly for classifying long bone fractures in children. Its reliability is higher than that of established fracture-specific classifications and comparable to the AO classification of pediatric long bone fractures. Some errors were due to the low quality of the X-rays and some to difficulties in classifying the fractures themselves. Suggested improvements include a more precise definition of the metaphysis and of the kind of displacement. Overall, the LiLa classification should still be considered as an alternative for classifying pediatric long bone fractures.
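Cohen's kappa, used here for intra- and interobserver reliability, corrects observed agreement for agreement expected by chance: κ = (p_o - p_e) / (1 - p_e). A minimal sketch with hypothetical labels (not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same cases.

    p_o is the observed agreement; p_e is the agreement expected by
    chance from each rater's marginal label frequencies.
    """
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two observers classifying ten fractures
a = ["shaft", "joint", "shaft", "shaft", "joint", "shaft", "joint", "shaft", "shaft", "joint"]
b = ["shaft", "joint", "shaft", "joint", "joint", "shaft", "joint", "shaft", "shaft", "shaft"]
print(round(cohens_kappa(a, b), 2))   # -> 0.58, moderate agreement
```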