Abstract:
Routine bridge inspections require labor-intensive and highly subjective visual interpretation to determine bridge deck surface condition. Light Detection and Ranging (LiDAR), a relatively new class of survey instrument, has become a popular and increasingly used technology for providing as-built and inventory data in civil applications. While a growing number of private and governmental agencies possess terrestrial and mobile LiDAR systems, an understanding of the technology's capabilities and potential applications continues to evolve. LiDAR is a line-of-sight instrument, so care must be taken when establishing scan locations and resolution to capture data at a resolution adequate for defining the features that contribute to the analysis of bridge deck surface condition. Information such as the location, area, and volume of spalling on deck surfaces, undersides, and support columns can be derived from properly collected LiDAR point clouds. Such point clouds contain quantitative surface condition information, enabling more accurate structural health monitoring. LiDAR scans were collected at three study bridges, each displaying a different degree of degradation. A variety of commercially available analysis tools and an independently developed algorithm written in ArcGIS Python (ArcPy) were used to locate and quantify surface defects, reporting the location, volume, and area of spalls. The results were displayed visually and numerically in a user-friendly web-based decision support tool that integrates prior bridge condition metrics for comparison. LiDAR data processing procedures, along with the strengths and limitations of point clouds for defining features useful in assessing bridge deck condition, are discussed. Point cloud density and incidence angle are two attributes that must be managed carefully to ensure the collected data are of high quality and useful for bridge condition evaluation. 
When collected properly, LiDAR data can be analyzed to provide a useful data set from which to derive bridge deck condition information.
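The spall-quantification idea described above can be sketched in a few lines. The snippet below is not the authors' ArcPy algorithm; it is a minimal NumPy stand-in that fits a reference plane to deck points by least squares, flags points falling more than a threshold below that plane as spall candidates, and approximates area and volume on a fixed grid-cell footprint (`depth_thresh` and `cell` are illustrative assumptions).

```python
import numpy as np

def detect_spalls(points, depth_thresh=0.01, cell=0.05):
    """Flag candidate spall points lying below a least-squares deck plane.

    points: (N, 3) array of x, y, z deck coordinates in meters, assumed
    to be sampled on a roughly uniform grid with spacing `cell`.
    Returns (mask, area_m2, volume_m3) under a simple per-cell model.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Fit the deck plane z = a*x + b*y + c by least squares.
    A = np.column_stack([x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    residual = z - A @ coeffs           # negative => below the deck plane
    mask = residual < -depth_thresh     # candidate spall points
    area = mask.sum() * cell ** 2       # one grid cell per point
    volume = float(np.sum(-residual[mask]) * cell ** 2)
    return mask, float(area), volume
```

A production workflow would also cluster the flagged points into individual spalls and account for incidence angle and point density, the two data-quality attributes highlighted above.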
Abstract:
The identification and accurate location of centers of brain activity are vital both in neurosurgery and in brain research. This study aimed to provide a non-invasive, non-contact, accurate, rapid, and user-friendly means of producing functional images intraoperatively. To this end, a full-field laser Doppler imager was developed and integrated within the surgical microscope, and perfusion images of the cortical surface were acquired during awake surgery while the patient performed a predetermined task. The regions of brain activity showed a clear signal (10-20% with respect to baseline) related to the stimulation protocol, which led to intraoperative functional brain maps of strong statistical significance that correlate well with preoperative fMRI and intraoperative cortical electro-stimulation. These initial results, achieved with a prototype device and wavelet-based regressor analysis (the hemodynamic response function being derived from MRI applications), demonstrate the feasibility of LDI as an appropriate technique for intraoperative functional brain imaging.
Abstract:
CampusContent (CC) is a DFG-funded competence center for eLearning with its own portal. It links content and people to support the sharing and reuse of high-quality learning materials and codified pedagogical know-how, such as learning objectives, pedagogical scenarios, recommended learning activities, and learning paths. The heart of the portal is a distributed repository whose contents are linked to various other CampusContent portals. Integrated into each portal are user-friendly tools for designing reusable learning content, exercises, and templates for learning units and courses. Specialized authoring tools permit the configuration, adaptation, and automatic generation of interactive Flash animations using Adobe's Flexbuilder technology. More coarse-grained content components, such as complete learning units and entire courses in which contents and materials taken from the repository are embedded, can be created with XML-based authoring tools. Open service interfaces allow deep or shallow integration of the portal provider's preferred authoring and learning tools. The portal is built on top of the enterprise content management system Alfresco, which comes with social networking functionality that has been adapted to accommodate collaboration, sharing, and reuse within trusted communities of practice.
Abstract:
Kriging-based optimization relying on noisy evaluations of complex systems has recently motivated contributions from various research communities. Five strategies have been implemented in the DiceOptim package. The corresponding functions constitute a user-friendly tool for solving expensive noisy optimization problems in a sequential framework, while offering some flexibility to advanced users. In addition, the implementation is done in a unified environment, making this package a useful device for studying the relative performance of existing approaches depending on the experimental setup. An overview of the package structure and interface is provided, as well as a description of the strategies and some insight into the implementation challenges and the proposed solutions. The strategies are compared to some existing optimization packages on analytical test functions and show promising performance.
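The kind of sequential strategy implemented in DiceOptim can be illustrated with a generic kriging loop. The sketch below is not a port of any DiceOptim function (the package itself is R code); it is a plain expected-improvement iteration with the best observed value as a plug-in, and the Gaussian kernel length-scale, noise level, and test objective are illustrative assumptions.

```python
import math
import numpy as np

def sq_exp(a, b, ls=0.15):
    """Squared-exponential covariance between two 1-D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def krige(X, y, Xc, noise=1e-4):
    """Kriging predictor: posterior mean and std at candidate points Xc."""
    K = sq_exp(X, X) + noise * np.eye(len(X))
    Ks = sq_exp(Xc, X)
    ym = y.mean()                                   # simple detrending
    mu = ym + Ks @ np.linalg.solve(K, y - ym)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sd, y_best):
    """EI for minimization, using the best observed value as plug-in."""
    z = (y_best - mu) / sd
    Phi = np.vectorize(lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2))))(z)
    phi = np.exp(-0.5 * z ** 2) / math.sqrt(2 * math.pi)
    return (y_best - mu) * Phi + sd * phi

rng = np.random.default_rng(1)
f = lambda x: (x - 0.6) ** 2                        # illustrative objective
noisy = lambda x: f(x) + 0.01 * rng.standard_normal(np.shape(x))

X = rng.uniform(0, 1, 5)                            # initial design
y = noisy(X)
cand = np.linspace(0, 1, 201)
for _ in range(15):                                 # sequential design loop
    mu, sd = krige(X, y, cand)
    X = np.append(X, cand[np.argmax(expected_improvement(mu, sd, y.min()))])
    y = np.append(y, noisy(X[-1]))
best = X[np.argmin(y)]                              # best noisy observation
```

With noisy evaluations, a plain `y.min()` plug-in is known to be optimistic; handling that more carefully is exactly the kind of design decision in which the implemented strategies differ.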
Abstract:
For smart city applications, a key requirement is to disseminate data collected from both scalar and multimedia wireless sensor networks to thousands of end-users. Furthermore, the information must be delivered to non-specialist users in a simple, intuitive, and transparent manner. In this context, we present Sensor4Cities, a user-friendly tool that enables data dissemination to large audiences using social networks and/or web pages. The user can request and receive monitored information through social networks, e.g., Twitter and Facebook, chosen for their popularity, user-friendly interfaces, and easy dissemination. Additionally, the user can collect or share information from smart city services through web pages, which also include a mobile version for smartphones. Finally, the tool can be configured to periodically monitor environmental conditions, specific behaviors, or abnormal events, and to notify users asynchronously. Sensor4Cities improves data delivery for individuals or groups of users of smart city applications and encourages the development of new user-friendly services.
Abstract:
The ever-increasing popularity of apps stems from their ability to provide highly customized services to the user. The flip side is that, in order to provide such services, apps need access to very sensitive private information about the user. This leads to malicious apps that collect personal user information in the background and exploit it in various ways. Studies have shown that current app vetting processes, which are mainly restricted to install-time verification mechanisms, are incapable of detecting and preventing such attacks. We argue that the missing fundamental aspect here is a comprehensive and usable mobile privacy solution: one that not only protects the user's location information but also other equally sensitive user data, such as the user's contacts and documents, and that is usable by the average user who does not understand or care about low-level technical details. To bridge this gap, we propose privacy metrics that quantify low-level app accesses in terms of privacy impact and transform them into high-level, user-understandable ratings. We also provide the design and architecture of our Privacy Panel app, which presents the computed ratings in a graphical, user-friendly format and allows the user to define policies based on them. Finally, experimental results are given to validate the scalability of the proposed solution.
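The metric-to-rating transformation described above can be illustrated with a toy scoring function. The weights, saturation constant, and 1-5 scale below are invented for illustration and are not the paper's published metrics; the point is only the shape of the pipeline: raw access counts in, a single user-facing rating out.

```python
# Hypothetical per-category sensitivity weights (NOT the paper's values):
# a higher weight means a leak of that data type is more damaging.
WEIGHTS = {"location": 0.9, "contacts": 0.8, "documents": 0.7, "network": 0.3}

def privacy_rating(access_counts, max_expected=100):
    """Collapse low-level access counts into a 1-5 user-facing rating.

    access_counts: dict mapping data category -> number of background
    accesses observed. Returns (impact in [0, 1], rating; 5 = most private).
    """
    impact = 0.0
    for category, count in access_counts.items():
        freq = min(count / max_expected, 1.0)     # saturate heavy access
        impact = max(impact, WEIGHTS.get(category, 0.5) * freq)
    rating = 5 - round(impact * 4)                # 1 (invasive) .. 5 (private)
    return impact, rating
```

Taking the maximum rather than the sum keeps one highly sensitive access from being averaged away by many benign ones; that design choice, like the constants, is an assumption of this sketch.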
Abstract:
The recognition of the importance of mRNA turnover in regulating eukaryotic gene expression has mandated the development of reliable, rigorous, and "user-friendly" methods to accurately measure changes in mRNA stability in mammalian cells. Frequently, mRNA stability is studied indirectly by analyzing the steady-state level of mRNA in the cytoplasm; in this case, changes in mRNA abundance are assumed to reflect only mRNA degradation, an assumption that is not always correct. Although direct measurements of mRNA decay rate can be performed with kinetic labeling techniques and transcriptional inhibitors, these techniques often introduce significant changes in cell physiology. Furthermore, many critical mechanistic issues as to deadenylation kinetics, decay intermediates, and precursor-product relationships cannot be readily addressed by these methods. In light of these concerns, we have previously reported transcriptional pulsing methods based on the c-fos serum-inducible promoter and the tetracycline-regulated (Tet-off) promoter systems to better explain mechanisms of mRNA turnover in mammalian cells. In this chapter, we describe and discuss in detail different protocols that use these two transcriptional pulsing methods. The information described here also provides guidelines to help develop optimal protocols for studying mammalian mRNA turnover in different cell types under a wide range of physiologic conditions.
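When a transcriptional pulsing time course has been collected, the decay data are commonly reduced to a half-life by assuming first-order kinetics, A(t) = A0 e^(-kt), so that ln A is linear in time. The sketch below is a generic log-linear fit, not part of the chapter's protocols.

```python
import numpy as np

def mrna_half_life(t, abundance):
    """Estimate mRNA half-life from a decay time course.

    t: time points (e.g. minutes after the transcriptional pulse ends).
    abundance: mRNA signal at each time point (arbitrary units).
    Fits ln(abundance) = ln(A0) - k*t by least squares; t_1/2 = ln(2)/k.
    """
    slope, _ = np.polyfit(t, np.log(abundance), 1)
    return np.log(2) / -slope
```

Note that deadenylation-coupled decay often shows a lag before exponential loss begins, so in practice the choice of fit window matters.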
Abstract:
Background. There are two child-specific classification systems for long bone fractures: the AO classification of pediatric long-bone fractures (PCCF) and the LiLa classification of pediatric fractures of long bones (LiLa classification). Neither is yet widely established in comparison to the adult AO classification of long bone fractures. Methods. Over a period of 12 months, all long bone fractures in children were documented and classified according to the LiLa classification by experts and non-experts. Intraobserver and interobserver reliability were calculated according to Cohen (kappa). Results. A total of 408 fractures were classified. Intraobserver reliability showed almost perfect agreement for location in the skeletal and bone segment (κ = 0.91-0.95) as well as for morphology (joint/shaft fracture) (κ = 0.87-0.93). Owing to differing judgments of fracture displacement in the second classification round, the intraobserver reliability of the whole classification showed only moderate agreement (κ = 0.53-0.58). Interobserver reliability showed moderate agreement (κ = 0.55), often due to the low quality of the X-rays. Further differences arose from difficulties in assigning the precise transition from metaphysis to diaphysis. Conclusions. The LiLa classification is suitable and, in most cases, user-friendly for classifying long bone fractures in children. Its reliability is higher than that of established fracture-specific classifications and comparable to the AO classification of pediatric long bone fractures. Some errors were due to the low quality of the X-rays and some to difficulties in classifying the fractures themselves. Suggested improvements include a more precise definition of the metaphysis and of the kind of displacement. Overall, the LiLa classification should be considered an alternative for classifying pediatric long bone fractures.
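The Cohen's kappa statistic behind the reliability figures above corrects raw agreement for the agreement expected by chance alone; a minimal sketch:

```python
import numpy as np

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters assigning categorical labels."""
    r1, r2 = np.asarray(rater1), np.asarray(rater2)
    categories = np.union1d(r1, r2)
    p_observed = np.mean(r1 == r2)                 # raw agreement
    # Chance agreement: product of the raters' marginal frequencies.
    p_chance = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories)
    return (p_observed - p_chance) / (1.0 - p_chance)
```

On the conventional scale, 0.41-0.60 counts as moderate and above 0.80 as almost perfect agreement, which is how the kappa values reported above are labeled.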
Abstract:
There is great demand for easily accessible, user-friendly dietary self-management applications. Yet accurate, fully automatic estimation of nutritional intake using computer vision methods remains an open research problem. One key element of this problem is volume estimation, which can be computed from 3D models obtained using multi-view geometry. This paper presents a computational system for volume estimation based on the processing of two meal images. A 3D model of the served meal is reconstructed from the acquired images, and the volume is computed from the resulting shape. The algorithm was tested on food models (dummy foods) with known volume and on real served food. Volume accuracy was on the order of 90%, while the total execution time was below 15 seconds per image pair. The proposed system combines simple and computationally affordable methods for 3D reconstruction, remained stable throughout the experiments, operates in near real time, and places minimal constraints on users.
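Once the 3D model is reconstructed, the volume step itself is simple integration. The sketch below assumes the reconstruction has already been rasterized into a height map above the plate plane; it is a simplified stand-in for the paper's shape-based computation, not its actual pipeline.

```python
import numpy as np

def volume_from_height_map(height_cm, cell_area_cm2):
    """Integrate a reconstructed height map to estimate food volume.

    height_cm: 2D array of surface heights (cm) above the plate plane,
    one value per rasterized grid cell; cell_area_cm2: area of one cell.
    Heights below the plate (reconstruction noise) are clipped to zero.
    """
    return float(np.sum(np.clip(height_cm, 0.0, None)) * cell_area_cm2)
```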
Abstract:
Patients with amnestic mild cognitive impairment (aMCI) are at high risk for developing Alzheimer's disease. Besides episodic memory dysfunction, they show deficits in accessing the contextual knowledge that further specifies a general task, such as spatial navigation or executive function (EF) tasks like virtual action planning. Virtual reality (VR) environments have already been used successfully in cognitive rehabilitation and show increasing potential for use in neuropsychological evaluation, allowing for greater ecological validity while being more engaging and user-friendly. In our study we employed the in-house virtual action planning museum (VAP-M) platform and a sample of 25 MCI patients and 25 controls, in order to investigate deficits in spatial navigation, prospective memory, and executive function. In addition, we used the morphology of late components in event-related potential (ERP) responses as a marker of cognitive dysfunction. The related measurements were fed to a common classification scheme, facilitating a direct comparison of the two approaches. Our results indicate that both the VAP-M and ERP averages were able to differentiate between healthy elders and patients with amnestic mild cognitive impairment, and agree with the findings of the virtual action planning supermarket (VAP-S). The sensitivity (specificity) was 100% (98%) for the VAP-M data and 87% (90%) for the ERP responses. Considering that ERPs have been shown to advance the early detection and diagnosis of "presymptomatic AD," the suggested VAP-M platform appears to be an appealing alternative.
Abstract:
Operating room (OR) team safety training and learning in the field of dialysis access is well suited to the use of simulators, simulated case learning, and root cause analysis of adverse outcomes. The objectives of OR team training are to improve communication and leadership skills, to use checklists, and to prevent errors. Other objectives are to promote a change in attitudes towards vascular access by learning through mistakes in a nonpunitive environment, to positively impact employee performance, and to increase staff retention by making the workplace safer, more efficient, and user-friendly.
Abstract:
BACKGROUND Implementation of user-friendly, real-time electronic medical records for patient management may lead to improved adherence to clinical guidelines and improved quality of patient care. We detail the systematic, iterative process that the implementation partners, Lighthouse clinic and Baobab Health Trust, employed to develop and implement a point-of-care electronic medical records system in an integrated, public clinic in Malawi that serves HIV-infected and tuberculosis (TB) patients. METHODS Baobab Health Trust, the system developers, conducted a series of technical and clinical meetings with Lighthouse and the Ministry of Health to determine specifications. Multiple pre-testing sessions assessed patient flow, question clarity, and information sequencing, and verified compliance with national guidelines. Final components of the TB/HIV electronic medical records system include: patient demographics; anthropometric measurements; laboratory samples and results; HIV testing; WHO clinical staging; TB diagnosis; family planning; clinical review; and drug dispensing. RESULTS Our experience suggests that an electronic medical records system can improve patient management, enhance integration of TB/HIV services, and improve provider decision-making. However, despite sufficient funding and motivation, several challenges delayed system launch, including: expansion of system components to include HIV testing and counseling services; changes in the national antiretroviral treatment guidelines that required system revision; and low confidence in using the system among new healthcare workers. To ensure a more robust and agile system that met all stakeholder and user needs, our electronic medical records launch was delayed by more than a year. Open communication with stakeholders, careful consideration of ongoing provider input, and a well-functioning, backup, paper-based TB registry helped ensure successful implementation and sustainability of the system. 
Additional on-site technical support provided reassurance and swift problem-solving during the extended launch period. CONCLUSION Even when system users are closely involved in the design and development of an electronic medical record system, it is critical to allow sufficient time for software development, solicitation of detailed feedback from both users and stakeholders, and iterative system revisions to successfully transition from paper to point-of-care electronic medical records. For those in low-resource settings, electronic medical records for integrated care are a feasible and positive innovation.
Abstract:
A wide variety of spatial data collection efforts are ongoing throughout local, state, and federal agencies, private firms, and non-profit organizations. Each effort is established for a different purpose, but organizations and individuals often collect and maintain the same or similar information. The United States federal government has undertaken many initiatives, such as the National Spatial Data Infrastructure, the National Map, and Geospatial One-Stop, to reduce duplicative spatial data collection and promote the coordinated use, sharing, and dissemination of spatial data nationwide. A key premise in most of these initiatives is that no national government will be able to gather and maintain more than a small percentage of the geographic data that users want. Thus, national initiatives typically depend on the cooperation of those already gathering spatial data, and those using GIS to meet specific needs, to help construct and maintain these spatial data infrastructures and geo-libraries for their nations (Onsrud 2001). Some of the impediments to widespread spatial data sharing are well known from directly asking GIS data producers why they are not currently involved in creating datasets in common or compatible formats, documenting their datasets in a standardized metadata format, or making their datasets more readily available to others through data clearinghouses or geo-libraries. The research described in this thesis addresses the impediments to wide-scale spatial data sharing faced by GIS data producers and explores a new conceptual data-sharing approach, the Public Commons for Geospatial Data, that supports user-friendly metadata creation, open access licenses, archival services, and documentation of the parent lineage of the contributors and value-adders of digital spatial data sets.
Abstract:
High Angular Resolution Diffusion Imaging (HARDI) techniques, including Diffusion Spectrum Imaging (DSI), have been proposed to resolve crossing and other complex fiber architecture in human brain white matter. In these methods, directional information of diffusion is inferred from the peaks in the orientation distribution function (ODF). Extensive histology studies on macaque brain, cat cerebellum, rat hippocampus and optic tracts, and bovine tongue are qualitatively in agreement with DSI-derived ODFs and tractography. However, only two studies in the literature have validated DSI results using physical phantoms, and neither was performed on a clinical MRI scanner. Moreover, the few studies that optimized DSI in a clinical setting did not involve a comparison against physical phantoms. Finally, there is a lack of consensus on the necessary pre- and post-processing steps in DSI, and ground truth diffusion fiber phantoms are not yet standardized. Therefore, the aims of this dissertation were to design and construct novel diffusion phantoms, employ post-processing techniques to systematically validate and optimize DSI-derived fiber ODFs in crossing regions on a clinical 3T MR scanner, and develop user-friendly software for DSI data reconstruction and analysis. Phantoms with a fixed configuration of two fibers crossing at 90° and 45°, respectively, along with a phantom with three fibers crossing at 60°, were constructed using novel hollow plastic capillaries and novel placeholders. T2-weighted MRI results on these phantoms demonstrated high SNR, homogeneous signal, and an absence of air bubbles. In addition, a technique to deconvolve the response function of an individual peak from the overall ODF was implemented, alongside the other DSI post-processing steps. This technique greatly improved the angular resolution of otherwise unresolvable peaks in a crossing fiber ODF. 
The effects of DSI acquisition parameters and SNR on the resultant angular accuracy of DSI on the clinical scanner were studied and quantified using the developed phantoms. With high angular direction sampling and reasonable levels of SNR, quantification of the crossing regions in the 90°, 45°, and 60° phantoms resulted in successful detection of angular information, with mean ± SD of 86.93° ± 2.65°, 44.61° ± 1.6°, and 60.03° ± 2.21°, respectively, while simultaneously enhancing the ODFs in regions containing single fibers. To demonstrate the applicability of these validated methodologies, improvements in ODFs and fiber tracking in known crossing fiber regions of normal human subjects were shown, and an in-house MATLAB software package that streamlines DSI data reconstruction and post-processing, with an easy-to-use graphical user interface, was developed. In conclusion, the phantoms developed in this dissertation offer a means of providing ground truth for validating reconstruction and tractography algorithms of various diffusion models (including DSI). In addition, the deconvolution methodology, when applied as an additional DSI post-processing step, significantly improved the angular accuracy of ODFs obtained from DSI, and should be applicable to ODFs obtained from other high angular resolution diffusion imaging techniques.
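The angular-accuracy figures above (e.g. 86.93° ± 2.65° for the 90° phantom) amount to measuring, voxel by voxel, the angle between the two detected ODF peak directions and summarizing the distribution. A sketch, assuming the peak vectors have already been extracted from the ODFs; the absolute value folds angles into [0°, 90°] to respect the antipodal symmetry of diffusion peaks:

```python
import numpy as np

def crossing_angle_stats(peak_pairs):
    """Mean and SD of the angle between paired ODF peak directions.

    peak_pairs: iterable of (v1, v2) direction vectors, one pair per
    crossing-region voxel. Returns (mean_deg, sd_deg).
    """
    angles = []
    for v1, v2 in peak_pairs:
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        # abs() folds antipodal directions together (peaks are axes).
        angles.append(np.degrees(np.arccos(np.clip(abs(cos), 0.0, 1.0))))
    return float(np.mean(angles)), float(np.std(angles))
```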
Abstract:
CIPWFULL is a user-friendly, stand-alone FORTRAN program designed to calculate the comprehensive CIPW normative mineral composition of igneous rocks while strictly adhering to the original formulation of the CIPW protocol. This faithful adherence alleviates inaccuracies in normative mineral calculations produced by programs commonly used by petrologists. Additionally, several of the most important petrological, mineralogical, and discriminatory parameters and indexes of igneous rocks are calculated by the program. Along with all the regular major oxide elements, all the significant minor elements whose contents can potentially affect the CIPW normative mineral composition are included. CIPWFULL also calculates oxidation ratios for igneous rock samples whose analyses report only one oxidation state of iron. It also provides an option for normalizing analyses to unity on an anhydrous basis in order to facilitate comparison of norms among rock groups. Other capabilities of the program handle rare situations, such as the presence of cancrinite or the exclusion of rare rocks such as carbonatites from the norm calculation. CIPWFULL is efficient and flexible: it allows user-defined, free-format input of all the chemical species and permits entry of minor elements as parts per million or as oxide percentages. Results are printed in a formatted ASCII text file and may optionally be cast into space-delimited text files ready to be imported into general spreadsheet programs. CIPWFULL is DOS-based and is implemented on WINDOWS and mainframe platforms.
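The first arithmetic step of any CIPW-style norm converts oxide weight percentages into molecular proportions by dividing by molecular weight; the allocation of those proportions to normative minerals then follows the CIPW rules. A minimal illustration of that first step only (standard molecular weights; the full CIPWFULL allocation logic is not reproduced here):

```python
# Standard molecular weights (g/mol) for a few common oxides.
MOL_WT = {"SiO2": 60.084, "Al2O3": 101.961, "FeO": 71.844,
          "MgO": 40.304, "CaO": 56.077, "Na2O": 61.979, "K2O": 94.196}

def molecular_proportions(oxide_wt_pct):
    """Convert oxide wt% (dict oxide -> wt%) to molecular proportions."""
    return {ox: wt / MOL_WT[ox] for ox, wt in oxide_wt_pct.items()}
```

In a full norm these proportions would next be combined by the allocation rules, e.g. one Na2O with one Al2O3 and six SiO2 to form normative albite (Na2O·Al2O3·6SiO2).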