977 results for Technical evaluation of web interfaces
Abstract:
The research activity carried out during the PhD course in Electrical Engineering belongs to the branch of electric and electronic measurements. The main subject of this thesis is a distributed measurement system to be installed in Medium Voltage power networks, together with the method developed to analyze the data acquired by the measurement system and to monitor power quality. Chapter 2 illustrates the increasing interest in power quality in electrical systems, reporting the international research activity on the problem and the relevant standards and guidelines issued. The quality of the voltage provided by utilities, and influenced by customers at the various points of a network, emerged as a concern only in recent years, in particular as a consequence of energy market liberalization. Traditionally, the quality of the delivered energy has been associated mostly with its continuity, so reliability was the main characteristic to be ensured in power systems. Nowadays, the number and duration of interruptions are the “quality indicators” most commonly perceived by customers; for this reason, a short section is also dedicated to network reliability and its regulation. In this context it should be noted that, although the measurement system developed during the research activity belongs to the field of power quality evaluation systems, the information registered in real time by its remote stations can be used to improve system reliability as well. Given the vast range of power quality degrading phenomena that can occur in distribution networks, the study has been focused on electromagnetic transients affecting line voltages. 
The outcome of this study has been the design and realization of a distributed measurement system which continuously monitors the phase signals at different points of a network, detects the occurrence of transients superposed on the fundamental steady-state component, and registers the time of occurrence of such events. The data set is finally used to locate the source of the transient disturbance propagating along the network lines. Most of the oscillatory transients affecting line voltages are due to faults occurring at any point of the distribution system and must be detected before the protection equipment intervenes. An important conclusion is that the method can improve the reliability of the monitored network, since knowledge of the location of a fault allows the energy manager to reduce as much as possible both the area of the network to be disconnected for protection purposes and the time spent by technical staff to recover from the abnormal condition and/or the damage. The part of the thesis presenting the results of this study is structured as follows: chapter 3 deals with the propagation of electromagnetic transients in power systems, defining the characteristics and causes of the phenomena and briefly reporting the theory and approaches used to study transient propagation. Then the state of the art concerning methods to detect and locate faults in distribution networks is presented. Finally, attention is paid to the particular technique adopted for this purpose in the thesis, and to the methods developed on the basis of that approach. Chapter 4 reports the configuration of the distribution networks to which the fault location method has been applied by means of simulations, together with the results obtained case by case. In this way the performance of the location procedure is tested first under ideal and then under realistic operating conditions. 
Chapter 5 presents the measurement system designed to implement the transient detection and fault location method. The hardware belonging to the measurement chain of every acquisition channel in the remote stations is described. The global measurement system is then characterized by considering the non-ideal aspects of each device that can contribute to the final combined uncertainty on the estimated position of the fault in the network under test. Finally, this parameter is computed by means of a numerical procedure, according to the Guide to the Expression of Uncertainty in Measurement. The last chapter describes a device designed and realized during the PhD activity to replace the commercial capacitive voltage divider belonging to the conditioning block of the measurement chain. This study aimed at providing an alternative to the transducer in use that could offer equivalent performance at lower cost. In this way, the economic impact of the investment associated with the whole measurement system would be significantly reduced, making the application of the method much more feasible.
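The general principle behind locating a fault from transient arrival times registered at remote stations can be illustrated by the classic double-ended travelling-wave formula. The sketch below is not the thesis's actual procedure; the station layout, line length and wave speed are illustrative assumptions.

```python
# Hedged sketch of double-ended travelling-wave fault location: two stations
# at the ends of a line timestamp the arrival of the same transient, and the
# arrival-time difference gives the fault position along the line.

def locate_fault(t_a, t_b, line_length_m, wave_speed_m_s=2.9e8):
    """Estimate the fault distance from station A on a line of known length,
    given the transient arrival times t_a and t_b (seconds) at the two ends."""
    return 0.5 * (line_length_m + wave_speed_m_s * (t_a - t_b))

# Example: on a 10 km line the surge reaches station A 10 microseconds
# before station B, so the fault lies on A's half of the line.
d = locate_fault(0.0, 10e-6, 10_000)
```

Equal arrival times place the fault at mid-line, which is a quick sanity check on the sign convention; the accuracy of any such method hinges on the synchronization of the stations' clocks, which is why the thesis devotes a chapter to the combined measurement uncertainty.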
Abstract:
Broad consensus has been reached within the Education and Cognitive Psychology research communities on the need to center the learning process on experimentation and the concrete application of knowledge, rather than on a bare transfer of notions. Several advantages arise from this educational approach, ranging from the reinforcement of students' learning, to the increased opportunity for students to gain greater insight into the topics studied, up to the possibility for learners to acquire practical skills and long-lasting proficiency. This is especially true in Engineering education, where integrating conceptual knowledge and practical skills assumes strategic importance. In this scenario, learners are called to play a primary role: they are actively involved in the construction of their own knowledge, instead of passively receiving it. As a result, traditional, teacher-centered learning environments should be replaced by novel learner-centered solutions. Information and Communication Technologies enable the development of innovative solutions that answer the need for experimentation supports in educational contexts. Virtual Laboratories, Adaptive Web-Based Educational Systems and Computer-Supported Collaborative Learning environments can significantly foster different learner-centered instructional strategies, offering the opportunity to enhance personalization, individualization and cooperation. More specifically, they allow students to explore different kinds of materials, to access and compare several information sources, to face real or realistic problems, and to work on authentic and multi-faceted case studies. In addition, they encourage cooperation among peers and provide support through coached and scaffolded activities aimed at fostering reflection and meta-cognitive reasoning. 
This dissertation guides readers through this research field, presenting both the theoretical and applied results of research aimed at designing an open, flexible, learner-centered virtual lab for supporting students in learning Information Security.
Abstract:
A recent initiative of the European Space Agency (ESA) aims at the definition and adoption of a software reference architecture for use in the on-board software of future space missions. Our PhD project is placed in the context of that effort. At the outset of our work we gathered the industrial needs relevant to ESA and all the main European space stakeholders, and we were able to consolidate a set of technical high-level requirements for their fulfillment. The conclusion we reached from that phase confirmed that the adoption of a software reference architecture was indeed the best solution for fulfilling the high-level requirements. The software reference architecture we set out to build rests on four constituents: (i) a component model, to design the software as a composition of individually verifiable and reusable software units; (ii) a computational model, to ensure that the architectural description of the software is statically analyzable; (iii) a programming model, to ensure that the implementation of the design entities conforms with the semantics, the assumptions and the constraints of the computational model; and (iv) a conforming execution platform, to actively preserve at run time the properties asserted by static analysis. The nature, feasibility and fitness of constituents (ii), (iii) and (iv) were already proved by the author in an international project that preceded the commencement of the PhD work. The core of the PhD project was therefore centered on the design and prototype implementation of constituent (i), the component model. Our proposed component model is centered on: (i) rigorous separation of concerns, achieved through support for design views and careful allocation of concerns to dedicated software entities; (ii) support for the specification and model-based analysis of extra-functional properties; and (iii) the inclusion of space-specific concerns.
Abstract:
This thesis is concerned with the role played by software tools in the analysis and dissemination of linguistic corpora and their contribution to a more widespread adoption of corpora in different fields. Chapter 1 contains an overview of some of the most relevant corpus analysis tools available today, presenting their most interesting features and some of their drawbacks. Chapter 2 begins with an explanation of the reasons why none of the available tools appears to satisfy the requirements of the user community, and then continues with a technical overview of the current status of the new system developed as part of this work. This presentation is followed by highlights of the features that make the system appealing to users and corpus builders (i.e. scholars willing to make their corpora available to the public). The chapter concludes with an indication of future directions for the project and information on the current availability of the software. Chapter 3 describes the design of an experiment devised to evaluate the usability of the new system in comparison with another corpus tool. Usage of the tool was tested in the context of a documentation task performed on a real assignment during a translation class in a master's degree course. Chapter 4 presents the findings of the experiment on two levels of analysis: first, a discussion of how participants interacted with and evaluated the two corpus tools in terms of interface and interaction design, usability and perceived ease of use; then, an analysis of how users interacted with corpora to complete the task and what kinds of queries they submitted. Finally, some general conclusions are drawn and areas for future work are outlined.
Abstract:
Photovoltaic (PV) solar panels generally produce electricity in the 6% to 16% efficiency range, the rest of the incident energy being dissipated as thermal losses. To recover part of this energy, hybrid photovoltaic-thermal (PVT) systems have been devised: devices that simultaneously convert solar energy into electricity and heat. It is thus interesting to study the PVT system globally, from different points of view, in order to evaluate the advantages and disadvantages of this technology and its possible uses. In Chapter II, the numerical optimization of the PVT absorber by a genetic algorithm is carried out, analyzing different internal channel profiles in order to find the right compromise between performance and technical and economic feasibility. In Chapter III, thanks to a mobile structure built at the university lab, the electrical and thermal output power of PVT panels is compared experimentally with that of separate photovoltaic and solar thermal production. By collecting a large amount of experimental data under different seasonal conditions (ambient temperature, irradiation, wind, etc.), the aim of this mobile structure has been to evaluate the average increase or decrease in both thermal and electrical efficiency, with respect to separate production, throughout the year. In Chapter IV, new equation-based models of PVT and solar thermal panels in steady-state conditions are developed in the Dymola software, which uses the Modelica language. This permits, in a simpler way than with previous system-modelling software, modelling and evaluating different concepts for the PVT panel structure before prototyping and measuring it. Chapter V concerns the definition of the PVT boundary conditions within an HVAC system. This was done through year-long simulations with the Polysun software, in order to assess the best solar-assisted integrated structure by means of the F_save (fractional solar savings) factor. 
Finally, Chapter VI presents the conclusion and the perspectives of this PhD work.
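The genetic-algorithm optimization mentioned for Chapter II can be sketched in miniature. The code below is a generic GA skeleton, not the thesis's implementation: the population size, mutation rate and, above all, the toy fitness function (which simply rewards channel-depth genes near an arbitrary optimum of 0.5) are illustrative stand-ins for the real thermo-economic objective.

```python
import random

# Minimal genetic-algorithm sketch: truncation selection with elitism,
# one-point crossover, and per-gene mutation over a fixed-length genome
# representing a discretized channel profile.

def fitness(profile):
    # Toy objective: penalize squared deviation from a depth of 0.5.
    return -sum((g - 0.5) ** 2 for g in profile)

def evolve(pop_size=30, genes=8, generations=50, mut_rate=0.1, seed=0):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genes)          # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(genes):                 # per-gene mutation
                if rng.random() < mut_rate:
                    child[i] = rng.random()
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

In a real absorber optimization, `fitness` would be replaced by a thermal/economic model evaluated for each candidate profile, which is where virtually all of the computational cost lies.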
Abstract:
To protect motorists and avoid tort liability, highway agencies expend considerable resources to repair damaged longitudinal barriers, such as w-beam guardrails. With limited funding available, though, highway agencies are unable to maintain all field-installed systems in the ideal as-built condition. Instead, these agencies focus on repairing only damage that has a detrimental effect on the safety performance of the barrier. The distinction between minor damage and more severe performance-altering damage, however, is not always clear. This paper presents a critical review of current United States (US) and Canadian criteria for deciding when to repair damaged longitudinal barriers. Barrier repair policies were obtained via a comprehensive literature review and a survey of US and Canadian transportation agencies. In an analysis of the maintenance procedures of 40 US states and 8 Canadian transportation agencies, fewer than one-third of highway agencies were found to have quantitative measures to determine when barrier repair is warranted. In addition, no engineering basis for the current US barrier repair guidelines could be found. These findings underscore the importance of developing quantitative barrier repair guidelines based on a strong technical foundation.
Abstract:
BACKGROUND AND PURPOSE: Currently, several new stent retriever devices for acute stroke treatment are under development and early clinical evaluation. Preclinical testing under standardized conditions is an important first step to evaluate the technical performance and potential of these devices. The aim of this study was to evaluate the immediate recanalization effect, recanalization efficacy, thrombus-device interaction, and safety of a new stent retriever intended for thrombectomy in patients with acute stroke. MATERIAL AND METHODS: The pREset thrombectomy device (4 × 20 mm) was evaluated in 16 vessel occlusions in an established swine model. Radiopaque thrombi (10-mm length) were used for visualization of thrombus-device interaction during application and retrieval. Flow-restoration effect immediately after deployment and after 5-minute embedding time before retrieval, recanalization rate after retrieval, thromboembolic events, and complications were assessed. High-resolution FPCT was performed to illustrate thrombus-device interaction during the embedding time. RESULTS: Immediate flow restoration was achieved in 75% of occlusions. An increase or stable percentage of recanalizations during embedding time before retrieval was seen in 56.3%; a decrease, in 12.5%; reocclusion of a previously recanalized vessel, in 18.8%; and no recanalization effect at all, in 12.5%. Complete recanalization (TICI 3) after retrieval was achieved in 93.8%; partial recanalization (TICI 2b), in 6.2%. No distal thromboembolic events were observed. High-resolution FPCT illustrated entrapment of the thrombus between the stent struts and compression against the contralateral vessel wall, leading to partial flow restoration. During retrieval, the thrombus was retained in a straight position within the stent struts. CONCLUSIONS: In this experimental study, the pREset thrombus retriever showed a high recanalization rate in vivo. 
High-resolution FPCT allows detailed illustration of the thrombus-device interaction during embedding time and is advocated as an add-on tool to the animal model used in this study.
Abstract:
Data on antimicrobial use play a key role in the development of policies for the containment of antimicrobial resistance. On-farm data could provide a detailed overview of the antimicrobial use, but technical and methodological aspects of data collection and interpretation, as well as data quality need to be further assessed. The aims of this study were (1) to quantify antimicrobial use in the study population using different units of measurement and contrast the results obtained, (2) to evaluate data quality of farm records on antimicrobial use, and (3) to compare data quality of different recording systems. During 1 year, data on antimicrobial use were collected from 97 dairy farms. Antimicrobial consumption was quantified using: (1) the incidence density of antimicrobial treatments; (2) the weight of active substance; (3) the used daily dose and (4) the used course dose for antimicrobials for intestinal, intrauterine and systemic use; and (5) the used unit dose, for antimicrobials for intramammary use. Data quality was evaluated by describing completeness and accuracy of the recorded information, and by comparing farmers' and veterinarians' records. Relative consumption of antimicrobials depended on the unit of measurement: used doses reflected the treatment intensity better than weight of active substance. The use of antimicrobials classified as high priority was low, although under- and overdosing were frequently observed. Electronic recording systems allowed better traceability of the animals treated. Recording drug name or dosage often resulted in incomplete or inaccurate information. Veterinarians tended to record more drugs than farmers. The integration of veterinarian and farm data would improve data quality.
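The study's point that the unit of measurement changes the picture can be made concrete with a small calculation. The sketch below contrasts total weight of active substance with the number of used daily doses (UDD); the drug names, dose rates, amounts and herd size are hypothetical examples, not data from the study.

```python
# Two hypothetical treatments on one dairy farm over a year:
# (active substance, total mg administered, labelled dose in mg/kg/day)
treatments = [
    ("amoxicillin",     150_000,  7.0),
    ("oxytetracycline", 600_000, 10.0),
]
avg_cow_weight_kg = 600
herd_size = 100

# Unit 1: total weight of active substance (dominated by low-potency drugs).
total_mg = sum(mg for _, mg, _ in treatments)

# Unit 2: used daily doses -- how many animal-days of treatment the
# administered amount represents at the labelled dose for a standard cow.
udd = sum(mg / (dose * avg_cow_weight_kg) for _, mg, dose in treatments)
udd_per_cow_year = udd / herd_size
```

Here oxytetracycline accounts for 80% of the weight but well under 80% of the animal-days of treatment, illustrating why the authors found that used doses reflect treatment intensity better than weight of active substance.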
Abstract:
BACKGROUND: In 2003 the Swiss federation of pharmacists organized a campaign "sleep disturbances--daytime sleepiness". The goal was to assist pharmacy clients in detecting likely causes of any sleep disturbance or daytime sleepiness through a free of charge screening, and to deliver targeted counselling. For pharmacy practice there are no screening or triage guidelines to assess the severity of sleep and wakefulness disturbances and potential causes for those disturbances. In this paper the outcome of the campaign in terms of feasibility, participation, observed response patterns, sale of over-the-counter (OTC) sleeping pills, and counselling activities is evaluated. METHODS: The Stanford sleep disorders questionnaire and the Epworth sleepiness scale served to identify patterns of symptoms suggestive of four major categories of sleep disorders. The questionnaires were posted on a web-site and the clients' data were entered online in the pharmacies. A report was automatically generated and immediately available online to the pharmacists. The pharmacists documented separately their counselling activities in a pharmacist's activity report. RESULTS: Six hundred and twenty-two (23%) of 2743 pharmacy clients had response patterns suggestive of obstructive sleep apnoea, 418 (15%) of restless-legs-syndrome, 39 (1%) of a sleep disorder potentially associated with a psychiatric condition and 79 (3%) of narcolepsy. An Epworth sleepiness score >10 points was found in 567 (21%). After screening, 2345 (86%) pharmacy clients received targeted counselling. Only 216 (8%) purchased an OTC sleeping pill and 704 (26%) were recommended to consult a physician, but of these, 446 (63%) were already under medical supervision. CONCLUSIONS: The online screening tool for sleep disorders and daytime sleepiness was successfully introduced in Swiss pharmacies. 
Pharmacies were able to assess the pattern of individual sleep disorders and to identify a possible cause in nearly one-third of the cases.
Abstract:
PURPOSE: In the present cohort study, overdentures with a combined root and implant support were evaluated and compared with either exclusively root- or implant-supported overdentures. Results of a 2-year follow-up period are reported, namely survival of implants, root copings, and prostheses, plus prosthetic complications, maintenance service, and patient satisfaction. MATERIALS AND METHODS: Fourteen patients were selected for the combined overdenture therapy and were compared with 2 patient groups in which either roots or implants provided overdenture support. Altogether, 14, 17, and 15 patients (in groups 1, 2, and 3, respectively) were matched with regard to age, sex, treatment time, and observation period. The mean age was around 67 years. Periodontal parameters were recorded, radiographs were taken, and all complications and failures were registered during the entire observation time. The patients answered a 9-item questionnaire by means of a visual analogue scale (VAS). RESULTS: One implant failed and 1 tooth root was removed following longitudinal root fracture. Periodontal/peri-implant parameters gave evidence of good oral hygiene for roots and implants, and slight crestal bone resorption was measured for both. Technical complications and service performed were significantly higher in the first year (P < .04) in all 3 groups and significantly higher in the tooth root group (P < .03). The results of the VAS indicated significantly lower scores for satisfaction, speaking ability, wearing comfort, and denture stability with combined or exclusive root support (P < .05 and .02, respectively). Initial costs of overdentures with combined or root support were 10% lower than for implant overdentures. CONCLUSION: The concept of combined root and implant support can be integrated into treatment planning and overdenture design for patients with a highly reduced dentition.
Abstract:
The final goal of mandibular reconstruction following ablative surgery for oral cancer is often considered to be dental implant-supported oral rehabilitation, for which bone grafts should ideally be placed in a suitable position taking subsequent prosthetic restoration into account. The aim of this study was to evaluate the efficacy of a standardized treatment strategy for mandibular reconstruction according to the size of the bony defect and planned subsequent dental prosthetic rehabilitation. Data of 56 patients, who had undergone such a systematic mandibular fibula free flap reconstruction, were retrospectively analyzed. Early complications were observed in 41.5% of the patients but only in those who had been irradiated. Late complications were found in 38.2%. Dental implant survival rate was 92%, and dental prosthetic treatment has been completed in all classes of bony defects with an overall success rate of 42.9%. The main reasons for failure of the complete dental reconstruction were patients' poor cooperation (30.4%) and tumour recurrence (14.3%) followed by surgery-related factors (10.8%) such as implant failure and an unfavourable intermaxillary relationship between the maxilla and the mandible. A comparison of our results with the literature findings revealed no marked differences in the complication rates and implant survival rates. However, a systematic concept for the reconstructive treatment like the method presented here, plays an important role in the successful completion of dental reconstruction. The success rate could still be improved by some technical progress in implant and bone graft positioning.
Abstract:
Electrochemical capacitors have been an important development in recent years in the field of energy storage. Capacitors can be built using double-layer capacitance at the electrode/solution interfaces alone, or in combination with a battery-type electrode associated with a faradaic redox process at one electrode. An asymmetric capacitor consisting of electrochemically deposited nickel hydroxide supported on carbon foam as the positive electrode and a carbon sheet as the negative electrode has been successfully assembled and cycled. One objective of this study has been to demonstrate the viability of the nickel/carbon-foam positive electrode, especially in terms of cycle life. Electrochemical characterization shows stable, high cycle performance in 26 wt.% KOH electrolyte, with a maximum energy density of 4.1 Wh/kg and a relaxation time constant of 6.24 s. The cell has demonstrated a high cycle life of 14,500 cycles with an efficiency better than 98%. In addition, the cell failure mechanism and self-discharge behavior of this capacitor are analyzed.
Abstract:
Routine bridge inspections require labor-intensive and highly subjective visual interpretation to determine bridge deck surface condition. Light Detection and Ranging (LiDAR), a relatively new class of survey instrument, has become a popular and increasingly used technology for providing as-built and inventory data in civil applications. While an increasing number of private and governmental agencies possess terrestrial and mobile LiDAR systems, understanding of the technology’s capabilities and potential applications continues to evolve. LiDAR is a line-of-sight instrument, and as such, care must be taken when establishing scan locations and resolution to capture data adequate for defining features that contribute to the analysis of bridge deck surface condition. Information such as the location, area, and volume of spalling on deck surfaces, undersides, and support columns can be derived from properly collected LiDAR point clouds. These point clouds contain quantitative surface condition information, enabling more accurate structural health monitoring. LiDAR scans were collected at three study bridges, each displaying a different degree of degradation. A variety of commercially available analysis tools and an independently developed algorithm written in ArcGIS Python (ArcPy) were used to locate and quantify surface defects such as the location, volume, and area of spalls. The results were visually and numerically displayed in a user-friendly web-based decision support tool integrating prior bridge condition metrics for comparison. LiDAR data processing procedures, along with the strengths and limitations of point clouds for defining features useful for assessing bridge deck condition, are discussed. Point cloud density and incidence angle are two attributes that must be managed carefully to ensure the data collected are of high quality and useful for bridge condition evaluation. 
When collected properly to ensure effective evaluation of bridge surface condition, LiDAR data can be analyzed to provide a useful data set from which to derive bridge deck condition information.
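The core of spall quantification from a point cloud can be sketched very simply: points lying below a reference deck surface by more than a threshold are flagged as spalled, and per-point depths are integrated into an area and volume estimate. This is a simplified stand-in for the ArcPy workflow described above; the flat-deck assumption, the 5 mm threshold, and the uniform per-point cell area are all illustrative.

```python
import numpy as np

# Hedged sketch of spall detection on an (assumed flat, horizontal) deck.
# Real workflows first fit a reference surface to the cloud and account
# for varying point density and incidence angle.

def quantify_spalls(points, deck_z=0.0, depth_thresh=0.005, cell_area=0.01):
    """points: (N, 3) array of x, y, z coordinates in metres.
    Returns (spall_area_m2, spall_volume_m3), treating each point as a
    sample representing cell_area square metres of deck surface."""
    depth = deck_z - points[:, 2]      # positive where the surface is low
    spall = depth > depth_thresh       # flag points deeper than threshold
    area = float(spall.sum()) * cell_area
    volume = float(depth[spall].sum()) * cell_area
    return area, volume

# Four intact points on the deck plane and two points in a 2 cm deep spall:
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0],
                [0.5, 0.5, -0.02], [0.6, 0.5, -0.02]], dtype=float)
area, vol = quantify_spalls(pts)
```

The grid-cell approximation makes the estimate sensitive to point density, which is consistent with the paper's observation that density and incidence angle must be managed carefully during collection.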
Abstract:
Tracking a user’s visual attention is a fundamental aspect of novel human-computer interaction paradigms found in Virtual Reality. For example, multimodal interfaces or dialogue-based communication with virtual and real agents greatly benefit from the analysis of the user’s visual attention as a vital source of deictic references or turn-taking signals. Current approaches to determining visual attention rely primarily on monocular eye trackers; hence they are restricted to the interpretation of two-dimensional fixations relative to a defined area of projection. The study presented in this article compares the precision, accuracy and application performance of two binocular eye tracking devices. Two algorithms are compared which derive the depth information required for visual attention-based 3D interfaces. This information is further applied to an improved VR selection task in which a binocular eye tracker and an adaptive neural network algorithm are used for the disambiguation of partly occluded objects.
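One standard way to derive depth from binocular gaze data, of the general kind the compared algorithms must perform, is to find the point where the two eyes' gaze rays (nearly) intersect. The sketch below computes the least-squares closest point between the two rays; the eye positions and target below are illustrative, not data from the study.

```python
import numpy as np

# Estimate the 3D fixation point as the midpoint of the shortest segment
# connecting the two gaze rays (they rarely intersect exactly due to noise).

def gaze_depth(origin_l, dir_l, origin_r, dir_r):
    d1 = dir_l / np.linalg.norm(dir_l)
    d2 = dir_r / np.linalg.norm(dir_r)
    w0 = origin_l - origin_r
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b          # zero only for parallel gaze rays
    s = (b * e - c * d) / denom    # parameter along the left ray
    t = (a * e - b * d) / denom    # parameter along the right ray
    p1 = origin_l + s * d1
    p2 = origin_r + t * d2
    return (p1 + p2) / 2           # fixation point estimate

# Eyes 6 cm apart, both verging on a point 1 m straight ahead:
target = np.array([0.0, 0.0, 1.0])
left, right = np.array([-0.03, 0.0, 0.0]), np.array([0.03, 0.0, 0.0])
fix = gaze_depth(left, target - left, right, target - right)
```

With noisy tracker data the two rays are skew, and the residual distance between `p1` and `p2` is a useful per-sample confidence measure, which is one reason binocular depth estimates degrade at larger fixation distances where the vergence angle becomes small.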