978 results for Eddy Current Testing
Abstract:
Turbulence profile measurements made on the upper continental slope and shelf of the southeastern Weddell Sea reveal striking contrasts in dissipation and mixing rates between the two sites. The mean dissipation-rate profiles from the upper slope are 1-2 orders of magnitude greater than those collected over the shelf, throughout the entire water column. The difference increases toward the bottom, where the dissipation rate of turbulent kinetic energy and the vertical eddy diffusivity on the slope exceed 10⁻⁷ W kg⁻¹ and 10⁻² m² s⁻¹, respectively. Elevated levels of turbulence on the slope are concentrated within a 100 m thick bottom layer, which is absent on the shelf. The upper slope is characterized by near-critical bottom slopes and lies close to the critical latitude for semidiurnal internal tides. Our observations suggest that the upper continental slope of the southern Weddell Sea is a generation site for the semidiurnal internal tide, which is trapped along the slope at the critical latitude and dissipates its energy in a layer roughly 100 m thick near the bottom, within a narrow band across the slope.
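For orientation, microstructure studies conventionally convert the measured dissipation rate into the vertical eddy diffusivity quoted above via the Osborn relation; this is a standard assumption in the field, not a method stated explicitly in the abstract:

    % Osborn (1980) relation: eddy diffusivity from the dissipation rate
    % of turbulent kinetic energy (standard assumption, not quoted from
    % the abstract). Gamma ~ 0.2 is the canonical mixing efficiency and
    % N is the buoyancy frequency.
    \[
      K_\rho = \Gamma \, \frac{\varepsilon}{N^2}, \qquad \Gamma \approx 0.2
    \]
    % With eps ~ 1e-7 W/kg and N ~ 1e-3 s^-1 (plausible near-bottom
    % values), K_rho ~ 0.2 * 1e-7 / 1e-6 = 2e-2 m^2/s, consistent with
    % the quoted diffusivity exceeding 1e-2 m^2/s.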
Abstract:
With the advent of the Universal Technical Standard for Solar Home Systems, procedures have been developed to test the compliance of SHS fluorescent lamps with the standard. Defining the laboratory testing procedures is a necessary step in any lamp quality assurance procedure. Particular attention has been paid to test simplicity and affordability, in order to facilitate local application of the testing procedures, for example by the organisations which carry out electrification programmes. The set of test procedures has been applied to a representative collection of 42 lamps from many different countries, acquired directly on the current photovoltaic rural electrification market. The tests cover lamp resistance under normal operating conditions, lamp reliability under extreme and abnormal conditions, and lamp luminosity. Results are discussed and some recommendations for updating the relevant standard are given. The selected technical standard, together with the proposed testing procedures, forms the basis of a complete quality assurance tool that can be applied locally in ordinary electrical laboratories. Full testing of a lamp requires less than one month, which is very reasonable in the context of quality assurance programmes.
Abstract:
Software testing is a key aspect of software reliability and quality assurance in a context where software development constantly has to overcome mammoth challenges in a continuously changing environment. One of the characteristics of software testing is that it has a large intellectual capital component and can thus benefit from the experience gained in past projects. Software testing can therefore potentially benefit from solutions provided by the knowledge management discipline. There are in fact a number of proposals concerning effective knowledge management related to several software engineering processes. Objective: We defend the use of a lessons learned system for software testing. The reason is that such a system is an effective knowledge management resource enabling testers and managers to take advantage of the experience locked away in the brains of the testers. To do this, the experience has to be gathered, disseminated and reused. Method: After analyzing the proposals for managing software testing experience, significant weaknesses were detected in the current systems of this type. The architectural model proposed here for lessons learned systems is designed to avoid these weaknesses. This model (i) defines the structure of the software testing lessons learned; (ii) sets up procedures for lessons learned management; and (iii) supports the design of software tools to manage the lessons learned. Results: The outcome is a different approach, based on the management of the lessons learned that software testing engineers gather from everyday experience, with two basic goals: usefulness and applicability. Conclusion: The architectural model proposed here lays the groundwork for overcoming the obstacles to sharing and reusing experience gained in software testing and test management. As such, it provides guidance for developing software testing lessons learned systems.
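To make the first element of the model concrete, the sketch below shows one possible record structure for a software-testing lesson learned. The field names are illustrative assumptions, not the schema defined in the paper:

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class TestingLesson:
        """One possible structure for a software-testing lesson learned.

        Field names are illustrative, not the schema proposed in the paper.
        """
        title: str                     # short summary of the lesson
        context: str                   # project/testing activity where it arose
        problem: str                   # what went wrong (or right)
        recommendation: str            # what to do (or avoid) next time
        keywords: list[str] = field(default_factory=list)  # retrieval terms
        author: str = ""               # tester who contributed the lesson
        recorded_on: date = date.today()

    # Gathering, disseminating and reusing: a lesson is stored once and
    # retrieved later by keyword when a similar testing situation recurs.
    lesson = TestingLesson(
        title="Regression suite misses configuration defects",
        context="System testing of release 2.1",
        problem="Defects appeared only under non-default locale settings",
        recommendation="Add locale variations to the regression test matrix",
        keywords=["regression", "configuration", "locale"],
    )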
Abstract:
Due to the particular characteristics of the fusion products, i.e. very short pulses (less than a few μs long for ions when arriving at the walls; less than 1 ns long for X-rays), very high fluences (~10¹³ particles/cm² for both ions and X-ray photons) and broad particle energy spectra (up to 10 MeV for ions and 100 keV for photons), the laser fusion community lacks facilities to accurately test plasma-facing materials under those conditions. In the present work, the ability of ultraintense lasers to create short pulses of energetic particles at high fluences is addressed as a solution to reproduce those ion and X-ray bursts. Based on those parameters, a comparison between fusion ion beams and laser-driven ion beams is presented and discussed, describing a possible experimental set-up to generate the appropriate ion pulses with lasers. At the same time, the possibility of generating X-ray or neutron beams which simulate those of laser fusion environments is also indicated and assessed at current laser intensities. It is concluded that ultraintense lasers should play a relevant role in the validation of materials for laser fusion facilities.
Abstract:
The aim of this work is to test the present status of the Evaluated Nuclear Decay and Fission Yield Data Libraries in predicting decay heat, the delayed neutron emission rate, the average neutron energy and the delayed neutron spectra after a neutron fission pulse. Calculations are performed with JEFF-3.1.1 and ENDF/B-VII.1, and these are compared with experimental values. The impact of the current nuclear data uncertainties is assessed by uncertainty propagation.
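For reference, decay heat after a fission pulse is commonly computed with the summation method over the fission-product inventory; the expression below is the standard textbook form, assumed here rather than quoted from the paper:

    % Summation method for decay heat after a fission pulse (standard
    % expression, assumed rather than quoted from the paper):
    % lambda_i and N_i(t) are the decay constant and inventory of
    % nuclide i, and E_i is its mean decay energy release (beta + gamma),
    % taken from the decay data library; N_i(t) follows from the fission
    % yields and the Bateman equations.
    \[
      P(t) \;=\; \sum_i \bar{E}_i \,\lambda_i \, N_i(t)
    \]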
Abstract:
Recent developments in the area of multiscale modeling of fiber-reinforced polymers are presented. The overall strategy takes advantage of the separation of length scales between different entities (ply, laminate, and component) found in composite structures. This allows us to carry out multiscale modeling by computing the properties of one entity (e.g., individual plies) at the relevant length scale, homogenizing the results into a constitutive model, and passing this information to the next length scale to determine the mechanical behavior of the larger entity (e.g., laminate). As a result, high-fidelity numerical simulations of the mechanical behavior of composite coupons and small components are nowadays feasible starting from the matrix, fiber, and interface properties and spatial distribution. Finally, the roadmap is outlined for extending the current strategy to include functional properties and processing into the simulation scheme.
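As a minimal illustration of the homogenization step, the simplest ply-scale model is the rule of mixtures, which estimates a ply property from the fiber and matrix properties; the strategy described in the abstract relies on full computational micromechanics, so this closed form is only a conceptual example:

    % Rule of mixtures: the simplest homogenization of fiber (f) and
    % matrix (m) properties into a ply-level longitudinal stiffness E_1,
    % with V_f the fiber volume fraction. Conceptual example only; the
    % paper's strategy uses computational micromechanics, not this
    % closed form.
    \[
      E_1 \;=\; V_f\,E_f + (1 - V_f)\,E_m
    \]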
Abstract:
Automated Teller Machines (ATMs) are sensitive self-service systems that require important investments in security and testing. ATM certifications are testing processes for machines that integrate software components from different vendors, performed before their deployment for public use. This project originated from the need to optimize the certification process in an ATM manufacturing company. The process identifies compatibility problems between software components through testing. It is composed of a huge number of manual user tasks, which makes the process very expensive and error-prone. Moreover, it is not possible to fully automate the process, as it requires human intervention for manipulating ATM peripherals. This project presented important challenges for the development team. First, this is a critical process, as all the ATM operations rely on the software under test. Second, the context of use of ATM applications is vastly different from that of ordinary software. Third, ATMs' useful lifetime is beyond 15 years and both new and old models need to be supported. Fourth, the know-how for efficient testing depends on each specialist and is not explicitly documented. Fifth, the huge number of tests and their importance imply the need for user efficiency and accuracy. All these factors led us to conclude that, besides the technical challenges, the usability of the intended software solution was critical for the project's success. This business context is the motivation of this Master's Thesis project. Our proposal focused on the development process applied. By combining user-centered design (UCD) with agile development we ensured both the high priority of usability and the early mitigation of software development risks caused by all the technology constraints. We performed 23 development iterations and were finally able to deliver a working solution on time, in line with users' expectations. The evaluation of the project was carried out through usability tests, in which 4 real users participated in different tests in the real context of use. The results were positive according to different metrics: error rate, efficiency, effectiveness, and user satisfaction. We discuss the problems found, the benefits and the lessons learned in the process. Finally, we measured the expected project benefits by comparing the effort required by the current process and the new one (once the new software tool is adopted). The savings correspond to 40% less effort (man-hours) per certification. Future work includes additional evaluation of product usability in a real scenario (with customers) and measuring the benefits in terms of quality improvement.
Abstract:
With the ever-growing trend of smart phones and tablets, Android is becoming more popular every day. With more than one billion active users to date, Android is the leading technology in the smart phone arena. In addition, Android also runs on Android TV, Android smart watches and cars. Therefore, in recent years, Android applications have become one of the major development sectors in the software industry. As of mid 2013, the number of published applications on Google Play had exceeded one million and the cumulative number of downloads was more than 50 billion. A 2013 survey also revealed that 71% of mobile application developers work on developing Android applications. Considering this number of applications, it is quite evident that people rely on them on a daily basis, from simple tasks like keeping track of the weather to rather complex tasks like managing one's bank accounts. Hence, like every other kind of code, Android code also needs to be verified in order to work properly and achieve a certain confidence level. Because of the gigantic number of applications, it becomes really hard to test Android applications manually, especially when they have to be verified for various versions of the OS and various device configurations, such as different screen sizes and different hardware availability. Hence, recently there has been a lot of work in the computer science community on developing different testing methods for Android applications. Android attracts researchers because of its open source nature: the whole research model becomes more streamlined when the code for both the application and the platform is readily available to analyze. There has consequently been a great deal of research in testing and static analysis of Android applications, much of it focused on test input generation. As a result, several testing tools are now available which focus on automatic generation of test cases for Android applications. These tools differ from one another in the strategies and heuristics they use to generate test cases, but there is still very little work on comparing these testing tools and their strategies. Recently, some research work has been carried out in this regard that compared the performance of various available tools with respect to their code coverage, fault detection, ability to work on multiple platforms and ease of use. This was done by running the tools on a total of 60 real-world Android applications. The results showed that, although effective, the strategies used by the tools also face limitations and hence have room for improvement. The purpose of this thesis is to extend this research in a more specific, attribute-oriented way. Attributes refer to the tasks that can be completed using the Android platform, ranging from a basic system call for receiving an SMS to more complex tasks like sending the user to another application from the current one. The idea is to develop a benchmark for Android testing tools based on their performance with respect to these attributes, which allows the tools to be compared attribute by attribute. For example, if there is an application that plays some audio file, will the testing tool be able to generate a test input that triggers playback of that audio file?
Using multiple applications that exercise different attributes, it can be seen which testing tool is more useful for which kinds of attributes. In this thesis, it was decided that 9 attributes covering the basic nature of tasks would be targeted for the assessment of three testing tools; later, this can be done for many more attributes to compare even more testing tools. The aim of this work is to show that this approach is effective and can be used on a much larger scale. One of the flagship features of this work, which also differentiates it from previous work, is that the applications used were all made specially for this research. The reason is to analyze just the specific attribute the application focuses on, in isolation, and not allow the tool to get bottlenecked by something trivial that is not the main attribute under test. This means 9 applications, each focused on one specific attribute. The main contributions of this thesis are: • A summary of the three existing testing tools and their respective techniques for automatic test input generation for Android applications. • A detailed study of the usage of these testing tools on the 9 applications specially designed and developed for this study. • An analysis of the results obtained and a comparison of the performance of the selected tools.
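As a concrete example of the kind of automated input generation such tools perform, the sketch below drives Android's stock Monkey fuzzer against a single-attribute benchmark app from Python. The package name is hypothetical, and Monkey stands in here for whichever of the selected tools is being benchmarked; adb on the PATH and a connected device or emulator are assumed:

    import subprocess

    # Drive Android's stock Monkey fuzzer against one single-attribute
    # benchmark app. The package name is hypothetical; Monkey stands in
    # for whichever test-input generation tool is being benchmarked.
    PACKAGE = "com.example.audioattribute"  # hypothetical benchmark app
    EVENT_COUNT = 500                       # pseudo-random UI events to inject

    result = subprocess.run(
        ["adb", "shell", "monkey",
         "-p", PACKAGE,        # restrict events to the app under test
         "-v",                 # verbose: log each injected event
         str(EVENT_COUNT)],
        capture_output=True, text=True, check=False,
    )

    # A crude check: did the injected events reach the attribute under
    # test (e.g. audio playback), or did the tool get stuck earlier?
    print(result.stdout[-2000:])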
Abstract:
It is our goal within this project to develop a powerful electronic system capable of claiming, with high certainty, that malicious software is (or is not) running along with a workstation's normal activity. The new product will be based on measurement of the supply current taken by a workstation from the grid. A unique technique is proposed in these proceedings that analyses the supply current to produce information about the state of the workstation and about the presence of malicious software running alongside the rightful applications. The testing is based on a comparison of the behavior of a fault-free workstation (established in advance) with the behavior of the potentially faulty device.
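The comparison step could look something like the minimal sketch below, which contrasts the spectrum of a measured supply-current trace with a stored fault-free reference. The spectral-distance scheme and the threshold are illustrative assumptions, not the technique from the proceedings:

    import numpy as np

    def current_anomaly_score(reference: np.ndarray, measured: np.ndarray) -> float:
        """Compare a measured supply-current trace against a fault-free reference.

        Illustrative assumption only: the proceedings do not specify the
        comparison beyond 'comparison of behavior'. Here we compare
        normalized magnitude spectra, on the premise that extra computation
        by hidden software alters the current's spectral signature.
        """
        ref_spec = np.abs(np.fft.rfft(reference))
        mea_spec = np.abs(np.fft.rfft(measured))
        ref_spec /= ref_spec.sum()   # normalize so trace scale drops out
        mea_spec /= mea_spec.sum()
        return float(np.abs(ref_spec - mea_spec).sum())  # L1 spectral distance

    # Hypothetical usage with synthetic traces standing in for samples
    # taken from the workstation's supply line.
    rng = np.random.default_rng(0)
    clean = np.sin(np.linspace(0, 200 * np.pi, 4096)) + 0.01 * rng.standard_normal(4096)
    suspect = clean + 0.05 * np.sign(np.sin(np.linspace(0, 70 * np.pi, 4096)))

    THRESHOLD = 0.05  # would be tuned on known-good traces (assumed)
    score = current_anomaly_score(clean, suspect)
    print("anomaly score:", score, "-> flagged" if score > THRESHOLD else "-> ok")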
Abstract:
Context: Empirical Software Engineering (ESE) replication researchers need to store and manipulate experimental data for several purposes, in particular analysis and reporting. Current research needs call for sharing and preservation of experimental data as well. In a previous work, we analyzed Replication Data Management (RDM) needs. A novel concept, called the Empirical Ecosystem, was proposed to solve current deficiencies in RDM approaches. The Empirical Ecosystem provides replication researchers with a common framework that transparently integrates local heterogeneous data sources. A typical situation where the Empirical Ecosystem is applicable is when several members of a research group, or several research groups collaborating together, need to share and access each other's experimental results. However, to be able to apply the Empirical Ecosystem concept and deliver all its promised benefits, it is necessary to analyze the software architectures and tools that can properly support it.
Abstract:
In the EU circuit (especially the European Parliament, the Council and Coreper), as well as in national parliaments of the EU Member States, one observes a powerful tendency to regard 'subsidiarity' as a 'political' issue. Moreover, subsidiarity is frequently seen as a one-way street: powers going 'back to' Member States. Both interpretations are at least partly flawed and less than helpful when looking for practical ways to deal with subsidiarity at both EU and Member State levels. The present paper shows that subsidiarity as a principle is profoundly 'functional' in nature and, hence, is and must be a two-way principle. A functional subsidiarity test is developed and its application is illustrated for a range of policy issues in the internal market in its widest sense, for equity and for macro-economic stabilisation questions in European integration. Misapplications of 'subsidiarity' are also demonstrated. For a good understanding, subsidiarity being a functional, two-way principle means neither that elected politicians should not have the final (political!) say (for which they are accountable), nor that subsidiarity tests, even if properly conducted, cannot and will not be politicised once the results enter the policy debate. Such politicisation forms a natural run-up to decision-making by those elected for it. But the quality and reasoning of the test, as well as the structuring of the information in a logical sequence (in accordance with the current protocol and with the one in the constitutional treaty), are likely to be directly helpful for decision-makers confronted with complicated and often specialised proposals. EU debates and decision-making are therefore best served by separating the functional subsidiarity test (prepared by independent professionals) from the final political decision itself. If the test were accepted Union-wide, it would also assist national parliaments in conducting comparable tests in a relatively short period, as the basis for possible joint action (as suggested by the constitutional treaty). The core of the paper explains how the test is formulated and applied. A functional approach to subsidiarity in the framework of European representative democracy seeks to find the optimal assignment of regulatory or policy competences to the various tiers of government. In the final analysis, this is about structures facilitating the highest possible welfare in the Union, in the fundamental sense that preferences and needs are best satisfied. What is required for such an analysis is no less than a systematic cost/benefit framework to assess the (de)merits of (de)centralisation in the EU.
Abstract:
AIM Anthracycline-induced cardiotoxicity (ACT) occurs in 57% of treated patients and remains an important limitation of anthracycline-based chemotherapy. In various genetic association studies, potential genetic risk markers for ACT have been identified. Therefore, we developed evidence-based clinical practice recommendations for pharmacogenomic testing to further individualize therapy based on ACT risk. METHODS We followed a standard guideline development process, including a systematic literature search, evidence synthesis and critical appraisal, and the development of clinical practice recommendations with an international expert group. RESULTS The RARG rs2229774, SLC28A3 rs7853758 and UGT1A6 rs17863783 variants currently have the strongest and most consistent evidence for association with ACT. Genetic variants in ABCC1, ABCC2, ABCC5, ABCB1, ABCB4, CBR3, RAC2, NCF4, CYBA, GSTP1, CAT, SULT2B1, POR, HAS3, SLC22A7, SLC22A17, HFE and NOS3 have also been associated with ACT, but require additional validation. We recommend pharmacogenomic testing for the RARG rs2229774 (S427L), SLC28A3 rs7853758 (L461L) and UGT1A6*4 rs17863783 (V209V) variants in childhood cancer patients with an indication for doxorubicin or daunorubicin therapy (Level B - moderate). Based on an overall risk stratification, taking into account genetic and clinical risk factors, we recommend a number of management options, including increased frequency of echocardiogram monitoring and follow-up, as well as therapeutic options within the current standard of clinical practice. CONCLUSIONS Existing evidence demonstrates that genetic factors have the potential to improve the discrimination between individuals at higher and lower risk of ACT. Genetic testing may therefore support both patient care decisions and evidence development for an improved prevention of ACT.
Abstract:
QUESTION Detection and treatment of infections during pregnancy are important for both maternal and child health. The objective of this study was to describe testing practices and adherence to current national guidelines in Switzerland. METHODS We invited all registered practicing obstetricians and gynaecologists in Switzerland to complete an anonymous web-based questionnaire about their strategies for testing for 14 infections during pregnancy. We conducted a descriptive analysis according to demographic characteristics. RESULTS Of 1138 invited clinicians, 537 (47.2%) responded and 520 (45.6%) were eligible because they were currently caring for pregnant women. Nearly all eligible respondents tested all pregnant women for group B streptococcus (98.0%), hepatitis B virus (HBV) (96.5%) and human immunodeficiency virus (HIV) (94.7%), in accordance with national guidelines. Although testing for toxoplasmosis is not recommended, 24.1% of respondents tested all women and 32.9% tested at the request of the patient. Hospital doctors were more likely not to test for toxoplasmosis than doctors working in private practice (odds ratio [OR] 2.52, 95% confidence interval [CI] 1.04-6.13, p = 0.04). Only 80.4% of respondents tested all women for syphilis. There were regional differences in testing for some infections. The proportion of clinicians testing all women for HIV, HBV and syphilis was lower in Eastern Switzerland and the Zurich region (69.4% and 61.2%, respectively) than in other regions (range 77.1-88.1%, p < 0.001). Most respondents (74.5%) said they would appreciate national guidelines about testing for infections during pregnancy. CONCLUSIONS Testing practices for infections in pregnant women vary widely in Switzerland. More extensive national guidelines could improve the consistency of testing practices.