974 results for T-way testing
Abstract:
Prenatal diagnosis has traditionally been made via invasive procedures such as amniocentesis and chorionic villus sampling (CVS). However, both procedures carry a risk of complications, including miscarriage. Many groups have spent years searching for a way to diagnose chromosome aneuploidy without putting the fetus or the mother at risk of complications. Non-invasive prenatal testing (NIPT) for chromosome aneuploidy became commercially available in the fall of 2011, with detection rates similar to those of invasive procedures for the common autosomal aneuploidies (Palomaki et al., 2011; Ashoor et al., 2012; Bianchi et al., 2012). Eventually NIPT may become the diagnostic standard of care and reduce invasive procedure-related losses (Palomaki et al., 2011). The integration of NIPT into clinical practice has the potential to revolutionize prenatal diagnosis; however, it also raises crucial issues for practitioners. Although the test is now clinically available, no studies have examined the physicians who will be ordering the testing or referring patients to practitioners who do. This study aimed to evaluate the attitudes of OB/GYNs and how they are incorporating the test into clinical practice. Our study shows that most physicians are offering this new, non-invasive technology to their patients, and that their practices are congruent with the literature and available professional society opinions. Those physicians who do not offer NIPT to their patients would like more literature on the topic as well as instructive guidelines from their professional societies. Additionally, this study shows that the practices and attitudes of MFMs and OBs differ. Our respondents feel that the incorporation of NIPT will change their practices by reducing the number of invasive procedures, possibly replacing maternal serum screening, and simplifying prenatal diagnosis. However, the physicians who do not offer NIPT to their patients are not quite sure how the test will affect their clinical practice. From this study we can glean how physicians are incorporating this new technology into their practice and how they feel about the addition to their repertoire of tests. This knowledge gives insight into how best to move forward in the quickly changing field of prenatal diagnosis.
Abstract:
Thin-film (TF) photovoltaic modules have gained importance in the photovoltaic (PV) market, and new PV plants increasingly use TF technologies. To obtain a reliable sample of a PV module population, a large number of modules must be measured. A wide variety of materials is used in TF technology: some modules are made of amorphous or microcrystalline silicon, others of CIS or CdTe. Not all of these materials respond the same way under standard test conditions (STC) of power measurement; the power ratings of the modules may vary depending on both the extent and the history of sunlight exposure. A testing method adapted to each TF technology is therefore necessary, and it must guarantee the repeatability of measurements of generated power. This paper shows the responses of different commercial TF PV modules to sunlight exposure. Several test procedures were performed in order to find the best methodology for obtaining measurements of TF PV modules at STC in the easiest way. A methodology for indoor measurements adapted to these technologies is described.
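To make the translation to STC concrete, the sketch below applies a first-order irradiance and temperature correction to a measured module power. This is a generic textbook-style correction, not the procedure developed in the paper; the function name, the multiplicative model, and the placeholder temperature coefficient are all assumptions for illustration:

```python
def power_at_stc(p_meas, g_meas, t_cell, gamma=-0.0035):
    """First-order translation of a measured module power to STC.

    p_meas : measured power (W) at irradiance g_meas (W/m^2)
             and cell temperature t_cell (degC)
    gamma  : relative power temperature coefficient (1/degC);
             -0.35 %/degC is only a placeholder value
    """
    G_STC, T_STC = 1000.0, 25.0
    p_irr = p_meas * (G_STC / g_meas)                # scale linearly with irradiance
    return p_irr / (1.0 + gamma * (t_cell - T_STC))  # remove the temperature effect

# Example: 92 W measured at 850 W/m^2 and 42 degC
print(round(power_at_stc(92.0, 850.0, 42.0), 1))
```

Note that for metastable TF technologies the measured power itself drifts with light-soaking history, which is why a conditioning procedure adapted to each technology matters at least as much as the correction formula.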
Abstract:
In this work, a WSN support tool for developing, testing, monitoring and debugging new application prototypes in a reliable and robust way is proposed. It combines a hardware-software integration platform with a parallel communication channel that lets users interact with experiments at runtime without interfering with the operation of the wireless network. As a pre-deployment tool, it allows prototypes to be validated in a real environment before being implemented in the final application, aiming to increase the effectiveness and efficiency of the technology. This infrastructure supports CookieLab: a WSN testbed based on the Cookie Nodes Platform.
Abstract:
With the ever-growing adoption of smartphones and tablets, Android is becoming more popular every day. With more than one billion active users to date, Android is the leading technology in the smartphone arena. In addition, Android also runs on Android TV, Android smart watches and cars. Therefore, in recent years, Android applications have become one of the major development sectors in the software industry. As of mid-2013, the number of published applications on Google Play had exceeded one million, and the cumulative number of downloads was more than 50 billion. A 2013 survey also revealed that 71% of mobile application developers work on developing Android applications. Given this volume of Android applications, it is quite evident that people rely on these applications on a daily basis, for everything from simple tasks like keeping track of the weather to rather complex tasks like managing one's bank accounts. Hence, like every other kind of code, Android code needs to be verified in order to work properly and achieve a certain confidence level. Because of the enormous number of applications, it becomes very hard to test Android applications manually, especially when they have to be verified for various versions of the OS and various device configurations, such as different screen sizes and different hardware availability. Hence, there has recently been a great deal of work in the computer science community on developing testing methods for Android applications. The Android model attracts researchers because of its open-source nature: the whole research process is more streamlined when the code for both the application and the platform is readily available to analyze. Accordingly, there has been a great deal of research on testing and static analysis of Android applications, much of it focused on input test generation. There are now several testing tools available that focus on the automatic generation of test cases for Android applications. These tools differ from one another in the strategies and heuristics they use to generate test cases, but there is still very little work comparing these tools and their strategies. Recently, some research has been carried out in this regard, comparing the performance of various available tools with respect to code coverage, fault detection, ability to work on multiple platforms, and ease of use. This was done by running the tools on a total of 60 real-world Android applications. The results showed that, although effective, the strategies used by these tools also face limitations and hence have room for improvement. The purpose of this thesis is to extend this research in a more specific, attribute-oriented way. Attributes refer to the tasks that can be completed using the Android platform; an attribute can be anything, from a basic system call for receiving an SMS to more complex tasks like sending the user from the current application to another one. The idea is to develop a benchmark for Android testing tools based on performance with respect to these attributes, which allows the tools to be compared attribute by attribute. For example, if an application plays an audio file, will the testing tool be able to generate a test input that triggers the playback of that audio file?
By using multiple applications that exercise different attributes, one can see which testing tool is most useful for which kinds of attributes. In this thesis, 9 attributes covering basic kinds of tasks were targeted for the assessment of three testing tools; later, the same approach can be applied to many more attributes to compare even more testing tools. The aim of this work is to show that this approach is effective and can be used on a much larger scale. One of the flagship features of this work, which also differentiates it from the previous work, is that the applications used were all made specifically for this research. The reason is to analyze each specific attribute in isolation, so that the tool is not bottlenecked by something trivial that is not the attribute under test. This means 9 applications, each focused on one specific attribute. The main contributions of this thesis are:
• A summary of the three existing testing tools and their respective techniques for automatic test input generation for Android applications.
• A detailed study of the usage of these testing tools on the 9 applications specially designed and developed for this study.
• An analysis of the results of the study, and a comparison of the performance of the selected tools.
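To illustrate the attribute-oriented benchmark idea, the sketch below drives the stock Android `monkey` fuzzer (via `adb`) against single-attribute apps and checks the device log for a marker each app emits when its attribute is actually exercised. The package names and log markers are hypothetical placeholders, and the thesis's three tools and checking procedure may well differ:

```python
import subprocess

# Hypothetical single-attribute benchmark apps and the log marker each
# one prints when its target attribute is exercised.
APPS = {
    "com.bench.audio":  "ATTR_AUDIO_PLAYED",
    "com.bench.sms":    "ATTR_SMS_RECEIVED",
    "com.bench.intent": "ATTR_APP_SWITCHED",
}

def run_monkey(package, events=500):
    """Fire pseudo-random UI events at one app with the stock monkey tool."""
    subprocess.run(["adb", "logcat", "-c"], check=True)  # clear the log first
    subprocess.run(["adb", "shell", "monkey", "-p", package, "-v", str(events)],
                   check=True)

def attribute_triggered(marker):
    """Return True if the app logged its attribute marker."""
    log = subprocess.run(["adb", "logcat", "-d"],
                         capture_output=True, text=True, check=True)
    return marker in log.stdout

for pkg, marker in APPS.items():
    run_monkey(pkg)
    print(pkg, "->", "triggered" if attribute_triggered(marker) else "missed")
```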
Abstract:
The search for new energy models arises from the need for a sustainable power supply. The inclusion of distributed generation (DG) sources makes it possible to reduce the cost of facilities, increase the security of the grid, and alleviate congestion problems through the redistribution of power flows. Remote microgrids particularly need a safe and reliable supply that can cover demand at low cost; because of this, distributed generation is an alternative that is being widely introduced in these grids. But remote microgrids are especially weak grids because of their small size, low voltage level, reduced network meshing, and distribution lines with a high R/X ratio. This ratio affects the coupling between grid voltages and phase shifts, and stability becomes an issue of greater importance than in interconnected systems. To ensure the appropriate behavior of generation sources inserted in remote microgrids (and, in general, of any electrical equipment), it is essential to have devices for testing and certification. These devices must not only faithfully reproduce the disturbances occurring in remote microgrids, but also behave toward the equipment under test (EUT) as a real weak grid would; this also makes the device commercially competitive. To meet these objectives, a voltage disturbance generator has been designed, built and tested, in order to provide manufacturers and laboratories in the sector with a simple, versatile, complete and easily scalable device.
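The role of the R/X ratio can be seen in the standard short-line approximation ΔV ≈ (R·P + X·Q)/V: when R/X is high, active power injections move the voltage magnitude almost as strongly as reactive power does, which is part of what makes weak microgrids hard to emulate. A minimal numerical sketch with illustrative per-unit values (not the parameters of the device described in the paper):

```python
def voltage_rise(p, q, r, x, v=1.0):
    """Approximate per-unit voltage rise at the connection point of a
    generator injecting p + jq through a line of impedance r + jx."""
    return (r * p + x * q) / v

# The same 0.1 pu active-power injection through two lines of equal |Z|
strong = voltage_rise(p=0.1, q=0.0, r=0.01, x=0.10)  # R/X = 0.1, stiff grid
weak   = voltage_rise(p=0.1, q=0.0, r=0.10, x=0.01)  # R/X = 10, weak microgrid
print(f"stiff grid: {strong:.4f} pu, weak microgrid: {weak:.4f} pu")
```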
Abstract:
The acyclic nucleoside phosphonate analog 9-(2-phosphonylmethoxyethyl)adenine (PMEA) was recently found to be effective as an inhibitor of visna virus replication and cytopathic effect in sheep choroid plexus cultures. To study whether PMEA also affects visna virus infection in sheep, two groups of four lambs each were inoculated intracerebrally with 10^6.3 TCID50 of visna virus strain KV1772 and treated subcutaneously three times a week with PMEA at 10 and 25 mg/kg, respectively. The treatment was begun on the day of virus inoculation and continued for 6 weeks. A group of four lambs was infected in the same way but was not treated. The lambs were bled weekly or biweekly and the leukocytes were tested for virus. At 7 weeks after infection, the animals were sacrificed, and cerebrospinal fluid (CSF) and samples of tissue from various areas of the brain and from lungs, spleen, and lymph nodes were collected for isolation of virus and for histopathologic examination. The PMEA treatment had a striking effect on visna virus infection, which was similar for both doses of the drug. Thus, the frequency of virus isolations was much lower in PMEA-treated than in untreated lambs. The difference was particularly pronounced in the blood, CSF, and brain tissue. Furthermore, CSF cell counts were much lower and inflammatory lesions in the brain were much less severe in the treated lambs than in the untreated controls. The results indicate that PMEA inhibits the propagation and spread of visna virus in infected lambs and prevents brain lesions, at least during early infection. The drug caused no noticeable side effects during the 6 weeks of treatment.
Abstract:
This paper tests for the existence of ‘reference dependence’ and ‘loss aversion’ in students’ academic performance. Under these hypotheses, achieving a worse-than-expected academic result has a much stronger effect on students’ (dis)satisfaction than obtaining a better-than-expected grade. Although loss aversion is a well-established finding, some authors have demonstrated that it can be moderated (diminished, to be precise). Within this line of research, we also examine whether the students’ emotional response (satisfaction/dissatisfaction) to their performance can be moderated by different musical stimuli. We design an experiment that tests loss aversion in students’ performance under three conditions: ‘classical music’, ‘heavy music’ and ‘no music’. The empirical application supports the reference-dependence and loss-aversion hypotheses (significant at p < 0.05), and the musical stimuli do influence the students’ satisfaction with their grades (at p < 0.05). Analyzing students’ perceptions is vital to understanding how they process information. In particular, it is fundamental to know the elements that can favour not only students’ academic performance but also their attitude towards certain results. This study demonstrates that musical stimuli can modify the perception of a given academic result: the effects of ‘positive’ and ‘negative’ surprises are larger or smaller not only as a function of the size of these surprises, but also according to the musical stimulus received.
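One way to see the shape of such a test: split the grade surprise into its positive and negative parts, regress satisfaction on both (interacted with the music condition), and compare the two slopes; loss aversion predicts a steeper slope on the loss side. The sketch below uses synthetic data and the statsmodels formula API purely for illustration; it is not the authors' dataset or exact specification:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
surprise = rng.normal(0, 1, n)          # obtained grade minus expected grade
gain = np.maximum(surprise, 0)          # better-than-expected component
loss = np.minimum(surprise, 0)          # worse-than-expected component
music = rng.choice(["none", "classical", "heavy"], n)
# Build the outcome with a deliberately steeper loss slope (2.0 vs 1.0).
satisfaction = 1.0 * gain + 2.0 * loss + rng.normal(0, 0.5, n)

df = pd.DataFrame({"satisfaction": satisfaction, "gain": gain,
                   "loss": loss, "music": music})
# Interaction terms allow the slopes to differ across music conditions.
fit = smf.ols("satisfaction ~ (gain + loss) * C(music)", data=df).fit()
# Loss aversion at the reference condition: is the loss slope steeper?
print(fit.t_test("loss - gain = 0"))
```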
Abstract:
In the EU circuit (especially the European Parliament, the Council and Coreper), as well as in the national parliaments of the EU Member States, one observes a powerful tendency to regard 'subsidiarity' as a 'political' issue. Moreover, subsidiarity is frequently seen as a one-way street: powers going 'back to' the Member States. Both interpretations are at least partly flawed and less than helpful when looking for practical ways to deal with subsidiarity at both the EU and Member State levels. The present paper shows that subsidiarity as a principle is profoundly 'functional' in nature and hence is, and must be, a two-way principle. A functional subsidiarity test is developed and its application is illustrated for a range of policy issues in the internal market in its widest sense, for equity, and for macro-economic stabilisation questions in European integration. Misapplications of 'subsidiarity' are also demonstrated. For a good understanding, subsidiarity being a functional, two-way principle means neither that elected politicians should not have the final (political!) say (for which they are accountable), nor that subsidiarity tests, even if properly conducted, cannot and will not be politicised once the results enter the policy debate. Such politicisation forms a natural run-up to decision-making by those elected for it. But the quality and reasoning of the test, as well as the structuring of the information in a logical sequence (in accordance with the current protocol and with the one in the constitutional treaty), is likely to be directly helpful for decision-makers confronted with complicated and often specialised proposals. EU debate and decision-making are therefore best served by separating the functional subsidiarity test (prepared by independent professionals) from the final political decision itself. If the test were accepted Union-wide, it would also assist national parliaments in conducting comparable tests in a relatively short period, as the basis for possible joint action (as suggested by the constitutional treaty). The core of the paper explains how the test is formulated and applied. A functional approach to subsidiarity in the framework of European representative democracy seeks to find the optimal assignment of regulatory or policy competences to the various tiers of government. In the final analysis, this is about structures facilitating the highest possible welfare in the Union, in the fundamental sense that preferences and needs are best satisfied. What is required for such an analysis is no less than a systematic cost/benefit framework to assess the (de)merits of (de)centralisation in the EU.
Abstract:
Questions regarding oil spills remain high on the political agenda. Legal scholars, legislators, and the international, European and national courts struggle to determine key issues, such as who is to be held liable for oil spills, under which conditions, and for which damage. The international regime on oil spills was meant to establish an “equilibrium” between the needs of the victims (being compensated for their harm) and the needs of the economic actors (being able to continue their activities). There is, however, a constantly growing body of legal scholarship that criticizes the regime. Indeed, the victims of a recent oil spill, the Erika, have tried to escape the international regime on oil spills and to rely instead on the provisions of national criminal law or EC waste legislation. In parallel, the EC legislator has questioned the sufficiency of the international regime, as it has started preparing legislative acts of its own. One can in fact wonder whether challenging the international liability regime with the European Convention on Human Rights could prove to be a way forward, both for EC regulators and for the victims of oil spills. This paper claims that the right to property, as enshrined in Article P1-1 of the Convention, could be used to challenge the limited environmental liability provisions of the international frameworks.
Abstract:
Researchers often use 3-way interactions in moderated multiple regression analysis to test the joint effect of 3 independent variables on a dependent variable. However, further probing of significant interaction terms varies considerably and is sometimes error-prone. The authors developed a significance test for slope differences in 3-way interactions and illustrate its importance for testing psychological hypotheses. Monte Carlo simulations revealed that sample size, the magnitude of the slope difference, and data reliability affected test power. Application of the test to published data detected some slope differences that alternative probing techniques had missed, leading to changes in results and conclusions. The authors conclude by discussing the test's applicability for psychological research. Copyright 2006 by the American Psychological Association.
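The logic of a slope-difference test can be sketched directly. In the model Y = b0 + b1X + b2Z + b3W + b4XZ + b5XW + b6ZW + b7XZW, the simple slope of Y on X at a given (Z, W) is b1 + b4Z + b5W + b7ZW, so the difference between any two simple slopes is a linear combination of coefficients that can be tested with an ordinary t test. The sketch below (synthetic standardized data and statsmodels, not the authors' code or exact procedure) tests the difference between the slopes at (Z, W) = (+1, +1) and (+1, −1), which reduces to testing b5 + b7 = 0:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
X, Z, W = (rng.normal(0, 1, n) for _ in range(3))
# Synthetic outcome with a genuine 3-way interaction (b7 = 0.4).
Y = 0.5 * X + 0.3 * Z + 0.3 * W + 0.4 * X * Z * W + rng.normal(0, 1, n)
df = pd.DataFrame({"X": X, "Z": Z, "W": W, "Y": Y})

fit = smf.ols("Y ~ X * Z * W", data=df).fit()
b = fit.params

def slope(z, w):
    """Simple slope of Y on X at the given values of Z and W."""
    return b["X"] + b["X:Z"] * z + b["X:W"] * w + b["X:Z:W"] * z * w

print("slope at (+1,+1):", round(slope(1, 1), 3),
      "  slope at (+1,-1):", round(slope(1, -1), 3))
# The difference is 2*(b5 + b7), so test the linear combination b5 + b7 = 0.
names = list(b.index)
R = np.zeros(len(names))
R[names.index("X:W")] = 1.0
R[names.index("X:Z:W")] = 1.0
print(fit.t_test(R))  # H0: no slope difference
```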
Abstract:
In the quest to secure the much-vaunted benefits of North Sea oil, highly non-incremental technologies have been adopted, nowhere more so than in the early fields of the central and northern North Sea. By focusing on the inflexible nature of North Sea hardware in such fields, this thesis examines the problems that this sort of technology might pose for policy making. More particularly, the following issues are raised. First, there are the implications of non-incremental technical change for the successful conduct of oil policy. Here, the focus is on the micro-economic performance of the first generation of North Sea oil fields and the manner in which this relates to government policy. Secondly, the question is posed as to whether more flexible, perhaps more incremental, policy alternatives were open to the decision makers. The conclusions drawn concern the degree to which non-incremental shifts in policy permit decision makers to achieve their objectives at relatively low cost. To discover cases where non-incremental policy making has led to success in this way would be to falsify the thesis that decision makers are best served by employing incremental politics as an approach to complex problem solving.
Abstract:
A description of the background to testing friction materials for automotive brakes explains the need for a rapid, inexpensive means of assessing their behaviour in a way which is both accurate and meaningful. Various methods of controlling inertia dynamometers to simulate road vehicles are rejected in favour of programming by means of a commercially available XY plotter. An investigation of brake service conditions is used to set up test schedules, and a dynamometer programming unit is built to enable service conditions on vehicles to be simulated on a full-scale dynamometer. A technique is developed by which accelerated testing can be achieved without operating under overload conditions, saving time and cost without sacrificing validity. The development of programming by XY plotter is described, with a method of operating one XY plotter to programme the machine, monitor its own behaviour, and plot its own results in logical sequence. Commissioning trials are described, and the generation of reproducible results in frictional behaviour and material durability is discussed. Techniques are developed to cross-check the operation of the machine in retrospect, and to correct results retrospectively in the event of malfunctions. Sensitivity errors in the measuring circuits are revealed between calibrations, whilst leaving the recorded results almost unaffected by error. Typical results of brake lining tests are used to demonstrate the range of performance parameters which can be studied by use of the machine. Successful test investigations completed on the machine are reported, including comments on the behaviour of cast iron drums and discs. The machine shows that materials can repeat their complex friction/temperature/speed/pressure relationships to a reproducibility of the order of ±0.003 μ and ±0.0002 in. thickness loss during wear tests. A discussion of practical and academic implications completes the report, with recommendations for further work in both fields.
Abstract:
Objectives: To develop a tool for the accurate reporting and aggregation of findings from each of the multiple methods used in a complex evaluation in an unbiased way. Study Design and Setting: We developed a Method for Aggregating The Reporting of Interventions in Complex Studies (MATRICS) within a gastroenterology study [Evaluating New Innovations in (the delivery and organisation of) Gastrointestinal (GI) endoscopy services by the NHS Modernisation Agency (ENIGMA)]. We subsequently tested it on a different gastroenterology trial [Multi-Institutional Nurse Endoscopy Trial (MINuET)]. We created three layers to define the effects, methods, and findings from ENIGMA. We assigned numbers to each effect in layer 1 and letters to each method in layer 2. We applied an alphanumeric code based on layers 1 and 2 to every finding in layer 3 to link the aims, methods, and findings. We illustrated analogous findings by assigning more than one alphanumeric code to a finding, and showed that more than one effect or method could report the same finding. We presented contradictory findings by listing them in adjacent rows of the MATRICS. Results: MATRICS was useful for the effective synthesis and presentation of findings of the multiple methods in ENIGMA, and we subsequently tested it successfully by applying it to the MINuET trial. Conclusion: MATRICS is effective for synthesizing the findings of complex, multiple-method studies.
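The layered alphanumeric coding is easy to picture as a small data structure. A minimal sketch with invented placeholder content (not the actual ENIGMA effects, methods, or findings):

```python
# Layer 1: effects (numbered).  Layer 2: methods (lettered).
effects = {1: "Waiting times", 2: "Patient satisfaction"}
methods = {"A": "Routine-data analysis", "B": "Patient survey"}

# Layer 3: findings, linked to layers 1 and 2 by alphanumeric codes.
# Analogous findings would carry more than one code; contradictory
# findings would sit in adjacent rows of the matrix.
findings = [
    ("1A", "Median wait fell after the service redesign"),
    ("1B", "Patients reported shorter perceived waits"),
    ("2B", "Satisfaction was unchanged despite shorter waits"),
]

for code, text in findings:
    effect, method = effects[int(code[0])], methods[code[1]]
    print(f"{code}: [{effect} / {method}] {text}")
```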
Abstract:
This research provides data on the feasibility of using fourth-generation evaluation during the process of instruction. A semester-length course entitled "Multicultural Communications" (PUR 5406/4934) was designed and used in this study, in response to the need for the communications profession to produce well-trained, culturally sensitive practitioners for the work force and the marketplace. A revised pause model, consisting of three one-on-one in-depth interviews conducted outside of class, three reflection periods during class, and a self-reflective essay prepared one week before the end of the course, was analyzed. Narrative and graphic summaries of participant responses produced significant results. The revised pause model was found to be an effective evaluation method for use in multicultural education under certain conditions, as perceived by the participants in the study. Participants' self-perceived behavior change and knowledge acquisition were identified through use of the revised pause model. The results suggest that the revised pause model of evaluation gives instructors teaching multicultural education in schools of journalism and mass communication yet another way of enhancing their ability to become both the researcher and the research subject. In addition, the introduction of a qualitative model was found to be a more useful way of generating participant involvement and introspection. Finally, the instructional design of the course used in the study provides communication educators with a practical way of preparing their students to be effective communicators in a multicultural world.
Abstract:
Individuals of Hispanic origin are the nation's largest minority (13.4%). There is therefore a need for models and methods that are culturally appropriate for mental health research with this burgeoning population. This is an especially salient issue when applying family systems theories to Hispanics, who are heavily influenced by family bonds in a way that appears to differ from the more individualistic non-Hispanic White culture. Bowen asserted that his family systems concept of differentiation of self, which values both individuality and connectedness, could be applied universally. However, there is a paucity of research systematically assessing the applicability of the differentiation-of-self construct in ethnic minority populations. This dissertation tested a multivariate model of differentiation of self with a Hispanic sample. It examined the manner in which the construct of differentiation of self is assessed and how accurately it represents this particular ethnic minority group's functioning. Additionally, the proposed model included key contextual variables (e.g., anxiety, relationship satisfaction, attachment- and acculturation-related variables) that have been shown to be related to the differentiation process. The results from structural equation modeling (SEM) analyses confirmed and extended previous research, and helped to illuminate the complex relationships between key factors that need to be considered in order to better understand individuals with this cultural background. Overall, the results indicated that the manner in which Hispanic individuals negotiate the boundaries of interconnectedness with a sense of individual expression appears to differ from that of their non-Hispanic White counterparts in some important ways. These findings illustrate the need for research on Hispanic individuals within a more culturally sensitive framework.
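For readers unfamiliar with the modeling step, the general shape of such an SEM can be sketched with the semopy package. The lavaan-style specification below (a latent differentiation-of-self factor measured by three indicators and regressed on observed contextual variables) uses invented variable names and random placeholder data; it shows only the model form, not the dissertation's measures or results:

```python
import numpy as np
import pandas as pd
import semopy

# Random placeholder data for six observed indicators.
rng = np.random.default_rng(2)
data = pd.DataFrame(rng.normal(size=(250, 6)),
                    columns=["dos1", "dos2", "dos3", "anx1", "anx2", "accult"])

# Measurement model (=~) plus structural regressions (~).
desc = """
DoS =~ dos1 + dos2 + dos3
DoS ~ anx1 + anx2 + accult
"""
model = semopy.Model(desc)
model.fit(data)
print(model.inspect())  # parameter estimates with standard errors
```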