990 results for Controlled experiment
Abstract:
While object-oriented programming offers powerful solutions for today's software developers, its success has created difficult problems in class documentation and testing. In Java, two tools provide assistance: Javadoc allows class interface documentation to be embedded as code comments, and JUnit supports unit testing by providing assert constructs and a test framework. This paper describes JUnitDoc, an integration of Javadoc and JUnit that provides better support for class documentation and testing. With JUnitDoc, test cases are embedded in Javadoc comments and used both as examples for documentation and as test cases for quality assurance. JUnitDoc extracts the test cases for use in HTML files serving as class documentation and in JUnit drivers for class testing. To address the difficult problem of testing inheritance hierarchies, JUnitDoc provides a novel solution in the form of a parallel test hierarchy. A small controlled experiment compares the readability of JUnitDoc documentation to formal documentation written in Object-Z. Copyright (c) 2005 John Wiley & Sons, Ltd.
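The abstract does not show JUnitDoc's actual syntax. The sketch below is our own illustration of the embedding idea, with a hypothetical comment convention and a hand-written stand-in for the JUnit driver such a tool might generate (class names and the comment format are assumptions, not taken from the paper):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

/**
 * A simple stack of ints.
 *
 * A JUnitDoc-style tool would extract the embedded snippet below both into
 * the HTML documentation (as a usage example) and into a generated JUnit
 * driver. The format shown here is hypothetical.
 *
 * <pre>
 * IntStack s = new IntStack();
 * s.push(42);
 * assertEquals(42, s.pop());
 * </pre>
 */
class IntStack {
    private final java.util.ArrayDeque<Integer> items = new java.util.ArrayDeque<>();

    /** Pushes a value onto the stack. */
    void push(int v) { items.push(v); }

    /** Removes and returns the most recently pushed value. */
    int pop() { return items.pop(); }
}

/** A hand-written stand-in for the JUnit driver such a tool might generate. */
public class IntStackDocTest {
    @Test
    public void pushThenPopReturnsSameValue() {
        IntStack s = new IntStack();
        s.push(42);
        assertEquals(42, s.pop());
    }
}
```

The appeal of the design is that the snippet in the comment is a single source of truth: it renders as a usage example in the documentation and runs as a regression test.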
Abstract:
The results of empirical studies are limited to particular contexts and difficult to generalise, and the studies themselves are expensive to perform. Despite these problems, empirical studies in software engineering can be made effective, and they are important to both researchers and practitioners. The key to their effectiveness lies in maximising the information that can be gained by examining existing studies, conducting power analyses for an accurate minimum sample size, and benefiting from previous studies through replication. This approach was applied in a controlled experiment examining the combination of automated static analysis tools and code inspection in the context of verification and validation (V&V) of concurrent Java components. The combination of these V&V technologies was shown to be cost-effective despite the size of the study, a result that contributes to research in V&V technology evaluation.
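To make the power-analysis step concrete, here is a minimal sketch of an a-priori minimum-sample-size calculation using the standard normal approximation for a two-sample comparison of means; the effect size, significance level, and power below are illustrative defaults, not values from the study:

```java
/**
 * Minimal a-priori power analysis (normal approximation) for a two-sample
 * comparison of means. All parameter values are illustrative defaults,
 * not figures reported in the study.
 */
public class MinimumSampleSize {
    public static void main(String[] args) {
        double zAlpha = 1.9600; // normal quantile for two-sided alpha = 0.05
        double zBeta = 0.8416;  // normal quantile for power = 0.80
        double d = 0.8;         // assumed standardized effect size (Cohen's d)

        // n per group = 2 * ((z_alpha + z_beta) / d)^2
        double n = 2.0 * Math.pow((zAlpha + zBeta) / d, 2);

        System.out.printf("minimum n per group = %d%n", (int) Math.ceil(n)); // 25
    }
}
```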
Abstract:
The Unified Modeling Language (UML) has quickly become the industry standard for object-oriented software development. It is being widely used in organizations and institutions around the world. However, UML is often found to be too complex for novice systems analysts. Although prior research has identified difficulties novice analysts encounter in learning UML, no viable solution has been proposed to address these difficulties. Sequence-diagram modeling, in particular, has largely been overlooked. The sequence diagram models the behavioral aspects of an object-oriented software system in terms of interactions among its building blocks, i.e., objects and classes. It is one of the most commonly used UML diagrams in practice. However, there has been little research on sequence-diagram modeling, and the current literature scarcely provides effective guidelines for developing a sequence diagram. Such guidelines would be greatly beneficial to novice analysts who, unlike experienced systems analysts, do not possess the prior experience needed to easily learn how to develop a sequence diagram. There is thus a need for an effective sequence-diagram modeling technique for novices. This dissertation reports a research study that identified novice difficulties in modeling a sequence diagram and proposed a technique called CHOP (CHunking, Ordering, Patterning), designed to reduce cognitive load by addressing the cognitive complexity of sequence-diagram modeling. The CHOP technique was evaluated in a controlled experiment against a technique recommended in a well-known textbook, which was found to be representative of the approaches provided in many textbooks and in the practitioner literature. The results indicated that novice analysts performed better using the CHOP technique, an outcome that seems to have been enabled by the pattern-based heuristics the technique provides. Novice analysts also rated the CHOP technique as more useful, although not significantly easier to use, than the control technique. The study established that the CHOP technique is an effective sequence-diagram modeling technique for novice analysts.
Abstract:
The dissertation takes a multivariate approach to answer the question of how applicant age, after controlling for other variables, affects employment success in a public organization. In addition to applicant age, five other categories of variables are examined: organization/applicant variables describing the relationship of the applicant to the organization; organization/position variables describing the target position as it relates to the organization; episodic variables such as applicant age relative to the ages of competing applicants; economic variables relating to the salary needs of older applicants; and cognitive variables that may affect the decision maker's evaluation of the applicant. An exploratory phase of research employs archival data from approximately 500 decisions made in the past three years to hire or promote applicants for positions in one public health administration organization. A logit regression model is employed to examine the probability that the variables modify the effect of applicant age on employment success. A confirmatory phase of the dissertation is a controlled experiment in which hiring decision makers from the same public organization perform a simulated hiring decision exercise to evaluate hypothetical applicants of similar qualifications but different ages. The responses of the decision makers to a series of bipolar adjective scales add support to the cognitive component of the theoretical model of the hiring decision. A final section contains information gathered from interviews with key informants. Applicant age has tended to have a curvilinear relationship with employment success. For some positions, the mean age of the applicants most likely to succeed varies with the values of the five groups of moderating variables. The research contributes not only to the practice of public personnel administration, but is also useful in examining larger public policy issues associated with an aging workforce.
Abstract:
Wireless Sensor and Actuator Networks (WSAN) are a key component of ubiquitous computing systems and have many applications in different knowledge domains. Programming for such networks is very hard and requires developers to know the specificities of the available sensor platforms, steepening the learning curve for developing WSAN applications. In this work, an MDA (Model-Driven Architecture) approach for WSAN application development called ArchWiSeN is proposed. The goal of this approach is to facilitate the development task by providing: (i) a WSAN domain-specific language; (ii) a methodology for WSAN application development; and (iii) an MDA infrastructure composed of several software artifacts (PIM, PSMs and transformations). ArchWiSeN allows the direct contribution of domain experts in WSAN application development without the need for specialized knowledge of WSAN platforms and, at the same time, allows network experts to manage the application requirements without specific knowledge of the application domain. Furthermore, the approach also aims to enable developers to express and validate functional and non-functional requirements of the application, incorporate services offered by WSAN middleware platforms, and promote reuse of the developed software artifacts. In this sense, this thesis proposes an approach that includes all WSAN development stages for current and emerging scenarios through the proposed MDA infrastructure. The proposal was evaluated by: (i) a proof of concept encompassing three different scenarios, in which the MDA infrastructure was used to describe the WSAN development process following the application engineering process; (ii) a controlled experiment comparing the use of the proposed approach with the traditional method of WSAN application development; (iii) an analysis of ArchWiSeN's support for middleware services, to ensure that WSAN applications using such services can achieve their requirements; and (iv) a systematic analysis of ArchWiSeN in terms of desired characteristics for an MDA tool, compared with other existing MDA tools for WSAN.
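As a toy illustration of the model-driven idea behind such an approach (not ArchWiSeN's actual DSL or transformations, which the abstract does not detail), a platform-independent model element can be mechanically transformed into platform-specific code:

```java
/**
 * Toy illustration of the MDA idea: a platform-independent model element
 * transformed to platform-specific code. The model element, target platform,
 * and generated calls are entirely hypothetical.
 */
public class MdaSketch {

    /** Platform-independent model element: "sample X every N seconds". */
    record SensingTask(String phenomenon, int periodSeconds) {}

    /** A trivial model-to-text transformation targeting an imaginary platform. */
    static String toPlatformCode(SensingTask task) {
        return "timer_start(" + task.periodSeconds() + ");\n"
             + "on_timer() { radio_send(read_sensor(\"" + task.phenomenon() + "\")); }\n";
    }

    public static void main(String[] args) {
        System.out.print(toPlatformCode(new SensingTask("temperature", 30)));
    }
}
```

The point of the pattern is that domain experts work only at the level of the model element, while the transformation encapsulates the platform expertise.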
Abstract:
One of the most common forms of reuse is API usage. However, one of the main obstacles to effective usage is the lack of accessible and easy-to-understand documentation. Several papers have proposed alternatives to make API documentation more understandable, or even more detailed. However, these studies have not taken into account the complexity of understanding the examples, which would make such documentation adaptable to developers with different levels of experience. In this work we developed and evaluated four different methodologies for generating API tutorials from Stack Overflow content, organizing them according to complexity of understanding. The methodologies were evaluated through tutorials generated for the Swing API. A survey was conducted to evaluate eight different features of the generated tutorials. The overall assessment of the tutorials was positive on several characteristics, showing the feasibility of automatically generated tutorials. In addition, the use of criteria for presenting tutorial elements in order of complexity, the separation of the tutorial into basic and advanced parts, the nature of the selected posts, and the existence of didactic source code produced significantly different results depending on the chosen generation methodology. A second study compared the official documentation of the Android API with the tutorial generated by the best methodology of the previous study. A controlled experiment was conducted with students having their first contact with Android development. In the experiment, these students carried out two tasks, one using the official Android documentation and the other using the generated tutorial. The results showed that in most cases the students performed better on tasks when they used the tutorial proposed in this work. The main reasons for the students' poor performance on tasks using the official API documentation were the lack of usage examples and the documentation's difficulty of use.
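For a sense of what the "basic part" of a generated Swing tutorial might open with, here is a minimal self-contained example of the kind such a tutorial would presumably present (our illustration, not output from the study's generator):

```java
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.SwingUtilities;

/** A minimal Swing example of the kind a basic tutorial section might show. */
public class HelloSwing {
    public static void main(String[] args) {
        // Swing components must be created on the Event Dispatch Thread.
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Hello, Swing");
            JButton button = new JButton("Click me");
            button.addActionListener(e -> System.out.println("Clicked!"));
            frame.add(button);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setSize(300, 120);
            frame.setVisible(true);
        });
    }
}
```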
Abstract:
The economic rationale for public intervention into private markets through price mechanisms is twofold: to correct market failures and to redistribute resources. Financial incentives are one such price mechanism. In this dissertation, I specifically address the role of financial incentives in providing social goods in two separate contexts: a redistributive policy that enables low-income working families to access affordable childcare in the US and an experimental pay-for-performance intervention to improve population health outcomes in rural India. In the first two papers, I investigate the effects of government incentives for providing grandchild care on grandmothers’ short- and long-term outcomes. In the third paper, coauthored with Manoj Mohanan, Grant Miller, Katherine Donato, and Marcos Vera-Hernandez, we use an experimental framework to consider the effects of financial incentives in improving maternal and child health outcomes in the Indian state of Karnataka.
Grandmothers provide a significant amount of childcare in the US, but little is known about how this informal, and often uncompensated, time transfer impacts their economic and health outcomes. The first two chapters of this dissertation address the impact of federally funded, state-level means-tested programs that compensate grandparent-provided childcare on the retirement security of older women, an economically vulnerable group of considerable policy interest. I use the variation in the availability and generosity of childcare subsidies to model the effect of government payments for grandchild care on grandmothers’ time use, income, earnings, interfamily transfers, and health outcomes. After establishing that more generous government payments induce grandmothers to provide more hours of childcare, I find that grandmothers adjust their behavior by reducing their formal labor supply and earnings. Grandmothers make up for lost earnings by claiming Social Security earlier, increasing their reliance on Supplemental Security Income (SSI) and reducing financial transfers to their children. While the policy does not appear to negatively impact grandmothers’ immediate economic well-being, there are significant costs to the state, in terms of both up-front costs for care payments and long-term costs as a result of grandmothers’ increased reliance on social insurance.
The final paper, The Role of Non-Cognitive Traits in Response to Financial Incentives: Evidence from a Randomized Control Trial of Obstetrics Care Providers in India, is coauthored with Manoj Mohanan, Grant Miller, Katherine Donato and Marcos Vera-Hernandez. We report the results from “Improving Maternal and Child Health in India: Evaluating Demand and Supply Side Strategies” (IMACHINE), a randomized controlled experiment designed to test the effectiveness of supply-side incentives for private obstetrics care providers in rural Karnataka, India. In particular, the experimental design compares two different types of incentives: (1) those based on the quality of inputs providers offer their patients (inputs contracts) and (2) those based on the reduction in the incidence of four adverse maternal and neonatal health outcomes (outcomes contracts). Along with studying the relative effectiveness of the different financial incentives, we also investigate the role of provider characteristics, preferences, expectations, and non-cognitive traits in mitigating the effects of incentive contracts.
We find that both contract types, inputs and outputs incentive contracts, reduce rates of post-partum hemorrhage, the leading cause of maternal mortality in India, by about 20%. We also find some evidence of multitasking, as providers under output incentive contracts reduce the level of postnatal newborn care received by their patients. Patient health improvements in response to both contract types are concentrated among higher-trained providers, while improvements in patient care are concentrated among lower-trained providers. Contrary to our expectations, we also find improvements in patient health to be concentrated among the most risk-averse providers, while more patient providers respond relatively little to the incentives; these differences are most evident in the outputs contract arm. The results are opposite for patient care outcomes: risk-averse providers have significantly lower rates of patient care, and more patient providers provide higher-quality care in response to the outputs contract. We find evidence that providers' overconfidence in their expectations about possible improvements reduces the effectiveness of both types of incentive contracts for improving both patient outcomes and patient care. Finally, we find no heterogeneous response based on non-cognitive traits.
Abstract:
Further steps are needed to establish feasible alleviation strategies that can reduce the impacts of ocean acidification whilst ensuring minimal biological side-effects in the process. Whilst there is a growing body of literature on the biological impacts of many other carbon dioxide reduction techniques, seemingly little is known about enhanced alkalinity. For this reason, we investigated the potential physiological impacts of using chemical sequestration as an alleviation strategy. In a controlled experiment, Carcinus maenas were acutely exposed to concentrations of Ca(OH)2 that would be required to reverse the decline in ocean surface pH and return it to pre-industrial levels. Acute exposure significantly affected the acid-base balance of all individuals, resulting in slight respiratory alkalosis and hyperkalemia, which was strongest in mature females. Although the trigger for both of these responses is currently unclear, this study has shown that alkalinity addition does alter acid-base balance in this comparatively robust crustacean species.
Abstract:
A long-term (10 months) controlled experiment was conducted to test the impact of increased partial pressure of carbon dioxide (pCO2) on common calcifying coral reef organisms. The experiment was conducted in replicate continuous flow coral reef mesocosms flushed with unfiltered sea water from Kaneohe Bay, Oahu, Hawaii. Mesocosms were located in full sunlight and experienced diurnal and seasonal fluctuations in temperature and sea water chemistry characteristic of the adjacent reef flat. Treatment mesocosms were manipulated to simulate an increase in pCO2 to levels expected in this century [midday pCO2 levels exceeding control mesocosms by 365 ± 130 µatm (mean ± sd)]. Acidification had a profound impact on the development and growth of crustose coralline algae (CCA) populations. During the experiment, CCA developed 25% cover in the control mesocosms and only 4% in the acidified mesocosms, representing an 86% relative reduction. Free-living associations of CCA known as rhodoliths living in the control mesocosms grew at a rate of 0.6 g buoyant weight per year while those in the acidified experimental treatment decreased in weight at a rate of 0.9 g buoyant weight per year, representing a 250% difference. CCA play an important role in the growth and stabilization of carbonate reefs, so future changes of this magnitude could greatly impact coral reefs throughout the world. Coral calcification decreased between 15% and 20% under acidified conditions. Linear extension decreased by 14% under acidified conditions in one experiment. Larvae of the coral Pocillopora damicornis were able to recruit under the acidified conditions. In addition, there was no significant difference in production of gametes by the coral Montipora capitata after 6 months of exposure to the treatments.
Abstract:
Following and contributing to the ongoing shift from more structuralist, system-oriented to more pragmatic, socio-culturally oriented anglicism research, this paper investigates to what extent the global spread of English affects naming patterns in Flanders. To this end, a diachronic database of first names was constructed, containing the 75 most popular boy and girl names from 2005 to 2014. In a first step, the etymological background of these names is documented and the evolution in popularity of the English names in the database is tracked. Results reveal no notable surge in the preference for English names. The paper complements these database-driven results with an experimental study, aiming to show how associations through referents are in this case more telling than associations through phonological form (here based on etymology). Focusing on the socio-cultural background of first names in general and of Anglo-American pop culture in particular, the second part of the study reports on results from a survey in which participants were asked to name the first three celebrities that leap to mind when hearing a certain first name (e.g. Lana, triggering the response Del Rey). Very clear associations are found between certain first names and specific celebrities from Anglo-American pop culture. Linking back to marketing research and the social turn in onomastics, we discuss how these celebrities might function as referees, and how social stereotypes surrounding these referees are metonymically attached to their first names. Similar to the country-of-origin effect in marketing, these metonymical links could very well be the reason why parents select specific “celebrity names”. Although further attitudinal research is needed, this paper supports the importance of including socio-cultural parameters in onomastic research.
Abstract:
This controlled experiment examined how academic achievement and cognitive, emotional and social aspects of perceived learning are affected by the level of medium naturalness (face-to-face, one-way and two-way videoconferencing) and by learners’ personality traits (extroversion–introversion and emotional stability–neuroticism). The Media Naturalness Theory explains the degree of medium naturalness by comparing its characteristics to face-to-face communication, considered to be the most natural form of communication. A total of 76 participants were randomly assigned to three experimental conditions: face-to-face, one-way and two-way videoconferencing. E-learning conditions were conducted through Zoom videoconferencing, which enables natural and spontaneous communication. Findings shed light on the trade-off involved in media naturalness: one-way videoconferencing, the less natural learning condition, enhanced the cognitive aspect of perceived learning but compromised the emotional and social aspects. Regarding the impact of personality, neurotic students tended to enjoy and succeed more in face-to-face learning, whereas emotionally stable students enjoyed and succeeded in all of the learning conditions. Extroverts tended to enjoy more natural learning environments but had lower achievements in these conditions. In accordance with the ‘poor get richer’ principle, introverts enjoyed environments with a low level of medium naturalness. However, they remained focused and had higher achievements in the face-to-face learning.
Abstract:
This report examines the results of a pilot study, which used a method of evaluation called randomised controlled trials (RCTs) to see if a popular business support scheme called Creative Credits worked effectively. The pilot study, which began in Manchester in 2009, was structured so that vouchers, or 'Creative Credits', were randomly allocated to small and medium-sized businesses applying to invest in creative projects such as developing websites, video production and creative marketing campaigns, to see if they had a real effect on innovation. The research found that the firms awarded Creative Credits enjoyed a short-term boost in their innovation and sales growth in the six months following completion of their creative projects. However, the positive effects were not sustained, and after 12 months there was no longer a statistically significant difference between the group that received the credits and the group that didn’t. The report argues that these results would have remained hidden using the evaluation methods normally used by government, and calls for RCTs to be used more widely when evaluating policies to support business growth.
Abstract:
Compaction control using lightweight deflectometers (LWD) is currently being evaluated in several states and countries, and has been fully implemented for pavement construction quality assurance (QA) by a few. Broader implementation has been hampered by the lack of a widely recognized standard for interpreting the load and deflection data obtained during construction QA testing. More specifically, reliable and practical procedures are required for relating these measurements to the fundamental material property used in pavement design: modulus. This study presents a unique set of data and analyses for three different LWDs in a large-scale controlled-condition experiment. Three 4.5 m x 4.5 m test pits were designed and constructed at target moisture and density conditions simulating acceptable and unacceptable construction quality. LWD testing was performed on the constructed layers, along with static plate loading testing, conventional nuclear gauge moisture-density testing, and non-nuclear gravimetric and volumetric water content measurements. Additional material was collected for routine and exploratory tests in the laboratory. These included grain size distributions, soil classification, moisture-density relations, resilient modulus testing at optimum and field conditions, and an advanced experiment involving LWD testing on top of the Proctor compaction mold. This unique large-scale controlled-condition experiment provides a high-quality resource of data that future researchers can use to develop a rigorous, theoretically sound, and straightforward technique for standardizing LWD determination of modulus and construction QA for unbound pavement materials.
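For reference, the link between LWD load-deflection measurements and modulus is conventionally made through the Boussinesq elastic half-space solution; a commonly used form (a standard textbook result, not a formula quoted from this report) estimates the surface modulus as

\[ E_0 = \frac{f\,(1-\nu^2)\,\sigma_0\,a}{w_0} \]

where \(E_0\) is the estimated modulus, \(\nu\) is Poisson's ratio, \(\sigma_0\) is the peak stress under the plate, \(a\) is the plate radius, \(w_0\) is the peak plate deflection, and \(f\) is a stress-distribution factor (for example, 2 for a flexible plate and \(\pi/2\) for a rigid plate).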
Abstract:
Considering the increasing popularity of network-based control systems and the huge adoption of IP networks (such as the Internet), this paper studies the influence of network quality-of-service (QoS) parameters on quality-of-control parameters. An example control loop is implemented using two LonWorks networks (CEA-709.1) interconnected by an emulated IP network, in which important QoS parameters such as delay and delay jitter can be completely controlled. Mathematical definitions are provided according to the literature, and the results of the network-based control loop experiment are presented and discussed.
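The abstract does not reproduce its mathematical definitions. A common convention (assumed here, in the spirit of RFC 3550's interarrival jitter estimator) treats jitter as a smoothed running estimate of the variation in packet delay, as in this minimal sketch:

```java
import java.util.List;

/** Sketch of common QoS metrics: mean delay and RFC 3550-style smoothed jitter. */
public class QosMetricsSketch {

    /** Mean one-way delay over a list of per-packet delays (milliseconds). */
    static double meanDelay(List<Double> delaysMs) {
        return delaysMs.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
    }

    /** RFC 3550-style jitter estimate: J += (|D| - J) / 16 per packet pair. */
    static double smoothedJitter(List<Double> delaysMs) {
        double jitter = 0.0;
        for (int i = 1; i < delaysMs.size(); i++) {
            double d = Math.abs(delaysMs.get(i) - delaysMs.get(i - 1));
            jitter += (d - jitter) / 16.0;
        }
        return jitter;
    }

    public static void main(String[] args) {
        List<Double> delays = List.of(20.0, 22.0, 19.5, 35.0, 21.0);
        System.out.printf("mean delay = %.2f ms, jitter = %.2f ms%n",
                meanDelay(delays), smoothedJitter(delays));
    }
}
```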