921 results for test development
Abstract:
The Galway Bay wave energy test site promises to be a vital resource for wave energy researchers and developers. As part of the development of this site, a floating power system is being developed to provide power and data acquisition capabilities and to act as a local grid connection, allowing up to three wave energy converter devices to be connected. This work presents results from scaled physical model testing and numerical modelling of the floating power system and an oscillating water column connected by an umbilical. Results from this study will be used to inform further scaled testing as well as the full-scale design and build of the floating power system in Galway Bay.
Abstract:
Background: Sickle Cell Disease (SCD) is a genetic hematological disorder that affects more than 7 million people globally (NHLBI, 2009). It is estimated that 50% of adults with SCD experience pain on most days, with one third experiencing chronic pain daily (Smith et al., 2008). Persons with SCD also experience higher levels of pain catastrophizing (feelings of helplessness, pain rumination and magnification) than persons with other chronic pain conditions, which is associated with increases in pain intensity, pain behavior, analgesic consumption, and frequency and duration of hospital visits, and with reduced daily activities (Sullivan, Bishop, & Pivik, 1995; Keefe et al., 2000; Gil et al., 1992, 1993). Therefore, effective interventions are needed that can successfully be used to manage pain and pain-related outcomes (e.g., pain catastrophizing) in persons with SCD. A review of the literature revealed limited information regarding the feasibility and efficacy of non-pharmacological approaches for pain in persons with SCD, finding an average effect size of .33 on pain reduction across measurable non-pharmacological studies. Second, a prospective study of persons with SCD who received care for a vaso-occlusive crisis (VOC; N = 95) found: (1) high levels of patient-reported depression (29%) and anxiety (34%), and (2) that unemployment was significantly associated with increased frequency of acute care encounters and hospital admissions per person. Research suggests that one promising category of non-pharmacological interventions for managing both the physical and affective components of pain is Mindfulness-based Interventions (MBIs; Thompson et al., 2010; Cox et al., 2013). The primary goal of this dissertation was thus to develop and test the feasibility, acceptability, and efficacy of a telephonic MBI for pain catastrophizing in persons with SCD and chronic pain.
Methods: First, a telephonic MBI was developed through an informal process that involved iterative feedback from patients, clinical experts in SCD and pain management, social workers, psychologists, and mindfulness clinicians. Through this process, relevant topics and skills were selected for adaptation in each MBI session. Second, a pilot randomized controlled trial was conducted to test the feasibility, acceptability, and efficacy of the telephonic MBI for pain catastrophizing in persons with SCD and chronic pain. Acceptability and feasibility were determined by assessment of recruitment, attrition, dropout, and refusal rates (including refusal reasons), along with semi-structured interviews with nine randomly selected patients at the end of the study. Participants completed assessments at baseline and Weeks 1, 3, and 6 to assess the efficacy of the intervention in decreasing pain catastrophizing and other pain-related outcomes.
Results: A telephonic MBI is feasible and acceptable for persons with SCD and chronic pain. Seventy-eight patients with SCD and chronic pain were approached, and 76% (N = 60) were enrolled and randomized. The MBI attendance rate (approximately 57% of participants completed at least four mindfulness sessions) was deemed acceptable, and participants who received the telephonic MBI described it in post-intervention interviews as acceptable and easy to access and consume. The amount of missing data was undesirable (MBI condition, 40%; control condition, 25%) but fell within the range of missing outcome data expected for an RCT with multiple follow-up assessments. Efficacy of the MBI on pain catastrophizing could not be determined because of the small sample size and the degree of missing data, but trajectory analyses conducted for the MBI condition alone trended in the expected direction, and pain catastrophizing approached statistical significance.
Conclusion: Overall, results showed that a telephonic group-based MBI is acceptable and feasible for persons with SCD and chronic pain. Though the study was neither able to determine treatment efficacy nor powered to detect a statistically significant difference between conditions, (1) participants described the intervention as acceptable, and (2) the observed effect sizes for the MBI condition indicated large effects of the MBI on pain catastrophizing, mental health, and physical health. Replication of this MBI study with a larger sample size, an active control group, and additional assessments at the end of each week (e.g., Week 1 through Week 6) is needed to determine treatment efficacy. Many lessons were learned that will guide the development of future studies, including which MBI strategies were most helpful, methods to encourage continued participation, and how to improve data capture.
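Since the abstract reports within-condition effect sizes rather than a between-group test, here is a minimal sketch of how such an effect size can be computed; the scores are synthetic placeholders, not the trial's data:

```python
import numpy as np

# Synthetic pre/post scores standing in for a pain catastrophizing measure
# (hypothetical values, not the trial's data).
rng = np.random.default_rng(42)
baseline = rng.normal(30.0, 8.0, size=20)
week6 = baseline - rng.normal(8.0, 6.0, size=20)   # assumed improvement

change = baseline - week6
cohens_d = change.mean() / change.std(ddof=1)      # within-group effect size
print(f"within-group Cohen's d = {cohens_d:.2f}")
```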
Abstract:
The effectiveness of an optimization algorithm can be reduced to its ability to navigate an objective function’s topology. Hybrid optimization algorithms combine various optimization algorithms under a single meta-heuristic so that the hybrid algorithm is more robust, computationally efficient, and/or accurate than its constituent algorithms. This thesis proposes a novel meta-heuristic that uses search vectors to select the constituent algorithm appropriate for a given objective function. The hybrid is shown to perform competitively against several existing hybrid and non-hybrid optimization algorithms over a set of three hundred test cases. This thesis also proposes a general framework for evaluating the effectiveness of hybrid optimization algorithms. Finally, this thesis presents an improved Method of Characteristics code with novel boundary conditions, which characterizes pipelines better than previous codes. This code is coupled with the hybrid optimization algorithm in order to optimize the operation of real-world piston pumps.
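To illustrate the general idea of a selection meta-heuristic (a minimal sketch only, not the thesis' search-vector method), the snippet below gives each constituent optimizer a short probe run and then spends the remaining evaluation budget on whichever made the most progress; all names and parameters are illustrative.

```python
import numpy as np

def hybrid_minimize(f, x0, bounds, budget=2000, probe=50, seed=None):
    """Toy hybrid meta-heuristic: probe each constituent optimizer briefly,
    then allocate the remaining budget to the most promising one."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x0 = np.asarray(x0, dtype=float)

    def random_search(x, n):              # exploratory constituent
        best_x, best_f = x, f(x)
        for _ in range(n):
            cand = rng.uniform(lo, hi)
            fc = f(cand)
            if fc < best_f:
                best_x, best_f = cand, fc
        return best_x, best_f

    def local_search(x, n, step=0.05):    # exploitative constituent
        best_x, best_f = x, f(x)
        for _ in range(n):
            cand = np.clip(best_x + step * (hi - lo) * rng.normal(size=x.size), lo, hi)
            fc = f(cand)
            if fc < best_f:
                best_x, best_f = cand, fc
        return best_x, best_f

    constituents = [random_search, local_search]
    probes = [alg(x0, probe) for alg in constituents]     # short probe runs
    gains = [f(x0) - fp for _, fp in probes]              # progress per constituent
    best = int(np.argmax(gains))                          # pick the winner
    x_best, _ = probes[best]
    return constituents[best](x_best, budget - probe * len(constituents))

# Usage: minimize a 2-D Rosenbrock test function.
rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
x_opt, f_opt = hybrid_minimize(rosen, x0=[0.0, 0.0], bounds=[(-2, 2), (-2, 2)], seed=0)
```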
Abstract:
This dissertation studies context-aware applications and the algorithms proposed for them at the client side. The required context-aware infrastructure is discussed in depth to illustrate that such an infrastructure collects the mobile user’s context information, registers service providers, derives the mobile user’s current context, distributes user context among context-aware applications, and provides tailored services. The proposed approach tries to strike a balance between the context server and mobile devices. Context acquisition is centralized at the server to ensure the usability of context information among mobile devices, while context reasoning remains at the application level. Hence, centralized context acquisition combined with distributed context reasoning is viewed as the better overall solution. The context-aware search application is designed and implemented at the server side. A new algorithm is proposed that takes user context profiles into consideration. By feeding back the dynamics of the system, any prior user selection is saved for further analysis so that it may contribute to the results of a subsequent search. On the basis of these developments at the server side, various solutions are provided at the client side. A software proxy component is set up for data collection. This research endorses the belief that the client-side proxy should contain the context reasoning component; implementing such a component lends credence to this belief, in that the context-aware applications are able to derive user context profiles. Furthermore, a context cache scheme is implemented to manage the cache on the client device in order to minimize processing requirements and the use of other resources (bandwidth, CPU cycles, power). Java and MySQL platforms are used to implement the proposed architecture and to test scenarios derived from users’ daily activities. To meet the practical demands of a testing environment without the heavy cost of establishing such a comprehensive infrastructure, a software simulation using the free Yahoo search API is provided as a means to evaluate the effectiveness of the design approach realistically. The integration of the Yahoo search engine into the context-aware architecture demonstrates how a context-aware application can meet user demands for tailored services and products in and around the user’s environment. The test results show that the overall design is highly effective, providing new features and enriching the mobile user’s experience through a broad scope of potential applications.
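As an illustration of the client-side caching idea (a minimal sketch; the dissertation's implementation uses Java and its exact cache policy is not specified here), the snippet below keeps recently derived context entries with a time-to-live and evicts the least recently used entry when the cache is full:

```python
import time
from collections import OrderedDict

class ContextCache:
    """Minimal client-side cache for derived user context (illustrative sketch).
    Entries expire after ttl seconds; the least recently used entry is evicted
    when the cache is full, so the device avoids re-querying the context
    server for every request."""
    def __init__(self, capacity=64, ttl=300.0):
        self.capacity, self.ttl = capacity, ttl
        self._store = OrderedDict()          # key -> (value, expiry_time)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        value, expires = item
        if time.monotonic() > expires:       # stale context: drop it
            del self._store[key]
            return None
        self._store.move_to_end(key)         # mark as recently used
        return value

    def put(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = (value, time.monotonic() + self.ttl)
        if len(self._store) > self.capacity: # evict least recently used
            self._store.popitem(last=False)

# Usage: cache a context item derived by the client-side proxy.
cache = ContextCache(capacity=16, ttl=60.0)
cache.put("user:location", {"lat": 45.42, "lon": -75.69})
profile = cache.get("user:location") or "fetch from context server"
```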
Abstract:
Calcification in many invertebrate species is predicted to decline due to ocean acidification. The potential effects of elevated CO2 and reduced carbonate saturation state on other species, such as fish, are less well understood. Fish otoliths (earbones) are composed of aragonite and thus might be susceptible either to the reduced availability of carbonate ions in seawater at low pH, or to changes in extracellular concentrations of bicarbonate and carbonate ions caused by acid-base regulation in fish exposed to high pCO2. We reared larvae of the clownfish Amphiprion percula from hatching to settlement at three pH(NBS) and pCO2 levels (control: ~pH 8.15 and 404 µatm CO2; intermediate: pH 7.8 and 1050 µatm CO2; extreme: pH 7.6 and 1721 µatm CO2) to test the possible effects of ocean acidification on otolith development. There was no effect of the intermediate treatment (pH 7.8 and 1050 µatm CO2) on otolith size, shape, symmetry between left and right otoliths, or otolith elemental chemistry compared with controls. However, in the more extreme treatment (pH 7.6 and 1721 µatm CO2), otolith area and maximum length were larger than in controls, although no other traits were significantly affected. Our results support the hypothesis that pH regulation in the otolith endolymph can lead to increased precipitation of CaCO3 in otoliths of larval fish exposed to elevated CO2, as proposed by an earlier study; however, our results also show that sensitivity varies considerably among species. Importantly, our results suggest that otolith development in clownfishes is robust to even the more pessimistic changes in ocean chemistry predicted to occur by 2100.
Abstract:
A recently developed novel biomass fuel pellet, the Q’ Pellet, offers significant improvements over conventional white pellets, with characteristics comparable to those of coal. The Q’ Pellet was initially created at bench scale using a proprietary die and punch design, in which the biomass was torrefied in-situ and then compressed. To bring the benefits of the Q’ Pellet to a commercial level, it must be capable of being produced in a continuous process at a competitive cost. A prototype machine was previously constructed in a first effort to assess continuous processing of the Q’ Pellet. The prototype torrefied biomass in a separate, ex-situ reactor and transported it into a rotary compression stage. Upon evaluation, parts of the prototype were found to be unsuccessful, requiring a redesign of the material transport method as well as the compression mechanism. A process was developed in which material was torrefied ex-situ and extruded in a pre-compression stage. The extruded biomass overcame multiple handling issues that had been experienced with un-densified biomass, facilitating efficient material transport. Biomass was extruded directly into a novel re-designed pelletizing die, which incorporated a removable cap, ejection pin and a die spring to accommodate a repeatable continuous process. Although after several uses the die required manual intervention due to minor design and manufacturing quality limitations, the system clearly demonstrated the capability of producing the Q’ Pellet in a continuous process. Q’ Pellets produced by the pre-compression method and pelletized in the re-designed die had an average dry-basis gross calorific value of 22.04 MJ/kg and a pellet durability index of 99.86%, and gained 6.2% of their initial mass following 24 hours submerged in water. This compares well with literature results of 21.29 MJ/kg, 100% pellet durability index and <5% mass increase in a water submersion test. These results indicate that the methods developed herein are capable of producing Q’ Pellets in a continuous process with fuel properties competitive with coal.
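The fuel metrics quoted above are simple ratios; a minimal sketch of how they are typically computed is given below, with hypothetical input masses and moisture content (not the thesis' measurements):

```python
def dry_basis_gcv(gcv_as_received_mj_kg: float, moisture_pct: float) -> float:
    """Convert as-received gross calorific value to a dry basis."""
    return gcv_as_received_mj_kg * 100.0 / (100.0 - moisture_pct)

def pellet_durability_index(mass_before_g: float, mass_after_tumbling_g: float) -> float:
    """PDI: percentage of pellet mass retained after a tumbling test."""
    return 100.0 * mass_after_tumbling_g / mass_before_g

def water_uptake_pct(initial_mass_g: float, mass_after_soak_g: float) -> float:
    """Mass gained after water submersion, as a percentage of initial mass."""
    return 100.0 * (mass_after_soak_g - initial_mass_g) / initial_mass_g

# Hypothetical example values (illustrative inputs only):
print(dry_basis_gcv(20.5, 7.0))                # as-received GCV corrected for moisture
print(pellet_durability_index(500.0, 499.3))   # ~99.86 %
print(water_uptake_pct(10.0, 10.62))           # ~6.2 %
```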
Abstract:
Sensors for real-time monitoring of environmental contaminants are essential for protecting ecosystems and human health. Refractive index sensing is a non-selective technique that can be used to measure almost any analyte. Miniaturized refractive index sensors, such as silicon-on-insulator (SOI) microring resonators, are one possible platform, but they require coatings selective to the analytes of interest. A homemade prism refractometer is reported and used to characterize the interactions between polymer films and liquid or vapour-phase analytes. A camera was used to capture both Fresnel reflection and total internal reflection within the prism. For thin films (d = 10 μm - 100 μm), interference fringes were also observed. Fourier analysis of the interferogram allowed simultaneous extraction of the average refractive index and film thickness with accuracies of ∆n = 1-7 × 10⁻⁴ and ∆d < 3-5%. The refractive indices of 29 common organic solvents as well as aqueous solutions of sodium chloride, sucrose, ethylene glycol, glycerol, and dimethylsulfoxide were measured at λ = 1550 nm. These measurements will be useful for future calibrations of near-infrared refractive index sensors. A mathematical model is presented in which the concentration of analyte adsorbed in a film can be calculated from the refractive index and thickness changes during uptake. This model can be used with Fickian diffusion models to measure the diffusion coefficients through the bulk film and at the film-substrate interface. The diffusion of water and other organic solvents into SU-8 epoxy was explored using refractometry, and the diffusion coefficient of water into SU-8 is presented. Exposure of soft-baked SU-8 films to acetone, acetonitrile and methanol resulted in rapid delamination. The diffusion of volatile organic compound (VOC) vapours into polydimethylsiloxane and polydimethyl-co-polydiphenylsiloxane polymers was also studied using refractometry. Diffusion and partition coefficients are reported for several analytes. As a model system, polydimethyl-co-diphenylsiloxane films were coated onto SOI microring resonators. After the development of data acquisition software, coated devices were exposed to VOCs and the refractive index response was assessed. More studies with other polymers are required to test the viability of this platform for environmental sensing applications.
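For the fringe-analysis step, here is a minimal numerical sketch of the underlying idea, assuming fringes recorded as a function of wavenumber at near-normal incidence and a film index already known (e.g., from the critical-angle position); the thesis' angle-resolved prism analysis differs in detail:

```python
import numpy as np

# Simulated thin-film interferogram: the reflected intensity oscillates in
# wavenumber with period 1/(2*n*d), so the dominant FFT frequency gives the
# optical thickness n*d. Film parameters below are assumed, not measured.
n_film, d_film = 1.57, 40e-6                      # assumed index and thickness
wavenumber = np.linspace(6.2e5, 6.6e5, 4096)      # 1/m, around lambda = 1550 nm
fringes = 0.5 + 0.4 * np.cos(2 * np.pi * (2 * n_film * d_film) * wavenumber)

spectrum = np.abs(np.fft.rfft(fringes - fringes.mean()))
freqs = np.fft.rfftfreq(wavenumber.size, d=wavenumber[1] - wavenumber[0])
optical_thickness = freqs[np.argmax(spectrum)]    # peak location ~ 2*n*d (metres)
d_estimate = optical_thickness / (2 * n_film)
print(f"recovered film thickness ~ {d_estimate * 1e6:.1f} um")
```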
Abstract:
As identified by Griffin (1997) and Kahn (2012), manufacturing organisations typically improve their market position by accelerating their product development (PD) cycles. One method for achieving this is to reduce the time taken to design, test and validate new products, so that they can reach the end customer before the competition. This paper adds to existing research on PD testing procedures by reporting on an exploratory investigation carried out in a UK-based manufacturing plant. We explore the organisational and managerial factors that contribute to the time spent on testing of new products during development. The investigation consisted of three sections, viz. observations and process modelling, utilisation metrics, and a questionnaire-based investigation, from which a framework to improve testing and reduce the PD cycle time is proposed. This research focuses specifically on improving the utilisation of product testing facilities and the links to their main internal stakeholders - PD engineers.
Abstract:
Large efforts are on-going within the EU to prepare the Marine Strategy Framework Directive’s (MSFD) assessment of the environmental status of the European seas. This assessment will only be as good as the indicators chosen to monitor the eleven descriptors of good environmental status (GEnS). An objective and transparent framework to determine whether chosen indicators actually support the aims of this policy is, however, not yet in place. Such frameworks are needed to ensure that the limited resources available to this assessment optimize the likelihood of achieving GEnS within collaborating states. Here, we developed a hypothesis-based protocol to evaluate whether candidate indicators meet quality criteria explicit to the MSFD, which the assessment community aspires to. Eight quality criteria are distilled from existing initiatives, and a testing and scoring protocol for each of them is presented. We exemplify its application in three worked examples, covering indicators for three GEnS descriptors (1, 5 and 6), various habitat components (seaweeds, seagrasses, benthic macrofauna and plankton), and assessment regions (Danish, Lithuanian and UK waters). We argue that this framework provides a necessary, transparent and standardized structure to support the comparison of candidate indicators, and the decision-making process leading to indicator selection. Its application could help identify potential limitations in currently available candidate metrics and, in such cases, help focus the development of more adequate indicators. Use of such standardized approaches will facilitate the sharing of knowledge gained across the MSFD parties despite context-specificity across assessment regions, and support the evidence-based management of European seas.
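The scoring step of such a protocol can be illustrated with a minimal sketch (the criterion names, scores, and candidate indicators below are placeholders, not the paper's eight criteria or its worked examples):

```python
# Placeholder criterion names -- not the paper's eight MSFD quality criteria.
CRITERIA = ["scientific_basis", "sensitivity", "specificity", "responsiveness",
            "precision", "cost_effectiveness", "existing_data", "policy_relevance"]

def score_indicator(scores: dict) -> float:
    """Average a 0-3 score over all criteria; a missing criterion scores 0."""
    return sum(scores.get(c, 0) for c in CRITERIA) / len(CRITERIA)

# Hypothetical candidate indicators with partial criterion scores.
candidates = {
    "candidate_A": {"scientific_basis": 3, "sensitivity": 2, "existing_data": 3},
    "candidate_B": {"scientific_basis": 3, "precision": 2, "policy_relevance": 1},
}
ranking = sorted(candidates, key=lambda name: score_indicator(candidates[name]),
                 reverse=True)
print(ranking)   # candidates ordered from highest to lowest aggregate score
```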
Abstract:
With the development of information technology, the theory and methodology of complex networks have been introduced into language research, representing the language system as a complex network composed of nodes and edges so that language structure can be analysed quantitatively. The development of dependency grammar provides theoretical support for the construction of a treebank corpus, making a statistical analysis of complex networks possible. This paper introduces the theory and methodology of complex networks and builds dependency syntactic networks based on the treebank of speeches from the EEE-4 oral test. By analysing the overall characteristics of the networks, including the number of edges, the number of nodes, the average degree, the average path length, the network centrality and the degree distribution, it aims to find potential differences and similarities in the networks between various grades of speaking performance. Through clustering analysis, this research intends to demonstrate the discriminating power of the network parameters and provide a potential reference for scoring speaking performance.
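As an illustration of the network measures listed above (a minimal sketch using an invented sentence, not the EEE-4 treebank), the snippet below builds a small dependency network with networkx and computes each global characteristic:

```python
import networkx as nx
from collections import Counter

# Invented (head, dependent) pairs standing in for one speech's dependency tree.
dependencies = [("likes", "She"), ("likes", "music"), ("music", "classical"),
                ("likes", "and"), ("likes", "plays"), ("plays", "piano")]

G = nx.Graph()                         # undirected view; a DiGraph also works
G.add_edges_from(dependencies)

n_nodes, n_edges = G.number_of_nodes(), G.number_of_edges()
avg_degree = 2 * n_edges / n_nodes
avg_path_length = nx.average_shortest_path_length(G)   # needs a connected graph
centrality = nx.degree_centrality(G)
degree_distribution = Counter(d for _, d in G.degree())

print(n_nodes, n_edges, avg_degree, avg_path_length)
print(max(centrality, key=centrality.get), dict(degree_distribution))
```

A feature vector of such measures, one per speech, could then be passed to a clustering routine to compare grade bands.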
Abstract:
Executive functions (EF) such as self-monitoring, planning, and organizing are known to develop through childhood and adolescence. They are of potential importance for learning and school performance. Earlier research into the relation between EF and school performance did not provide clear results, possibly because confounding factors such as educational track, boy-girl differences, and parental education were not taken into account. The present study therefore investigated the relation between executive function tests and school performance in a highly controlled sample of 173 healthy adolescents aged 12–18. Only students in the pre-university educational track were included, and the performance of boys was compared to that of girls. Results showed that there was no relation between the report marks obtained and performance on executive function tests, notably the Sorting Test and the Tower Test of the Delis-Kaplan Executive Functions System (D-KEFS). Likewise, no relation was found between the report marks and the scores on the Behavior Rating Inventory of Executive Function—Self-Report Version (BRIEF-SR) after these were controlled for grade, sex, and level of parental education. The findings indicate that executive functioning as measured with widely used instruments such as the BRIEF-SR does not predict school performance of adolescents in pre-university education any better than a student's grade, sex, and level of parental education.
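As a sketch of the kind of hierarchical analysis described (synthetic data and made-up variable names, not the study's dataset), one can test whether an EF score adds explanatory power beyond grade, sex, and parental education:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic sample of 173 adolescents (placeholder values only).
rng = np.random.default_rng(0)
n = 173
df = pd.DataFrame({
    "marks": rng.normal(7.0, 1.0, n),        # report marks
    "grade": rng.integers(1, 7, n),          # school year
    "sex": rng.choice(["boy", "girl"], n),
    "parent_edu": rng.integers(1, 4, n),     # parental education level
    "ef_score": rng.normal(0.0, 1.0, n),     # e.g. a BRIEF-SR-style score
})

base = smf.ols("marks ~ grade + C(sex) + parent_edu", data=df).fit()
full = smf.ols("marks ~ grade + C(sex) + parent_edu + ef_score", data=df).fit()
delta_r2 = full.rsquared - base.rsquared     # incremental variance explained by EF
print(f"Delta R^2 for EF score: {delta_r2:.4f}, p = {full.pvalues['ef_score']:.3f}")
```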
Abstract:
The ability to project oneself into the future to pre-experience an event is referred to as episodic future thinking (Atance & O’Neill, 2001). Only a relatively small number of studies have attempted to measure this ability in pre-school aged children (Atance & Meltzoff, 2005; Busby & Suddendorf, 2005a, 2005b, 2010; Russell, Alexis, & Clayton, 2010). Perhaps the most successful method is that used by Russell et al. (2010). In this task, 3- to 5-year-olds played a game of blow football on one end of a table. Afterwards, children were asked to select tools that would enable them to play the same game tomorrow from the opposite, unreachable, side of the table. Results indicated that only 5-year-olds were capable of selecting the right objects for future use more often than would be expected by chance. Above-chance performance was observed in this older group even though most children failed the task, because there was a low probability of selecting the correct 2 objects from a choice of 6 by chance. This study aimed to identify the age at which children begin to consistently pass this type of task. Three different tasks were designed in which children played a game on one side of a table, and then were asked to choose a tool to play a similar game on the other side of the table the next day. For example, children used a toy fishing rod to catch magnetic fish on one side of the table; playing the same game from the other side of the table required a different type of fishing rod. At test, children chose between just 2 objects: the tool they had already used, which would not work on the other side, and a different tool that they had not used before but which was suitable for the other side of the table. Experiment 1: Forty-eight 4-year-olds (M = 53.6 months, SD = 2.9) took part. These children were assigned to one of two conditions: a control condition (present-self), where the key test questions were asked in the present tense, and an experimental condition (future-self), where the questions were in the future tense. Surprisingly, the results showed that both groups of 4-year-olds selected the correct tool at above-chance levels (Table 1 shows the mean number of correct answers out of three). However, the children could see the apparatus when they answered the test questions and so perhaps answered them correctly without imagining the future. Experiment 2: Twenty-four 4-year-olds (M = 53.7, SD = 3.1) participated. Pre-schoolers in this study experienced one condition: future-self looking-away. In this condition children were asked to turn their backs to the games when answering the test questions, which were in the future tense. Children again performed above chance levels on all three games. Contrary to the findings of Russell et al. (2010), our results suggest that episodic future thinking skills could be present in 4-year-olds, assuming that this is what is measured by the tasks.
Table 1. Mean number of correct answers across the three games in Experiments 1 and 2 (N = 24 in each experimental condition)
Experimental Condition | Mean Correct | Standard Deviation | Statistical Significance
Exp. 1 (present-self, look) – 2 items | 2.75 | 0.68 | p < 0.001
Exp. 1 (future-self, look) – 2 items | 2.79 | 0.42 | p < 0.001
Exp. 2 (future-self, away) – 2 items | 2.33 | 0.64 | p < 0.001
Exp. 3 (future-self, away) – 3 items | 1.21 | 0.98 | p = 0.157
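For reference, above-chance performance in a two-choice, three-trial task of this kind is often tested against a chance level of 1.5 correct per child; the sketch below uses synthetic scores, not the study's data:

```python
import numpy as np
from scipy import stats

# Synthetic per-child scores for one hypothetical condition: 24 children,
# 3 trials each, 2 objects per trial (chance = 1.5 correct per child).
rng = np.random.default_rng(1)
correct_per_child = rng.binomial(n=3, p=0.9, size=24)

# One-sample t-test of the group mean against the chance level of 1.5.
result = stats.ttest_1samp(correct_per_child, popmean=1.5, alternative="greater")
print(f"mean = {correct_per_child.mean():.2f}, p = {result.pvalue:.4f}")
```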
Abstract:
Many engineers currently in professional practice will have gained a degree-level qualification that involved studying a curriculum heavy with mathematics and engineering science. While this knowledge is vital to the engineering design process, so too is manufacturing knowledge if the resulting designs are to be both technically and commercially viable.
The methodology advanced by the CDIO Initiative aims to improve engineering education by teaching in the context of Conceiving, Designing, Implementing and Operating products, processes or systems. A key element of this approach is the use of Design-Build-Test (DBT) projects as the core of an integrated curriculum. This approach facilitates the development of professional skills as well as the application of technical knowledge and skills developed in other parts of the degree programme. It also changes the role of the lecturer to that of facilitator/coach in an active learning environment in which students gain concrete experiences that support their development.
The case study herein describes Mechanical Engineering undergraduate student involvement in the manufacture and assembly of concept and functional prototypes of a folding bicycle.
Abstract:
Wireless sensor networks (WSNs) differ from conventional distributed systems in many aspects. The resource limitations of sensor nodes, the ad-hoc communication and topology of the network, coupled with an unpredictable deployment environment, are difficult non-functional constraints that must be carefully taken into account when developing software systems for a WSN. Thus, more research needs to be done on designing, implementing and maintaining software for WSNs. This thesis aims to contribute to research in this area by presenting an approach to WSN application development that improves the reusability, flexibility, and maintainability of the software. Firstly, we present a programming model and software architecture aimed at describing WSN applications independently of the underlying operating system and hardware. The proposed architecture is described and realized using the Model-Driven Architecture (MDA) standard in order to achieve satisfactory levels of encapsulation and abstraction when programming sensor nodes. In addition, we study different non-functional constraints of WSN applications and propose two approaches to optimizing the application to satisfy these constraints. A real prototype framework was built to demonstrate the solutions developed in the thesis. The framework implements the programming model and the multi-layered software architecture as components. A graphical interface, code generation components and supporting tools are also included to help developers design, implement, optimize, and test the WSN software. Finally, we evaluate and critically assess the proposed concepts. Two case studies are provided to support the evaluation. The first case study, a framework evaluation, is designed to assess the ease with which novice and intermediate users can develop correct and power-efficient WSN applications, the portability achieved by developing applications at a high level of abstraction, and the overhead incurred by using the framework in terms of the footprint and executable code size of the application. In the second case study, we discuss the design, implementation and optimization of a real-world application named TempSense, in which a sensor network is used to monitor the temperature within an area.
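To illustrate the kind of hardware-independent programming abstraction the thesis advocates (a minimal sketch only; the framework's actual components, languages, and APIs are not reproduced here), the snippet below separates a platform-independent sampling task from an interchangeable driver layer:

```python
from abc import ABC, abstractmethod
import random

class TemperatureDriver(ABC):
    """Platform-specific layer: only this class changes per hardware target."""
    @abstractmethod
    def read_celsius(self) -> float: ...

class SimulatedDriver(TemperatureDriver):
    """Stand-in driver so the application logic can run off-node."""
    def read_celsius(self) -> float:
        return 20.0 + random.uniform(-2.0, 2.0)

class SamplingTask:
    """Platform-independent application layer: sample and report."""
    def __init__(self, driver: TemperatureDriver, threshold: float = 30.0):
        self.driver, self.threshold = driver, threshold

    def step(self) -> float:
        value = self.driver.read_celsius()
        if value > self.threshold:            # report only significant readings
            print(f"ALERT {value:.1f} C")     # stand-in for a radio transmission
        return value

# Usage: the same SamplingTask runs unchanged on any driver implementation.
task = SamplingTask(SimulatedDriver(), threshold=21.0)
samples = [task.step() for _ in range(5)]
```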