980 results for Residual-based tests


Relevance: 40.00%

Abstract:

Intracochlear trauma from surgical insertion of bulky electrode arrays and inadequate pitch perception are areas of concern with current hand-assembled commercial cochlear implants. Parylene thin-film arrays with higher electrode densities and lower profiles are a potential solution, but they lack rigidity and hence depend on manually fabricated, permanently attached, polyethylene terephthalate (PET) tubing-based bulky backing devices. As a solution, we investigated a new backing device with two sub-systems. The first sub-system is a thin poly(lactic acid) (PLA) stiffener that will be embedded in the parylene array. The second sub-system is an attaching and detaching mechanism, utilizing a poly(N-vinylpyrrolidone)-block-poly(d,l-lactide) (PVP-b-PDLLA) copolymer-based biodegradable and water-soluble adhesive, that will help to retract the PET insertion tool after implantation. As a proof of concept of sub-system one, a microfabrication process for patterning PLA stiffeners embedded in parylene has been developed. Conventional hot embossing, mechanical micromachining, and standard cleanroom processes were integrated for patterning fully released and discrete stiffeners coated with parylene. The released embedded stiffeners were thermoformed to demonstrate that imparting perimodiolar shapes to stiffener-embedded arrays will be possible. The developed process, when integrated with the array fabrication process, will allow fabrication of stiffener-embedded arrays in a single process. As a proof of concept of sub-system two, the feasibility of the attaching and detaching mechanism was demonstrated by adhering 1x and 1.5x scale PET tube-based insertion tools to PLA stiffeners embedded in parylene using the copolymer adhesive. The attached devices survived qualitative adhesion tests, thermoforming, and flexing. The viability of the detaching mechanism was tested by aging the assemblies in vitro in phosphate buffer solution. The average detachment times, 2.6 minutes and 10 minutes for 1x and 1.5x scale devices respectively, were found to be clinically relevant with respect to the reported array insertion times during surgical implantation. Eventually, the stiffener-embedded arrays would not need to be permanently attached to current insertion tools, which are left behind after implantation and congest the cochlear scala tympani chamber. Finally, a simulation-based approach for accelerated failure analysis of PLA stiffeners and characterization of the PVP-b-PDLLA copolymer adhesive has been explored. The residual functional life of embedded PLA stiffeners exposed to body fluid, and thereby subjected to degradation and erosion, has been estimated by simulating PLA stiffeners with different parylene coating failure types and different PLA types for a given parylene coating failure type. For characterizing the PVP-b-PDLLA copolymer adhesive, several formulations of the copolymer adhesive were simulated and compared based on the insertion tool detachment times predicted from the dissolution, degradation, and erosion behavior of the simulated adhesive formulations. Results indicate that these simulation-based approaches could be used to reduce the total number of time-consuming and expensive in vitro tests that must be conducted.
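
The simulation-based characterization is only summarized above; as a loose illustration of the general idea, the sketch below (all parameter values and the linear-erosion assumption are hypothetical) models the water-soluble adhesive as a layer eroding at a constant rate and reports a predicted insertion-tool detachment time for two made-up formulations.

```python
# Minimal illustrative sketch (not the thesis's model): treat the water-soluble
# adhesive as a layer eroding at a constant rate and predict the time at which
# the insertion tool detaches. All parameter values below are hypothetical.

def detachment_time_min(initial_thickness_um: float, erosion_rate_um_per_min: float) -> float:
    """Minutes for the adhesive layer to erode completely (linear-erosion assumption)."""
    if erosion_rate_um_per_min <= 0:
        raise ValueError("erosion rate must be positive")
    return initial_thickness_um / erosion_rate_um_per_min

# Hypothetical formulations compared by predicted detachment time.
formulations = {"formulation A": (20.0, 5.0), "formulation B": (40.0, 5.0)}
for name, (thickness_um, rate_um_per_min) in formulations.items():
    t = detachment_time_min(thickness_um, rate_um_per_min)
    print(f"{name}: predicted detachment after {t:.1f} min")
```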

Relevance: 40.00%

Abstract:

Although DMTA is nowadays one of the most widely used techniques for characterizing the thermo-mechanical behaviour of polymers, it is only effective for small-amplitude oscillatory tests and is limited to single-frequency analysis (the linear regime). In this thesis work, a Fourier transform based experimental system has proven to give insight into structural and chemical changes in specimens during large-amplitude oscillatory tests by exploiting multi-frequency spectral analysis, making it a more sensitive tool than the classical linear approach. The test campaign focused on three test typologies: strain sweep tests, damage investigation, and temperature sweep tests.
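
As a hedged illustration of the multi-frequency analysis referred to above (not the thesis's actual acquisition code), the sketch below applies an FFT to a synthetic nonlinear stress response under large-amplitude oscillatory strain and reports the third-to-first harmonic ratio, the kind of signature a single-frequency linear analysis cannot resolve.

```python
# Illustrative sketch (synthetic data, hypothetical parameters): extracting higher
# harmonics from a large-amplitude oscillatory response via the FFT. Odd harmonics
# above the excitation frequency indicate nonlinear behaviour that a
# single-frequency linear analysis cannot resolve.
import numpy as np

fs = 1000.0                      # sampling frequency, Hz
t = np.arange(0, 10, 1 / fs)     # 10 s record
f0 = 1.0                         # excitation frequency, Hz

# Synthetic stress response: fundamental plus a weak third harmonic (nonlinearity).
stress = np.sin(2 * np.pi * f0 * t) + 0.05 * np.sin(2 * np.pi * 3 * f0 * t)

spectrum = np.abs(np.fft.rfft(stress))
freqs = np.fft.rfftfreq(stress.size, d=1 / fs)

def amplitude_at(f):
    """Spectral amplitude at the bin closest to frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

i3_over_i1 = amplitude_at(3 * f0) / amplitude_at(f0)
print(f"I3/I1 = {i3_over_i1:.3f}")   # ~0.05, the imposed nonlinearity
```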

Relevance: 30.00%

Abstract:

Principal Topic: It is well known that most new ventures suffer from a significant lack of resources, which increases the risk of failure (Shepherd, Douglas and Shanley, 2000) and makes it difficult to attract stakeholders and financing for the venture (Bhide & Stevenson, 1999). The Resource-Based View (RBV) (Barney, 1991; Wernerfelt, 1984) is a dominant theoretical base increasingly drawn on within strategic management. While theoretical contributions applying RBV in the domain of entrepreneurship can arguably be traced back to Penrose (1959), there has been renewed attention recently (e.g. Alvarez & Busenitz, 2001; Alvarez & Barney, 2004). This said, empirical work is in its infancy. In part, this may be due to a lack of well-developed measuring instruments for testing ideas derived from RBV. The purpose of this study is to develop measurement scales that can serve to assist such empirical investigations. In so doing we will try to overcome three deficiencies in current empirical measures used for the application of RBV to the entrepreneurship arena. First, measures for resource characteristics and configurations associated with typical competitive advantages found in entrepreneurial firms need to be developed. These include such things as alertness and industry knowledge (Kirzner, 1973), flexibility (Ebben & Johnson, 2005), strong networks (Lee et al., 2001) and, within knowledge-intensive contexts, unique technical expertise (Wiklund and Shepherd, 2003). Second, the RBV has the important limitations of being relatively static and modelled on large, established firms. In that context, traditional RBV focuses on competitive advantages. However, newly established firms often face disadvantages, especially those associated with the liabilities of newness (Aldrich & Auster, 1986). It is therefore important in entrepreneurial contexts to expand to an investigation of responses to competitive disadvantage through an RBV lens. Conversely, recent research has suggested that resource constraints actually have a positive effect on firm growth and performance under some circumstances (e.g., George, 2005; Katila & Shane, 2005; Mishina et al., 2004; Mosakowski, 2002; cf. also Baker & Nelson, 2005). Third, current empirical applications of RBV measure levels or amounts of particular resources available to a firm. They infer that these resources deliver firms competitive advantage by establishing a relationship between these resource levels and performance (e.g. via regression on profitability). However, there is the opportunity to directly measure the characteristics of resource configurations that deliver competitive advantage, such as Barney's well-known VRIO (Valuable, Rare, Inimitable and Organized) framework (Barney, 1997). Key Propositions and Methods: The aim of our study is to develop and test scales for measuring resource advantages (and disadvantages) and inimitability for entrepreneurial firms. The study proceeds in three stages. The first stage developed our initial scales based on earlier literature. Where possible, we adapted scales based on previous work. The first block of the scales related to the level of resource advantages and disadvantages. Respondents were asked the degree to which each resource category represented an advantage or disadvantage relative to other businesses in their industry on a 5-point response scale: Major Disadvantage, Slight Disadvantage, No Advantage or Disadvantage, Slight Advantage and Major Advantage. Items were developed as follows.
Network capabilities (3 items) were adapted from Madsen, Alsos, Borch, Ljunggren and Brastad (2006). Knowledge resources in marketing expertise / customer service (3 items) and technical expertise (3 items) were adapted from Wiklund and Shepherd (2003). Flexibility (2 items) and costs (4 items) were adapted from JIBS B97. New scales were developed for industry knowledge / alertness (3 items) and product / service advantages. The second block asked the respondent to nominate the most important resource advantage (and disadvantage) of the firm. For the advantage, they were then asked four questions to determine how easy it would be for other firms to imitate and/or substitute this resource, on a 5-point Likert scale. For the disadvantage, they were asked corresponding questions related to overcoming this disadvantage. The second stage involved two pre-tests of the instrument to refine the scales. The first was an online convenience sample of 38 respondents. The second pre-test was a telephone interview with a random sample of 31 nascent firms and 47 young firms (< 3 years in operation) generated using a PSED method of randomly calling households (Gartner et al., 2004). Several items were dropped or reworded based on the pre-tests. The third stage (currently in progress) is part of Wave 1 of CAUSEE (nascent firms) and FEDP (young firms), a PSED-type study being conducted in Australia. The scales will be tested and analysed with random samples of approximately 700 nascent and young firms respectively. In addition, a judgement sample of approximately 100 high-potential businesses in each category will be included. Findings and Implications: The results of the main study (stage 3; data collection is currently in progress) will allow comparison of the level of resource advantage / disadvantage across various sub-groups of the population. Of particular interest will be a comparison of the high-potential firms with the random sample. Based on the smaller pre-tests (N=38 and N=78), the factor structure of the items confirmed the distinctiveness of the constructs. The reliabilities are within an acceptable range: Cronbach's alpha ranged from 0.701 to 0.927. The study will provide an opportunity for researchers to better operationalize RBV theory in studies within the domain of entrepreneurship. This is a fundamental requirement for the ability to test hypotheses derived from RBV in systematic, large-scale research studies.
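
For readers unfamiliar with the reliability statistic quoted above, the following sketch shows how Cronbach's alpha is computed from a block of Likert-scale items; the response matrix and item count are invented and unrelated to the CAUSEE/FEDP instrument.

```python
# Illustrative computation of Cronbach's alpha for a block of Likert items.
# The response matrix below is invented; rows are respondents, columns are items.
import numpy as np

responses = np.array([
    [4, 5, 4],
    [2, 3, 2],
    [5, 5, 4],
    [3, 3, 3],
    [4, 4, 5],
])  # 5 respondents x 3 items, scored 1-5

k = responses.shape[1]
item_variances = responses.var(axis=0, ddof=1)         # variance of each item
total_variance = responses.sum(axis=1).var(ddof=1)     # variance of the summed scale
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.3f}")
```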

Relevance: 30.00%

Abstract:

The effective management of bridge stock involves making decisions as to whether to repair, remedy, or do nothing, taking into account the financial and service-life implications. Such decisions require a reliable diagnosis of the cause of distress and an understanding of the likely future degradation. Such diagnoses are based on a combination of visual inspections, laboratory tests on samples, and expert opinions. In addition, the choice of appropriate laboratory tests requires an understanding of the degradation mechanisms involved. Under these circumstances, the use of expert systems or evaluation tools developed from “real-time” case studies provides a promising solution in the absence of expert knowledge. This paper addresses these issues in bridge infrastructure management in Queensland, Australia. Bridges affected by alkali-silica reaction and chloride-induced corrosion have been investigated and the results presented using a mind-mapping tool. The analysis highlights that several levels of rules are required to assess the mechanism causing distress. The systematic development of a rule-based approach is presented. The approach has been applied to a case study bridge, and the preliminary results are satisfactory.
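
The paper's actual knowledge base is not reproduced here; purely as a schematic of a multi-level rule-based assessment, the sketch below chains simple if-then rules over hypothetical inspection and laboratory observations to suggest a likely distress mechanism (the field names, rules and chloride threshold are placeholders).

```python
# Schematic rule-based sketch (hypothetical rules and field names), illustrating how
# several levels of rules can be chained: visual symptoms first, then laboratory
# confirmation, to suggest a likely distress mechanism.

def diagnose(obs: dict) -> str:
    # Level 1: visual inspection symptoms
    map_cracking = obs.get("map_cracking", False)
    cracks_along_bars = obs.get("cracks_along_reinforcement", False)
    rust_staining = obs.get("rust_staining", False)

    # Level 2: laboratory tests on samples
    reactive_aggregate = obs.get("petrography_reactive_aggregate", False)
    chloride_at_bar = obs.get("chloride_content_percent", 0.0) > 0.06  # placeholder threshold

    if map_cracking and reactive_aggregate:
        return "Alkali-silica reaction (ASR) likely"
    if cracks_along_bars and rust_staining and chloride_at_bar:
        return "Chloride-induced corrosion likely"
    return "Inconclusive - further testing or expert review required"

print(diagnose({"map_cracking": True, "petrography_reactive_aggregate": True}))
```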

Relevance: 30.00%

Abstract:

Reinforced concrete structures are susceptible to a variety of deterioration mechanisms, including creep and shrinkage, alkali-silica reaction (ASR), carbonation, and corrosion of the reinforcement. These deterioration problems can affect the integrity and load-carrying capacity of the structure. Substantial research has been dedicated to these various mechanisms, aiming to identify the causes, reactions, accelerants, retardants, and consequences. This has improved our understanding of the long-term behaviour of reinforced concrete structures. However, the strengthening of reinforced concrete structures for durability has to date been mainly undertaken after expert assessment of field data, followed by the development of a scheme to both terminate continuing degradation, by separating the structure from the environment, and strengthen the structure. The process does not include any significant consideration of the residual load-bearing capacity of the structure or the highly variable nature of estimates of such remaining capacity. Development of performance curves for deteriorating bridge structures has not been attempted because of the difficulty of developing a model when the input parameters have extremely large variability. This paper presents a framework developed for an asset management system which assesses residual capacity and identifies the most appropriate rehabilitation method for a given reinforced concrete structure exposed to aggressive environments. In developing the framework, several industry consultation sessions were conducted to identify the input data required, the research methodology, and the output knowledge base. Capturing expert opinion in a useable knowledge base requires development of a rule-based formulation, which can subsequently be used to model the reliability of the performance curve of a reinforced concrete structure exposed to a given environment.
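
As a loose illustration of why highly variable inputs make performance curves difficult, the following sketch (an entirely hypothetical model with invented parameters, not the framework's formulation) propagates uncertainty in a degradation rate through a simple linear capacity-loss model by Monte Carlo sampling.

```python
# Hypothetical Monte Carlo sketch: residual capacity over time when the degradation
# rate is highly uncertain. The linear loss model and all parameters are invented,
# purely to illustrate the spread that expert-elicited inputs can produce.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(0, 51)

# Degradation rate (% capacity lost per year) with large variability.
rates = rng.lognormal(mean=np.log(0.5), sigma=0.6, size=5000)

# Residual capacity (% of original) for each sampled rate over time.
capacity = np.clip(100.0 - np.outer(rates, years), 0.0, 100.0)

p5, p50, p95 = np.percentile(capacity, [5, 50, 95], axis=0)
print(f"At 30 years: median {p50[30]:.0f}%, 90% band {p5[30]:.0f}-{p95[30]:.0f}% of original capacity")
```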

Relevance: 30.00%

Abstract:

One of the key issues facing public asset owners is the decision to refurbish aged built assets. This decision requires an assessment of the “remaining service life” of the key components in a building. The remaining service life depends significantly on the existing condition of the asset and on future degradation patterns, considering durability and functional obsolescence. Recently developed methods for residual service life modelling require sophisticated data that are not readily available. Most of the data available are in the form of reports prepared prior to undertaking major repairs or in the form of sessional audit reports. Valuable information from these available sources can serve as benchmarks for estimating the reference service life. The authors have acquired such information from a public asset building in Melbourne. Using this information, the residual service life of a case study building façade has been estimated in this paper based on state-of-the-art approaches. These estimates have been evaluated against expert opinion. Though the results are encouraging, it is clear that the state-of-the-art methodologies can only provide meaningful estimates when data of sufficient level and quality are available. This investigation resulted in the development of a new framework for maintenance that integrates condition assessment procedures and the factors influencing residual service life.
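
The state-of-the-art approaches are not spelled out above; one widely cited option for this kind of estimate is the ISO 15686 factor method, sketched below with entirely hypothetical factor values: the estimated service life is the reference service life scaled by modifying factors, and the residual life is whatever remains after the building's current age.

```python
# Hedged sketch of a factor-method style estimate (in the spirit of ISO 15686):
# estimated service life = reference service life x product of modifying factors.
# The reference life, factor values, and building age below are all hypothetical.

def estimated_service_life(reference_life_years: float, factors: dict) -> float:
    esl = reference_life_years
    for value in factors.values():
        esl *= value
    return esl

factors = {
    "component quality": 1.0,
    "design level": 0.9,
    "work execution": 1.0,
    "indoor environment": 1.0,
    "outdoor environment": 0.8,   # aggressive exposure
    "usage conditions": 1.0,
    "maintenance level": 1.1,
}

esl = estimated_service_life(reference_life_years=40.0, factors=factors)
current_age = 25.0
print(f"Estimated service life: {esl:.1f} years; residual: {max(esl - current_age, 0):.1f} years")
```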

Relevance: 30.00%

Abstract:

Objectives. We tested predictions from the elaborated intrusion (EI) theory of desire, which distinguishes intrusive thoughts from elaborations and emphasizes the importance of imagery. Secondarily, we undertook preliminary evaluations of the Alcohol Craving Experience (ACE) questionnaire, a new measure based on EI theory. Methods. Participants (N = 232) were in correspondence-based treatment trials for alcohol abuse or dependence. The study used retrospective reports obtained early in treatment using the ACE, and daily self-monitoring of urges, craving, mood and alcohol consumption. Results. The ACE displayed high internal consistency and test–retest reliability and sound relationships with self-monitored craving, and was related to baseline alcohol dependence, but not to consumption. Imagery during craving was experienced by 81% of respondents, with 2.3 senses involved on average. More frequent imagery was associated with longer episode durations and stronger craving. Transient intrusive thoughts were reported by 87% of respondents, and were more common among those who frequently attempted to stop alcohol cognitions. Associations between average daily craving and weekly consumption were seen. Depression and negative mood were associated with more frequent, stronger and longer-lasting desires for alcohol. Conclusions. Results supported the distinction between automatic and controlled processes in craving, together with the importance of craving imagery. They were also consistent with prediction of consumption from cross-situational averages of craving, and with positive associations between craving and negative mood. However, this study's retrospective reporting and correlational design require that its results be interpreted cautiously. Research using ecological momentary measures and laboratory manipulations is needed before confident inferences about causality can be made.

Relevance: 30.00%

Abstract:

An information filtering (IF) system monitors an incoming document stream to find the documents that match the information needs specified by user profiles. Learning to use the user profiles effectively is one of the most challenging tasks when developing an IF system. With the document selection criteria better defined based on the users' needs, filtering large streams of information can be more efficient and effective. To learn the user profiles, term-based approaches have been widely used in the IF community because of their simplicity and directness. Term-based approaches are relatively well established. However, these approaches have problems when dealing with polysemy and synonymy, which often lead to an information overload problem. Recently, pattern-based approaches (or Pattern Taxonomy Models (PTM) [160]) have been proposed for IF by the data mining community. These approaches are better at capturing semantic information and have shown encouraging results for improving the effectiveness of IF systems. On the other hand, pattern discovery from large data streams is not computationally efficient. These approaches also have to deal with low-frequency pattern issues. The measures used by the data mining techniques (for example, “support” and “confidence”) to learn the profile have turned out to be unsuitable for filtering, and can lead to a mismatch problem. This thesis uses rough set-based (term-based) reasoning and pattern mining in a unified framework for information filtering to overcome the aforementioned problems. The system consists of two stages: a topic filtering stage and a pattern mining stage. The topic filtering stage is intended to minimize information overload by filtering out the most likely irrelevant information based on the user profiles. A novel user-profile learning method and a theoretical model of threshold setting have been developed using rough set decision theory. The second stage (pattern mining) aims at solving the problem of information mismatch. This stage is precision-oriented. A new document-ranking function has been derived by exploiting the patterns in the pattern taxonomy. The most likely relevant documents are assigned higher scores by the ranking function. Because a relatively small number of documents is left after the first stage, the computational cost is markedly reduced; at the same time, pattern discovery yields more accurate results. The overall performance of the system was improved significantly. The new two-stage information filtering model has been evaluated by extensive experiments. Tests were based on well-known IR benchmarking processes, using the latest version of the Reuters dataset, namely, the Reuters Corpus Volume 1 (RCV1). The performance of the new two-stage model was compared with both term-based and data mining-based IF models. The results demonstrate that the proposed information filtering system significantly outperforms the other IF systems, such as the traditional Rocchio IF model, the state-of-the-art term-based models including BM25 and Support Vector Machines (SVM), and the Pattern Taxonomy Model (PTM).
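
The thesis's threshold model and ranking function are not reproduced here; the sketch below is only a schematic of the two-stage idea, in which a cheap term-based relevance score filters out clearly irrelevant documents and a pattern-based score re-ranks the survivors. The scoring functions, threshold and data are placeholders.

```python
# Schematic two-stage filtering sketch (placeholder scoring functions and data):
# stage 1 discards likely-irrelevant documents with a cheap term-overlap score,
# stage 2 re-ranks the remainder with a (here trivial) pattern-based score.

profile_terms = {"steel", "fire", "buckling"}
profile_patterns = [("cold-formed", "steel"), ("fire", "resistance")]

documents = {
    "d1": "distortional buckling of cold-formed steel columns in fire",
    "d2": "a history of garden design in the eighteenth century",
    "d3": "fire resistance of light gauge steel floor systems",
}

def term_score(text: str) -> float:
    words = set(text.split())
    return len(words & profile_terms) / len(profile_terms)

def pattern_score(text: str) -> float:
    return sum(1.0 for p in profile_patterns if all(w in text for w in p))

# Stage 1: topic filtering with a (hypothetical) threshold.
threshold = 0.3
survivors = {d: text for d, text in documents.items() if term_score(text) >= threshold}

# Stage 2: pattern-based re-ranking of the much smaller survivor set.
ranking = sorted(survivors, key=lambda d: pattern_score(survivors[d]), reverse=True)
print(ranking)   # e.g. ['d1', 'd3'] with d2 filtered out at stage 1
```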

Relevance: 30.00%

Abstract:

In recent times, light gauge cold-formed steel sections have been used extensively because they have a very high strength-to-weight ratio compared with thicker hot-rolled steel sections. However, they are susceptible to various buckling modes, including the distortional mode, and hence show complex behaviour under fire conditions. Therefore a research project based on detailed experimental studies was undertaken to investigate the distortional buckling behaviour of light gauge cold-formed steel compression members under simulated fire conditions. More than 150 axial compression tests were undertaken at uniform ambient and elevated temperatures. Two types of cross-section were selected, with nominal thicknesses of 0.60, 0.80, and 0.95 mm. Both low strength (G250) and high strength (G550) steels were used. Distortional buckling tests were conducted at six different temperatures in the range of 20 to 800°C. The ultimate loads of compression members subject to distortional buckling were then used to review the adequacy of the current design rules at ambient and elevated temperatures. This paper presents the details of this experimental study and its results.
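
One of the current design rules that such ultimate loads are typically checked against is the direct strength method's distortional buckling curve (AS/NZS 4600 / AISI S100); the sketch below evaluates that curve with hypothetical section properties and hypothetical elevated-temperature retention factors, purely as an illustration rather than a statement of the paper's findings.

```python
# Hedged sketch: direct strength method (DSM) distortional buckling capacity,
# evaluated with hypothetical section properties and hypothetical
# elevated-temperature retention factors for yield stress and elastic modulus.

def dsm_distortional_strength(Ny: float, Nod: float) -> float:
    """Nominal distortional buckling capacity (same units as the inputs)."""
    lam_d = (Ny / Nod) ** 0.5
    if lam_d <= 0.561:
        return Ny
    ratio = (Nod / Ny) ** 0.6
    return (1.0 - 0.25 * ratio) * ratio * Ny

# Hypothetical example: a 0.95 mm G550 section at 500°C.
A = 120.0            # gross area, mm^2 (hypothetical)
fy_20 = 550.0        # ambient yield stress, MPa
k_y, k_E = 0.6, 0.5  # hypothetical retention factors at 500°C

Ny = k_y * fy_20 * A / 1000.0   # squash load, kN
Nod_20 = 60.0                   # ambient elastic distortional buckling load, kN (hypothetical)
Nod = k_E * Nod_20              # reduced in proportion to the elastic modulus

print(f"Distortional capacity at 500°C: {dsm_distortional_strength(Ny, Nod):.1f} kN")
```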

Relevance: 30.00%

Abstract:

The researcher's professional role as an Education Officer was the impetus for this study. Designing and implementing professional development activities is a significant component of the researcher's position description, and as a result of reflection and feedback from participants and colleagues, the creation of a more effective model of professional development became the focus for this study. Few studies have examined all three links between the purposes of professional development, that is, increasing teacher knowledge, improving teacher practice, and improving student outcomes. This study is significant in that it investigates the nature of the growth of teachers who participated in a model of professional development based upon the principles of Lesson Study. The research provides qualitative and empirical data to establish some links between teacher knowledge, teacher practice, and student learning outcomes. Teacher knowledge in this study refers to mathematics content knowledge as well as pedagogical content knowledge. The outcomes for students include achievement outcomes, attitudinal outcomes, and behavioural outcomes. As the study was conducted at one school site, existence-proof research was the focus of the methodology and data collection. Running over the 2007 school year, with five teacher-participants and approximately 160 students from Year Levels 6 to 9, the Lesson Study-principled model of professional development provided the teacher-participants with on-site, ongoing, and reflective learning based on their classroom environment. The focus area for the professional development was strategising the engagement with, and solution of, worded mathematics problems. A design experiment was used to develop the professional development as an intervention in prevailing teacher practice, for which data were collected prior to and after the period of intervention. A model of teacher change was developed as an underpinning framework for the development of the study, and was useful in making decisions about data collection and analyses. Data sources consisted of questionnaires, pre-tests and post-tests, interviews, and researcher observations and field notes. The data clearly showed that content knowledge and pedagogical content knowledge increased among the teacher-participants, that teacher practice changed in a positive manner, and that a majority of students demonstrated improved learning outcomes. The positive changes to teacher practice are described in this study as the demonstrated use of mixed pedagogical practices, rather than a polarisation to either traditional or contemporary pedagogical practices. The improvement in student learning outcomes was most significant as improved achievement outcomes, as indicated by the comparison of pre-test and post-test scores. The effectiveness of the Lesson Study-principled model of professional development used in this study was evaluated using Guskey's (2005) Five Levels of Professional Development Evaluation.

Relevance: 30.00%

Abstract:

Objective: To determine whether there are clinical and public health dilemmas resulting from the reproducibility of routine vitamin D assays. Methods: Blinded agreement studies were conducted in eight clinical laboratories in Australasia and Canada, using two commonly used assays to measure serum 25-hydroxyvitamin D (25(OH)D) levels (the DiaSorin radioimmunoassay (RIA) and the DiaSorin LIAISON® assay). Results: Only one laboratory measured 25(OH)D with excellent precision. Replicate 25(OH)D measurements varied by up to 97%, and 15% of paired results differed by more than 50%. Thirteen percent of subjects received one result indicating insufficiency [25-50 nmol/l] and another suggesting adequacy [>50 nmol/l]. Agreement ranged from poor to excellent for laboratories using the manual RIA, while the precision of the semi-automated LIAISON® system was consistently poor. Conclusions: Recent interest in the relevance of vitamin D to human health has increased demand for 25(OH)D testing and the associated costs. Our results suggest clinicians and public health authorities are making decisions about treatment or changes to public health policy based on imprecise data. Clinicians, researchers and policy makers should be made aware of the imprecision of current 25(OH)D testing so that they exercise caution when using these assays for clinical practice, and when interpreting the findings of epidemiological studies based on vitamin D levels measured using these assays. Development of a rapid, reproducible, accurate and robust assay should be a priority, given the interest in population-based screening programs and in research to inform public health policy about the amount of sun exposure required for human health. In the interim, 25(OH)D results should routinely include a statement of measurement uncertainty.
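
One way the quoted agreement figures can be derived from blinded replicates is sketched below with invented data: the percent difference of each pair relative to its mean, the share of pairs differing by more than 50%, and the share giving discordant status calls around the 50 nmol/l cut-off. The numbers are illustrative only, not the study's data.

```python
# Illustrative agreement sketch with invented replicate 25(OH)D results (nmol/l).
# For each blinded pair: percent difference relative to the pair mean, the share of
# pairs differing by more than 50%, and the share giving discordant status calls
# around the 50 nmol/l sufficiency cut-off.
pairs = [(42.0, 61.0), (80.0, 78.0), (30.0, 55.0), (95.0, 60.0), (48.0, 50.5)]

def pct_diff(a, b):
    return abs(a - b) / ((a + b) / 2) * 100

diffs = [pct_diff(a, b) for a, b in pairs]
over_50pct = sum(d > 50 for d in diffs) / len(pairs)
discordant_status = sum((a < 50) != (b < 50) for a, b in pairs) / len(pairs)

print([f"{d:.0f}%" for d in diffs])
print(f"Pairs differing by >50%: {over_50pct:.0%}; discordant status calls: {discordant_status:.0%}")
```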

Relevance: 30.00%

Abstract:

Typical high strength steels (HSS) have exceptionally high strengths with improved weldability, making the material attractive in modern steel construction. However, due to a lack of understanding, most current steel design standards are limited to conventional low strength steels (LSS, i.e. fy ≤ 450 MPa). This paper presents the details of full-scale experimental tests on short beams fabricated from BISPLATE80 HSS material (nominal fy = 690 MPa). The slenderness ratios of the plate elements in the test specimens were chosen to lie in the range near the current yield limit (AS4100-1998, etc.). The experimental studies presented in this paper have produced a better understanding of the structural behaviour of HSS members subjected to local instabilities. Comparisons are also presented with the design predictions from the current steel standards (AS4100-1998). This study has enabled a series of proposals for the proper assessment of plate slenderness limits for structural members made of representative HSS materials. This research also supports the inclusion of typical HSS materials, for use in buildings and bridges, in future versions of the steel design specifications. This paper also presents a distribution model of residual stresses in the longitudinal direction for typical HSS I-sections.
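
As a pointer to what the plate slenderness discussion refers to, the sketch below computes an AS4100-style element slenderness, λe = (b/t)√(fy/250), for a hypothetical HSS flange outstand and compares it with a user-supplied yield slenderness limit; the dimensions and the limit value are placeholders, not the paper's proposals.

```python
# Hedged sketch: AS4100-style plate element slenderness for a flange outstand,
# lambda_e = (b / t) * sqrt(fy / 250), compared against a yield slenderness limit.
# The dimensions and the limit value below are hypothetical placeholders.
import math

def element_slenderness(b_mm: float, t_mm: float, fy_mpa: float) -> float:
    return (b_mm / t_mm) * math.sqrt(fy_mpa / 250.0)

b, t, fy = 80.0, 8.0, 690.0   # hypothetical outstand width, thickness, yield stress
lambda_e = element_slenderness(b, t, fy)
lambda_ey = 14.0              # placeholder yield slenderness limit for this element type

status = "yield-capable" if lambda_e <= lambda_ey else "governed by local buckling"
print(f"lambda_e = {lambda_e:.1f} (limit {lambda_ey}): {status}")
```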

Relevance: 30.00%

Abstract:

Cold-formed steel members can be assembled in various combinations to provide cost-efficient and safe light gauge floor systems for buildings. Such light gauge steel framing (LSF) systems are widely accepted in industrial and commercial building construction, an example application being floor-ceiling systems. Light gauge steel floor-ceiling systems must be designed to serve as fire compartment boundaries and provide adequate fire resistance. Fire-rated floor-ceiling assemblies formed with new materials and construction methodologies have been increasingly used in buildings. However, limited research has been undertaken in the past, and hence a thorough understanding of their fire resistance behaviour is not available. Recently a new composite floor-ceiling system has been developed to provide a higher fire rating under standard fire conditions, but its increased fire rating could not be determined using the currently available design methods. Therefore a research project was carried out to investigate its structural and fire resistance behaviour under standard fire conditions. In this research project, full-scale experimental tests of the new LSF floor system based on a composite ceiling unit were undertaken using a gas furnace at the Queensland University of Technology. Both the conventional and the new steel floor-ceiling systems were tested under structural and fire loads. The full-scale fire tests provided a good understanding of the fire behaviour of LSF floor-ceiling systems and confirmed the superior performance of the new composite system. This paper presents the details of this research into the structural and fire behaviour of light gauge steel floor systems protected by the new composite panel, together with the results.

Relevance: 30.00%

Abstract:

The Simultaneous Localisation And Mapping (SLAM) problem is one of the major challenges in mobile robotics. Probabilistic techniques using high-end range-finding devices are well established in the field, but recent work has investigated vision-only approaches. We present an alternative approach to the leading existing techniques, which extracts approximate rotational and translational velocity information from a vehicle-mounted consumer camera, without tracking landmarks. When coupled with an existing SLAM system, the vision module is able to map a 45 metre long indoor loop and a 1.6 km long outdoor road loop, without any parameter or system adjustment between tests. The work serves as a promising pilot study into ground-based vision-only SLAM with minimal geometric interpretation of the environment.
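
The paper's velocity-extraction algorithm is not detailed above; in the same landmark-free spirit, the sketch below estimates a yaw change between consecutive frames by collapsing each greyscale image to a column-intensity profile and finding the horizontal shift that best aligns the two profiles (the field of view, search range, and synthetic frames are assumptions, not the paper's method).

```python
# Hedged sketch of a landmark-free rotation estimate between consecutive frames:
# collapse each greyscale frame to a 1-D column-intensity profile and find the
# horizontal shift that best aligns the two profiles; with a known horizontal
# field of view, the shift maps to an approximate yaw change.
import numpy as np

def column_profile(frame: np.ndarray) -> np.ndarray:
    """Mean intensity of each image column (frame: H x W greyscale array)."""
    return frame.mean(axis=0)

def yaw_change_deg(prev: np.ndarray, curr: np.ndarray, hfov_deg: float = 60.0) -> float:
    p, c = column_profile(prev), column_profile(curr)
    width = p.size
    shifts = range(-width // 4, width // 4 + 1)              # limited shift search range
    errors = [np.mean(np.abs(np.roll(c, s) - p)) for s in shifts]
    best_shift = list(shifts)[int(np.argmin(errors))]
    return best_shift * hfov_deg / width                     # pixels -> degrees

# Synthetic check: a random frame and a copy shifted 12 pixels to the left.
rng = np.random.default_rng(1)
frame = rng.random((120, 320))
shifted = np.roll(frame, -12, axis=1)
print(f"estimated yaw change: {yaw_change_deg(frame, shifted):.2f} degrees")
```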

Relevance: 30.00%

Abstract:

Visual servoing has been a viable method of robot manipulator control for more than a decade. Initial developments involved position-based visual servoing (PBVS), in which the control signal exists in Cartesian space. The younger method, image-based visual servoing (IBVS), has seen considerable development in recent years. PBVS and IBVS offer trade-offs in performance, and neither can solve all tasks that may confront a robot. In response to these issues, several methods have been devised that partition the control scheme, allowing some motions to be performed in the manner of a PBVS system while the remaining motions are performed using an IBVS approach. To date, there has been little research that explores the relative strengths and weaknesses of these methods. In this paper we present such an evaluation. We have chosen three recent visual servo approaches for evaluation, in addition to the traditional PBVS and IBVS approaches. We posit a set of performance metrics that quantitatively measure the performance of a visual servo controller for a specific task. We then evaluate each of the candidate visual servo methods for four canonical tasks, with simulations and with experiments in a robotic work cell.
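
For readers unfamiliar with the IBVS formulation being evaluated, the sketch below implements the classical image-based control law v = -λ L⁺ (s - s*) with the standard interaction matrix for point features; the feature coordinates, depths and gain are hypothetical.

```python
# Hedged sketch of a classical IBVS control law, v = -lambda * L^+ (s - s*), using
# the standard interaction matrix for point features (normalized image coordinates
# x, y and an estimated depth Z). Feature values and the gain below are hypothetical.
import numpy as np

def interaction_matrix(x: float, y: float, Z: float) -> np.ndarray:
    """2x6 interaction matrix for one point feature."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x ** 2), y],
        [0, -1 / Z, y / Z, 1 + y ** 2, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5) -> np.ndarray:
    """Camera velocity screw (vx, vy, vz, wx, wy, wz) driving features to desired."""
    L = np.vstack([interaction_matrix(x, y, Z) for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).reshape(-1)
    return -gain * np.linalg.pinv(L) @ error

# Hypothetical example: four point features slightly offset from their goal positions.
features = [(0.12, 0.10), (-0.10, 0.11), (-0.11, -0.09), (0.10, -0.10)]
desired  = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
depths   = [1.0, 1.0, 1.0, 1.0]
print(ibvs_velocity(features, desired, depths))
```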