953 results for test-process features
Abstract:
In this thesis, the components important for testing work and the organisational test process are identified and analysed. The work focuses on testing activities in real-life software organisations: identifying the important test process components, observing testing work in practice, and analysing how the organisational test process could be developed. Software professionals from 14 different software organisations were interviewed to collect data on the organisational test process and testing-related factors. Additional data on organisational aspects was collected with a survey of 31 organisations. The data was analysed with the Grounded Theory method to identify the important test process components and to observe how real-life test organisations develop their testing activities. The results indicate that test management at the project level is an important factor; the organisations do have sufficient test resources available, but those resources are not necessarily applied efficiently. In addition, organisations are generally reactive: they develop their process mainly to correct problems, not to enhance their efficiency or output quality. The results of this study allow organisations to gain a better understanding of their test processes and to develop towards better practices and a culture of preventing problems rather than reacting to them.
Abstract:
The capacitor test process at ABB Capacitors in Ludvika must be improved to meet future demands for high-voltage products. To find out how to improve the test process, an investigation was performed to establish which parts of the process are used and how they operate. Several parts that could improve the process were identified. One of them was selected for improvement in line with the subject, mechanical engineering. Four concepts were generated and decision matrices were used to systematically select the best concept. Improving the process adds several benefits: more units can be tested and lead time is reduced. As the lead time is reduced, the cost per unit falls and workers spend fewer hours for the same number of tested units; future work to further improve the process is also identified. The selected concept was concept 1, the sway stop concept, which reduces the sway of the capacitors once they have entered the test facility, the box. Improving this part of the test process yields a time saving of 20 seconds per unit, equivalent to a 7% time reduction, which corresponds to an additional 1400 units each year.
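As a rough plausibility check on the figures quoted in this abstract, the short sketch below (assuming only the 20 s saving, the 7% reduction, and the 1400 extra units per year) computes the cycle time and annual throughput those numbers imply.

```python
# Back-of-the-envelope check of the figures quoted in the abstract:
# a 20 s saving per unit corresponds to a 7 % cycle-time reduction,
# and roughly 1400 additional units per year.

saving_s = 20.0          # time saved per unit (from the abstract)
reduction = 0.07         # quoted relative cycle-time reduction

old_cycle_s = saving_s / reduction          # implied original cycle time, ~286 s
new_cycle_s = old_cycle_s - saving_s        # improved cycle time, ~266 s

extra_units_per_year = 1400                 # quoted capacity gain
# Throughput gain at constant testing hours: old/new cycle ratio minus 1 (~7.5 %)
throughput_gain = old_cycle_s / new_cycle_s - 1
implied_baseline_units = extra_units_per_year / throughput_gain

print(f"implied original cycle time: {old_cycle_s:.0f} s")
print(f"implied annual baseline throughput: {implied_baseline_units:.0f} units")
```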
Abstract:
The growth in complexity and functional importance of integrated navigation systems (INS) leads to high losses when the equipment fails. The paper is devoted to the development of an INS diagnosis system that allows the cause of a malfunction to be identified. The proposed solutions permit taking into account any changes in sensor dynamic and accuracy characteristics by means of the appropriate error-model coefficients. Under actual INS operating conditions, the determination of the current values of the sensor-model and estimation-filter parameters relies on identification procedures. Results of full-scale experiments are given, which corroborate the expediency of parametric identification of the INS error models during the bench test process.
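To illustrate the kind of identification procedure the abstract refers to, the sketch below fits a simple sensor error model (bias and scale factor) to simulated bench-test data by least squares. The model, the variable names, and the data are assumptions for illustration, not the paper's actual formulation or filter.

```python
import numpy as np

# Minimal sketch of parametric identification of a sensor error model
# from bench-test data, assuming the simple model
#   measured_rate = (1 + scale_error) * true_rate + bias + noise.

rng = np.random.default_rng(0)
true_rate = np.linspace(-10.0, 10.0, 200)          # commanded turntable rate, deg/s
bias_true, scale_true = 0.05, 1e-3                  # "unknown" error parameters
measured = (1 + scale_true) * true_rate + bias_true + rng.normal(0, 0.01, true_rate.size)

# Least-squares identification: measured - true_rate = scale_error * true_rate + bias
A = np.column_stack([true_rate, np.ones_like(true_rate)])
params, *_ = np.linalg.lstsq(A, measured - true_rate, rcond=None)
scale_est, bias_est = params

print(f"estimated scale error: {scale_est:.2e}, estimated bias: {bias_est:.3f} deg/s")
```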
Abstract:
BACKGROUND: Pneumonia is the biggest cause of deaths in young children in developing countries, but early diagnosis and intervention can effectively reduce mortality. We aimed to assess the diagnostic value of clinical signs and symptoms to identify radiological pneumonia in children younger than 5 years and to review the accuracy of WHO criteria for diagnosis of clinical pneumonia. METHODS: We searched Medline (PubMed), Embase (Ovid), the Cochrane Database of Systematic Reviews, and reference lists of relevant studies, without date restrictions, to identify articles assessing clinical predictors of radiological pneumonia in children. Selection was based on: design (diagnostic accuracy studies), target disease (pneumonia), participants (children aged <5 years), setting (ambulatory or hospital care), index test (clinical features), and reference standard (chest radiography). Quality assessment was based on the 2011 Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) criteria. For each index test, we calculated sensitivity and specificity and, when the tests were assessed in four or more studies, calculated pooled estimates with use of a bivariate model and hierarchical summary receiver operating characteristic plots for meta-analysis. FINDINGS: We included 18 articles in our analysis. The WHO-approved signs of age-related fast breathing (six studies; pooled sensitivity 0·62, 95% CI 0·26-0·89; specificity 0·59, 0·29-0·84) and lower chest wall indrawing (four studies; 0·48, 0·16-0·82; 0·72, 0·47-0·89) showed poor diagnostic performance in the meta-analysis. Features with the highest pooled positive likelihood ratios were respiratory rate higher than 50 breaths per min (1·90, 1·45-2·48), grunting (1·78, 1·10-2·88), chest indrawing (1·76, 0·86-3·58), and nasal flaring (1·75, 1·20-2·56). Features with the lowest pooled negative likelihood ratios were cough (0·30, 0·09-0·96), history of fever (0·53, 0·41-0·69), and respiratory rate higher than 40 breaths per min (0·43, 0·23-0·83). INTERPRETATION: No single clinical feature was sufficient to diagnose pneumonia definitively. Combining clinical features in a decision tree might improve diagnostic performance, but the addition of new point-of-care tests for diagnosis of bacterial pneumonia would help attain an acceptable level of accuracy. FUNDING: Swiss National Science Foundation.
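For readers unfamiliar with the metrics reported above, the sketch below shows how sensitivity, specificity, and the positive and negative likelihood ratios follow from a 2x2 table of one index test against the reference standard; the counts are invented for illustration, not taken from the reviewed studies.

```python
# Sensitivity, specificity and likelihood ratios from a 2x2 table for one
# index test (a clinical sign) against the reference standard (chest radiography).
# The counts below are illustrative only.

tp, fn = 60, 40     # children with radiological pneumonia: sign present / absent
fp, tn = 120, 180   # children without pneumonia: sign present / absent

sensitivity = tp / (tp + fn)                   # P(sign | pneumonia)
specificity = tn / (tn + fp)                   # P(no sign | no pneumonia)
lr_positive = sensitivity / (1 - specificity)  # how much a positive sign raises the odds
lr_negative = (1 - sensitivity) / specificity  # how much a negative sign lowers the odds

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"LR+={lr_positive:.2f} LR-={lr_negative:.2f}")
```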
Abstract:
The aim of this work was to calibrate the material properties, including strength and strain values, for the different material zones of ultra-high-strength steel (UHSS) welded joints under monotonic static loading. UHSS is heat sensitive and is softened by the heat of welding; the affected region is the heat-affected zone (HAZ). Cylindrical specimens were cut from welded joints of Strenx® 960 MC and Strenx® Tube 960 MH and examined by tensile testing. The hardness values across the specimens' cross sections were measured, and initial material properties were obtained using correlations between hardness and strength. Specimens of the same size, with the same material zones as the real specimens, were created and defined in the finite element method (FEM) software Abaqus 6.14-1. The loading and boundary conditions were defined according to the tensile test values. Using the initial material properties derived from the hardness-strength correlations (true stress-strain values) as the main Abaqus input, FEM was used to simulate the tensile test process. By comparing the FEM results with the measured tensile test results, the initial material properties were revised and reused as software input until fully calibrated, so that the FEM and tensile test results deviate as little as possible. Two different types of S960 were used: 960 MC plates and a structural hollow section 960 MH X-joint. The joint was welded with Böhler™ X96 filler material. In welded joints, the following zones typically appear: weld (WEL), coarse-grained (HCG) and fine-grained (HFG) heat-affected zone, annealed zone, and base material (BaM). The results showed that the HAZ is softened by the heat input during welding. For all the specimens, the softened zone's strength is decreased, making it the weakest zone, where fracture occurs under loading. The stress concentration of a notched specimen can represent the properties of the notched zone. With the calibrated material properties, obtained by reconciling the two hardness-strength correlations, the load-displacement diagram from the FEM model matches the experiments.
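The calibration loop described above (estimate strength from hardness, simulate, compare with the tensile test, revise) can be sketched roughly as follows. The run_abaqus_model() call is a placeholder for the actual Abaqus job, and the factor of 3.3 is only a commonly quoted hardness-to-strength rule of thumb for steels, not necessarily the correlation used in the thesis.

```python
# Sketch of the outer calibration loop: start from a hardness-based strength
# estimate, run the FEM model, compare the simulated response with the tensile
# test, and scale the zone strength until the deviation is small.

def run_abaqus_model(zone_strengths_mpa):
    """Placeholder: submit the FEM job and return the peak simulated load in kN."""
    raise NotImplementedError

def calibrate_zone(hardness_hv, measured_peak_load_kn, zone="HAZ",
                   tol=0.01, max_iter=20):
    strength = 3.3 * hardness_hv          # initial estimate from hardness, MPa (rule of thumb)
    for _ in range(max_iter):
        simulated = run_abaqus_model({zone: strength})
        error = (simulated - measured_peak_load_kn) / measured_peak_load_kn
        if abs(error) < tol:
            break
        strength *= 1 - 0.5 * error       # damped correction toward the experiment
    return strength
```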
Abstract:
Introduction: Mindfulness-based cognitive therapy for depression (MBCT) has been shown to be effective in reducing depressive relapse. However, additional information on baseline patient characteristics and process features related to positive response could be helpful both for the provision of MBCT in clinical practice and for its further development. Method: Baseline characteristics, process data, and immediate outcome (symptom change, change in attitudes and trait mindfulness) of 108 patients receiving MBCT in routine care were recorded. A newly developed self-report measure (Daily Mindfulness Scale, DMS) was applied daily during the MBCT program. Additionally, patients filed daily reports on their mindfulness practice. No control group was available. Results: Patients with more severe initial symptoms showed greater symptom improvement, but did not show higher rates of dropout from the MBCT intervention. Younger age was related to higher rates of dropout. Contrary to some previous data, patients with lower levels of initial trait mindfulness showed greater improvement in symptoms, even after controlling for initial symptom levels. Adherence to daily mindfulness practice was high. Consistent with this result, the duration of daily mindfulness practice was not related to immediate outcome. Process analyses using multivariate time series analysis revealed a specific role of daily mindfulness in reducing subsequent negative mood. Conclusions: Within the range of patients present in this study and the given study design, the results support the use of MBCT in more heterogeneous groups. This demanding intervention was well tolerated by patients with higher levels of symptoms and resulted in significant improvements in residual symptoms. Process-outcome analyses of initial trait mindfulness and daily mindfulness both support the crucial role of changes in mindfulness for the effects of MBCT.
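As a purely illustrative view of the lagged process-outcome question (whether daily mindfulness predicts subsequent negative mood), the sketch below fits a simple lag-1 regression to simulated diary data; the data, the lag structure, and the coefficients are assumptions, not the study's actual multivariate time series model.

```python
import numpy as np

# Illustrative lag-1 analysis: does today's daily mindfulness (DMS) predict
# tomorrow's negative mood after controlling for today's mood? All values here
# are simulated for demonstration.

rng = np.random.default_rng(1)
days = 56
mindfulness = rng.normal(0, 1, days)
mood = np.zeros(days)
for t in range(1, days):
    mood[t] = 0.4 * mood[t - 1] - 0.3 * mindfulness[t - 1] + rng.normal(0, 0.5)

# Regress mood[t] on an intercept, mood[t-1] and mindfulness[t-1]
X = np.column_stack([np.ones(days - 1), mood[:-1], mindfulness[:-1]])
coef, *_ = np.linalg.lstsq(X, mood[1:], rcond=None)
print(f"lagged-mindfulness coefficient: {coef[2]:.2f} (negative = mood-reducing)")
```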
Abstract:
Synthetic Aperture Radars (SAR) are systems, designed in the early 1950s, that are capable of obtaining images of the ground using electromagnetic signals. Their operation is therefore not interrupted by adverse meteorological conditions or at night, as happens with optical systems. The name of the system comes from the creation of a synthetic aperture, larger than the real one, by moving the platform that carries the radar (typically a plane or a satellite). It provides the same resolution as a static radar equipped with a larger antenna. As it moves, the radar keeps emitting pulses every 1/PRF seconds (the PRF is the pulse repetition frequency), whose echoes are stored and processed to obtain the image of the ground. To carry out this process, the algorithm needs to assume that the targets in the illuminated scene are not moving. If that is the case, the algorithm is able to extract a focused image from the signal. However, if the targets are moving, they appear unfocused and/or shifted from their true position in the final image. There are applications in which it is especially useful to have information about moving targets (military applications, rescue tasks, study of water flows, surveillance of maritime routes, etc.). This capability is called Ground Moving Target Indicator (GMTI). That is why the study and development of techniques capable of detecting these targets and placing them correctly in the scene is worthwhile. In this document, some of the principal GMTI algorithms used in SAR systems are detailed. A simulator has been created to test the features of each implemented algorithm in a general situation with moving targets. Finally, Monte Carlo tests have been performed, allowing conclusions and statistics to be extracted for each algorithm.
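To make the "shifted from their position" effect concrete, the sketch below evaluates the standard first-order relation for the azimuth displacement of a target with constant radial velocity; the numerical values are illustrative and are not taken from the document's simulator.

```python
# First-order estimate of the azimuth displacement of a moving target in a SAR
# image: a target with constant radial (line-of-sight) velocity v_r at slant
# range R, observed from a platform moving at v_p, appears shifted in azimuth
# by roughly delta_x = R * v_r / v_p.

def azimuth_shift_m(slant_range_m: float, radial_velocity_ms: float,
                    platform_velocity_ms: float) -> float:
    """Approximate azimuth displacement of a moving target in the SAR image."""
    return slant_range_m * radial_velocity_ms / platform_velocity_ms

# Example: airborne SAR at 120 m/s, target 10 km away moving at 5 m/s toward the radar
print(f"{azimuth_shift_m(10_000, 5, 120):.0f} m of azimuth shift")  # ~417 m
```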
Abstract:
This research investigated the pattern of antibody response by means of an enzyme-linked immunosorbent assay (ELISA) and an indirect fluorescent antibody test (IFAT) through the course of experimental Trypanosoma evansi infection in dogs. Clinical and parasitological features were also studied. The average prepatent period was 11.2 days and parasitaemia showed an undulating course. Biometrical study of the parasites revealed a mean total length of 21.68 µm. The disease was characterized by intermittent fever closely related to the degree of parasitaemia, and the main clinical signs consisted of pallor of the mucous membranes, edema, progressive emaciation and enlargement of the palpable lymph nodes. Diagnostic antibody was detected within 12 to 15 days and 15 to 19 days of infection by IFAT and ELISA, respectively. High and persistent antibody levels were detected by both tests and appeared not to correlate with control of parasitaemia.
Abstract:
The current context of strong competition and the ongoing search for competitive advantages requires more than process modernization and technological and financial resources. It requires a competent workforce, strongly committed to and engaged with the Organization's challenges. In this scenario, it seems crucial to synchronize employees' performance with the Organization's strategy, so as to pursue its effective achievement. If well used, Performance Evaluation as a Human Resource Management strategy presents itself as an instrument to foster high levels of performance. A more recent approach to this policy is Performance Management, a dynamic and participative evaluation system which combines the setting of consensual goals, support and follow-up, and the subsequent assessment of their execution. This research was based on the case of ENAPOR, S.A. (Porto da Praia), with the intention of checking the alignment of its Performance Evaluation System with the Company's strategic goals and identifying the features of that process.
Abstract:
Since the deregulation of the electricity market, demand in the energy sector has grown for advanced information systems specialised in energy data management. New legislation and future comprehensive data collection systems, such as smart meters and smart grids, bring with them an ever larger stream of data to be processed. A modern energy information system must be able to meet this challenge and serve customer requirements efficiently without degrading process performance. The system's processes must also be scalable, so that the increased processing needs of the future remain manageable. This thesis describes the central components of a modern energy information system related to the management and storage of energy data. It also presents the basic principle of smart meters and the benefits they bring to the energy sector, and describes visions of how future smart grids could be implemented. The thesis introduces the key topics related to performance and describes the main performance metrics and performance requirements. There are various methods for carrying out a system performance evaluation; one of them is described here together with its central principles. Several techniques are used for performance analysis, of which system measurement is presented in more detail. The thesis also includes a case study in which two development versions of the process used for importing measurement data are analysed together with their performance characteristics. The comparison of the development versions shows that the new version is clearly faster than the previous one. The case study also determines the number of parallel processes that is optimal for performance and examines the scalability of the process. The study finds that the new development version scales linearly.
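The case-study measurement described above, comparing the import process at different degrees of parallelism, can be sketched roughly as follows; import_batch() and the batch counts are stand-ins for the real meter-data import step, not the system's actual code.

```python
import time
from concurrent.futures import ProcessPoolExecutor

# Sketch of the kind of measurement used in such a case study: run the same
# batch of import jobs with different numbers of parallel workers and compare
# throughput.

def import_batch(batch_id: int) -> int:
    """Placeholder for parsing/validating/storing one batch of meter readings."""
    time.sleep(0.05)          # simulate I/O and processing
    return batch_id

def measure_throughput(n_workers: int, n_batches: int = 200) -> float:
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        list(pool.map(import_batch, range(n_batches)))
    return n_batches / (time.perf_counter() - start)   # batches per second

if __name__ == "__main__":
    for workers in (1, 2, 4, 8):
        print(f"{workers} workers: {measure_throughput(workers):.1f} batches/s")
```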
Abstract:
Four experiments consider some of the circumstances under which children follow two different rule pairs when sorting cards. Previous research has repeatedly found that 3-year-olds encounter substantial difficulties implementing the second of two conflicting rule sets, despite their knowledge of these rules. One interpretation of this phenomenon [Cognitive Complexity and Control (CCC) theory] is that 3-year-olds have problems establishing an appropriate hierarchical ordering for rules. The present data suggest an alternative account of children's card sorting behaviour, according to which the cognitive salience of test card features may be more important than inflexibility with respect to rule representation.
Abstract:
Highly heterogeneous mountain snow distributions strongly affect soil moisture patterns; local ecology; and, ultimately, the timing, magnitude, and chemistry of stream runoff. Capturing these vital heterogeneities in a physically based distributed snow model requires appropriately scaled model structures. This work looks at how model scale—particularly the resolutions at which the forcing processes are represented—affects simulated snow distributions and melt. The research area is in the Reynolds Creek Experimental Watershed in southwestern Idaho. In this region, where there is a negative correlation between snow accumulation and melt rates, overall scale degradation pushed simulated melt to earlier in the season. The processes mainly responsible for snow distribution heterogeneity in this region—wind speed, wind-affected snow accumulations, thermal radiation, and solar radiation—were also independently rescaled to test process-specific spatiotemporal sensitivities. It was found that in order to accurately simulate snowmelt in this catchment, the snow cover needed to be resolved to 100 m. Wind and wind-affected precipitation—the primary influence on snow distribution—required similar resolution. Thermal radiation scaled with the vegetation structure (~100 m), while solar radiation was adequately modeled with 100–250-m resolution. Spatiotemporal sensitivities to model scale were found that allowed for further reductions in computational costs through the winter months with limited losses in accuracy. It was also shown that these modeling-based scale breaks could be associated with physiographic and vegetation structures to aid a priori modeling decisions.
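As a generic illustration of how a single forcing can be degraded in resolution for such a sensitivity test, the sketch below block-averages one field to a coarser grid while the other forcings would remain at full resolution; it stands in for, and is not, the watershed model's own regridding.

```python
import numpy as np

# Degrade one forcing field (e.g., thermal radiation) to a coarser grid by
# block-averaging, then expand it back so it can be combined with the other
# forcings that stay at full resolution.

def degrade(field: np.ndarray, factor: int) -> np.ndarray:
    """Block-average a 2-D field by `factor` and expand back to the (trimmed) grid."""
    ny, nx = field.shape
    trimmed = field[:ny - ny % factor, :nx - nx % factor]
    coarse = trimmed.reshape(trimmed.shape[0] // factor, factor,
                             trimmed.shape[1] // factor, factor).mean(axis=(1, 3))
    return np.repeat(np.repeat(coarse, factor, axis=0), factor, axis=1)

fine = np.random.default_rng(2).normal(300, 20, (400, 400))  # e.g., thermal radiation on a fine grid
coarsened = degrade(fine, 4)                                  # as if represented 4x coarser
print(f"fine std: {fine.std():.1f}, degraded std: {coarsened.std():.1f}")
```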
Abstract:
Today, many system development projects exceed both their budget and their time plan. Often this is due to defects in the information systems that could have been prevented. The cost of testing can in some cases be as high as 50% of a project's total cost, and at the same time testing is an important part of development. Testing has shifted its focus from the software itself and its faults to a wider perspective on whole information system infrastructures, where assuring good quality is important. Sogeti in the Netherlands has developed a test method called TMap (Test Management approach) that can be used for structured testing of information systems. TMap has not been used as much as desired at the Borlänge office. Because Microsoft is releasing a new version of its platform Visual Studio Team System (VSTS 2010), some colleagues at Sogeti in the Netherlands are developing a template to support the use of TMap in VSTS 2010. At the time of writing, the template is still in development. Sogeti's goal was to find out the differences between the test functionality in VSTS 2008 and 2010. The purpose of this essay, to analyze the test process in VSTS 2008 with TMap against the test process in VSTS 2010 together with the template, helped us achieve that goal. The analysis was carried out from four different aspects: the TPI and TMMi models, problem and strength analyses, and a few research questions. The TPI and TMMi models were used to analyse and evaluate the test process. The analysis showed that there were differences between the two test processes. VSTS 2010 together with the template gave better support for using TMap and performing tests. In VSTS 2010 the test tool Camano is connected to TFS, and the tool also makes the execution and logging of tests easier. This leads to a test process that is easier to handle and has better support for TMap.
Abstract:
INTRODUCTION With the advent of Web 2.0, social networking websites like Facebook, MySpace and LinkedIn have become hugely popular. According to (Nilsen, 2009), social networking websites have global figures of almost 250 million unique users among the top five, with the time people spend on those networks increasing 63% between 2007 and 2008. Facebook alone saw a massive growth of 566% in number of minutes in the same period of time. Furthermore, their appeal is clear: they enable users to easily form persistent networks of friends with whom they can interact and share content. Users then use those networks to keep in touch with their current friends and to reconnect with old friends. However, online social network services have rapidly evolved into highly complex systems which contain a large amount of personally salient information derived from large networks of friends. Since that information varies from simple links to music, photos and videos, users not only have to deal with the huge amount of data generated by them and their friends but also with the fact that it is composed of many different media forms. Users are presented with increasing challenges, especially as the number of friends on Facebook rises. One example of a problem is when a user performs a simple task like finding a specific friend in a group of 100 or more friends. In that case he would most likely have to go through several pages and make several clicks until he finds the one he is looking for. Another example is a user with more than 100 friends, each of whom makes a status update or another action per day, resulting in 10 updates per hour to keep up with. That is plausible, especially since Facebook changed direction to rival Twitter by encouraging users to update their status as they do on Twitter. As a result, to better present the web of information connected to a user, the use of better visualizations is essential. The visualizations used nowadays on social networking sites have not gone through major changes during their lifetimes. They have added more functionality and given more tools to their users, but the core of their visualization has not changed. The information is still presented in a flat way, in lists/groups of text and images which cannot show the extra connections between pieces of information. Those extra connections can give new meaning and insights to the user, allowing him to more easily see whether that content is important to him and the information related to it. However, showing extra connections of information while still allowing the user to easily navigate through it and get the needed information at a quick glance is difficult. The use of color coding, clusters and shapes then becomes essential to attain that objective. Taking into consideration the advances in computer hardware in the last decade and the software platforms available today, there is the opportunity to take advantage of 3D. That opportunity arises because we are at a phase where the hardware and the software available are ready for the use of 3D on the web. With the extra dimension brought by 3D, visualizations can be constructed that show the content and its related information to the user on the same screen and in a clear way, while also allowing a great deal of interactivity. Another opportunity to create better information visualizations presents itself in the form of open APIs, specifically the ones made available by the social networking sites.
Those APIs allow any developer to create their own applications or sites taking advantage of the huge amount of information there is on those networks. Specifically in this case, they open the door for the creation of new social network visualizations. Nevertheless, the third dimension is by itself not enough to create a better interface for a social networking website; there are some challenges to overcome. One of those challenges is to make the user understand what the system is doing during the interaction. Even though that is important in 2D visualizations, it becomes essential in 3D due to the extra dimension. To overcome that challenge it is necessary to use the principles of animation defined by the artists at Walt Disney Studios (Johnston, et al., 1995). By applying those principles in the development of the interface, the actions of the system in response to user inputs become clear and understandable. Furthermore, a user study needs to be performed so that the users' main goals and motivations while navigating the social network are revealed. Their goals and motivations are important in the construction of an interface that reflects the users' expectations, and they also help in the development of appropriate metaphors. Those metaphors have an important role in the interface, because if correctly chosen they help the user understand the elements of the interface instead of having to memorize them. The last challenge is the use of 3D visualization on the web, since there have been several attempts to bring 3D to it, mainly with the various versions of VRML, which were destined to fail due to the hardware limitations of the time. However, in the last couple of years there has been a movement to create the necessary tools to finally allow developers to use 3D in a useful way, using X3D or OpenGL but especially Flash. This thesis argues that there is a need for a better social network visualization that shows all the dimensions of the information connected to the user and allows him to move through it. There are several characteristics the new visualization has to possess in order to present a real gain in usability to Facebook's users. The first is to have the friends at the core of its design, and the second is to make use of the metaphor of circles of friends to separate users into groups according to the order of friendship. To achieve that, several methods have to be used, from the use of 3D to gain an extra dimension for presenting relevant information, to the use of direct manipulation to make the interface comprehensible, predictable and controllable. Moreover, animation has to be used to make all the action on the screen perceptible to the user. Additionally, with the opportunity given by 3D-enabled hardware, the Flash platform, through the use of the Flash engine Papervision3D, and the Facebook platform, all is in place to make the visualization possible. But even though it is all in place, there are challenges to overcome, like making the system's actions in 3D understandable to the user and creating correct metaphors that allow the user to understand the information and options available to him. This thesis document is divided into six chapters, with Chapter 2 reviewing the literature relevant to the work described in this thesis. Chapter 3 describes the design stage that resulted in the application presented in this thesis.
Chapter 4 covers the development stage, describing the architecture and the components that compose the application. In Chapter 5 the usability test process is explained and the results obtained through it are presented and analyzed. Finally, Chapter 6 presents the conclusions reached in this thesis.