868 results for Statistic validation
Abstract:
In today's logistics environment, there is a tremendous need for accurate cost information and cost allocation. Companies searching for the proper solution often come across activity-based costing (ABC) or one of its variations, which uses cost drivers to allocate the costs of activities to cost objects. The selection of appropriate cost drivers is essential for allocating costs accurately and reliably, and thus for realizing the benefits of the costing system. The purpose of this study is to validate the transportation cost drivers of a Finnish wholesaler company and ultimately to select the best possible driver alternatives for the company. The use of cost driver combinations as an alternative is also studied. The study is conducted as part of the case company's applied ABC project, using statistical research as the main research method, supported by a theoretical, literature-based method. The main research tools featured in the study are simple and multiple regression analyses, which, together with a practicality analysis based on the literature and observations, form the basis for the conclusions. The results suggest that the most appropriate cost driver alternatives are delivery drops and internal delivery weight. The use of cost driver combinations is not recommended, as they do not provide substantially better results while increasing measurement costs, complexity and effort of use. The use of internal freight cost drivers is also questionable, as the results indicate a weakening trend in their cost allocation capability towards the end of the period. Therefore, more research on internal freight cost drivers should be conducted before taking them into use.
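The regression-based driver comparison can be pictured with a minimal sketch (the driver names and monthly figures below are illustrative, not the case company's data): for each candidate driver, fit cost = a + b·driver and compare the coefficients of determination.

```python
# Hedged sketch of ranking candidate transportation cost drivers by simple
# regression: the driver with the highest R-squared against monthly cost
# explains the most cost variation. All figures are illustrative.

def r_squared(x, y):
    """R-squared of a simple linear regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

monthly_cost = [410, 520, 480, 610, 550, 700]            # transportation cost
drivers = {
    "delivery_drops":  [39, 50, 47, 60, 54, 68],         # drops per month
    "delivery_weight": [120, 180, 90, 200, 160, 150],    # tonnes per month
}

# Rank candidate drivers by explanatory power; multiple regression would
# extend the same idea to driver combinations.
ranked = sorted(drivers, key=lambda d: r_squared(drivers[d], monthly_cost),
                reverse=True)
```

A practicality analysis (measurement cost, ease of use) would then be weighed against the statistical ranking, as the abstract describes.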
Abstract:
The objective of this study was to optimize and validate the solid-liquid extraction (ESL) technique for the determination of picloram residues in soil samples. At the optimization stage, the optimal conditions for extraction of the soil samples were determined using univariate analysis. The soil/extraction solution ratio, the type and time of agitation, and the ionic strength and pH of the extraction solution were evaluated. Based on the optimized parameters, the following method of extraction and analysis of picloram was developed: weigh 2.00 g of soil, dried and sieved through a 2.0 mm mesh sieve; add 20.0 mL of 0.5 mol L-1 KCl; shake the bottle in a vortex mixer for 10 seconds to form a suspension and adjust it to pH 7.00 with 0.1 mol L-1 KOH. Homogenize the system in a shaker for 60 minutes and then let it stand for 10 minutes. The bottles are centrifuged for 10 minutes at 3,500 rpm. After settling of the soil particles and cleaning of the supernatant extract, an aliquot is withdrawn and analyzed by high-performance liquid chromatography. The optimized method was validated by determining its selectivity, linearity, detection and quantification limits, precision and accuracy. The ESL methodology was efficient for the analysis of residues of the pesticide studied, with recovery percentages above 90%. The limits of detection and quantification were 20.0 and 66.0 mg kg-1 soil for the PVA soil, and 40.0 and 132.0 mg kg-1 soil for the VLA soil. The coefficients of variation (CV) were 2.32 and 2.69 for the PVA and TH soils, respectively. The methodology resulted in low organic solvent consumption and cleaner extracts, and no purification steps were required before chromatographic analysis. The parameters evaluated in the validation process indicated that the ESL methodology is efficient for the extraction of picloram residues from soils, with low limits of detection and quantification.
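The abstract does not state which estimator was used for the detection and quantification limits; a common convention, assumed here only for illustration, is the ICH calibration-curve approach, which derives both limits from the calibration slope and the standard deviation of the response:

```python
# Sketch of the ICH calibration-curve convention for LOD/LOQ (an assumption:
# the study may have used a different estimator, e.g. signal-to-noise).
# LOD = 3.3 * s / S and LOQ = 10 * s / S, where s is the standard deviation
# of the response (blank or regression residuals) and S the calibration slope.

def lod_loq(s_response, slope):
    lod = 3.3 * s_response / slope
    loq = 10.0 * s_response / slope
    return lod, loq

# Illustrative numbers only, not the study's calibration data:
lod, loq = lod_loq(s_response=0.12, slope=0.02)
```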
Abstract:
A capillary electrophoresis (CE) method originally designed for the analysis of monosaccharides was validated using reference solutions of polydatin. The validation was conducted by determining the LOD and LOQ concentration levels and the range of linearity, and by determining levels of uncertainty with respect to repeatability and reproducibility. The reliability of the results obtained is also discussed. A guide with recommendations concerning the validation and overall design of analysis sequences with CE was also produced as a result of this study.
Abstract:
A web service is a software system that provides a machine-processable interface to other machines over the network using different Internet protocols. Web services are increasingly used in industry to automate different tasks and offer services to a wider audience. The REST architectural style aims at producing scalable and extensible web services using technologies that play well with the existing tools and infrastructure of the web. It provides a uniform set of operations that can be used to invoke a CRUD (create, retrieve, update and delete) interface of a web service. The stateless behavior of the service interface requires that every request to a resource be independent of the previous ones, which facilitates scalability. Automated systems, e.g., hotel reservation systems, provide advanced scenarios for stateful services in which a certain sequence of requests must be followed in order to fulfill the service goals. Designing and developing such services for advanced scenarios under REST constraints requires rigorous approaches capable of creating web services that can be trusted for their behavior. Systems that can be trusted for their behavior can be termed dependable systems. This thesis presents an integrated design, analysis and validation approach that helps the service developer to create dependable and stateful REST web services. The main contribution of this thesis is a novel model-driven methodology to design behavioral REST web service interfaces and their compositions. The behavioral interfaces provide information on which methods can be invoked on a service and on the pre- and post-conditions of these methods. The methodology uses the Unified Modeling Language (UML) as the modeling language, which has a wide user base and mature tools that are continuously evolving.
We have used the UML class diagram and the UML state machine diagram with additional design constraints to provide resource and behavioral models, respectively, for designing REST web service interfaces. These service design models serve as a specification document, and the information presented in them has manifold applications. The service design models also contain information about the time and domain requirements of the service, which can help in requirement traceability, an important part of our approach. Requirement traceability helps in capturing faults in the design models and other elements of the software development environment by tracing the unfulfilled requirements of the service back and forth. Information about service actors is also included in the design models; it is required for authenticating service requests by authorized actors, since not all types of users have access to all the resources. In addition, by following our design approach, the service developer can ensure that the designed web service interfaces are REST compliant. The second contribution of this thesis is the consistency analysis of behavioral REST interfaces. To overcome the inconsistency problem and design errors in our service models, we have used semantic technologies. The REST interfaces are represented in the web ontology language OWL 2, so that they can be part of the semantic web. These interfaces are used with OWL 2 reasoners to check for unsatisfiable concepts, which result in implementations that fail. This work is fully automated thanks to the implemented translation tool and the existing OWL 2 reasoners. The third contribution of this thesis is the verification and validation of REST web services. We have used model checking techniques with the UPPAAL model checker for this purpose.
Timed automata are generated from the UML-based service design models with our transformation tool and verified for basic properties such as deadlock freedom, liveness, reachability and safety. The implementation of a web service is tested using a black-box testing approach. Test cases are generated from the UPPAAL timed automata and, using the online testing tool UPPAAL TRON, the service implementation is validated at runtime against its specifications. Requirement traceability is also addressed in our validation approach, with which we can see which service goals are met and trace back the unfulfilled service goals to detect faults in the design models. A final contribution of the thesis is the implementation of behavioral REST interfaces and service monitors from the service design models. The partial code generation tool creates code skeletons of REST web services with method pre- and post-conditions. The preconditions of the methods constrain the user to invoke the stateful REST service under the right conditions, and the postconditions constrain the service developer to implement the right functionality. The details of the methods can be inserted manually by the developer as required. We do not target complete automation because we focus only on the interface aspects of the web service. The applicability of the approach is demonstrated with a pedagogical example of a hotel room booking service and a relatively complex worked example of a holiday booking service taken from an industrial context. The former presents a simple explanation of the approach, and the latter shows how stateful and timed web services offering complex scenarios and involving other web services can be constructed using our approach.
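The kind of skeleton with method pre- and post-conditions described above can be pictured with a minimal sketch. The resource, method names and status codes below are hypothetical illustrations, not output of the thesis's generation tool:

```python
# Hypothetical sketch of a stateful REST resource skeleton whose methods
# assert preconditions (constraining the client's request sequence) and
# postconditions (constraining the developer's implementation).

class BookingResource:
    def __init__(self):
        self.state = "initial"

    def put_reservation(self, dates):
        # Precondition: a reservation may only be created from the initial state.
        assert self.state == "initial", "412 Precondition Failed"
        self.dates = dates
        self.state = "reserved"
        # Postcondition: the implementation must leave the resource reserved.
        assert self.state == "reserved"
        return 201  # Created

    def post_payment(self, amount):
        # Precondition: payment is only valid for a reserved booking.
        assert self.state == "reserved", "409 Conflict"
        self.amount = amount
        self.state = "confirmed"
        # Postcondition: the booking must end up confirmed.
        assert self.state == "confirmed"
        return 200  # OK
```

Invoking `post_payment` before `put_reservation` trips the precondition, mirroring how the generated contracts force clients to follow the intended request sequence.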
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
The objective of the present study was to validate the transit-time technique for long-term measurements of iliac and renal blood flow in rats. Flow measured with ultrasonic probes was confirmed ex vivo using excised arteries perfused at varying flow rates. An implanted 1-mm probe accurately reproduced different patterns of flow relative to pressure in freely moving rats and accurately quantitated the resting iliac flow value (on average 10.43 ± 0.99 ml/min or 2.78 ± 0.3 ml min-1 100 g body weight-1). The measurements were stable over an experimental period of one week but were affected by probe size (resting flows were underestimated by 57% with a 2-mm probe when compared with a 1-mm probe) and by anesthesia (in the same rats, iliac flow was reduced by 50-60% when compared to the conscious state). Instantaneous changes of iliac and renal flow during exercise and recovery were accurately measured by the transit-time technique. Iliac flow increased instantaneously at the beginning of mild exercise (from 12.03 ± 1.06 to 25.55 ± 3.89 ml/min at 15 s) and showed a smaller increase when exercise intensity increased further, reaching a plateau of 38.43 ± 1.92 ml/min at the 4th min of moderate exercise intensity. In contrast, the exercise-induced reduction of renal flow was smaller and slower, with 18% and 25% decreases at mild and moderate exercise intensities. Our data indicate that transit-time flowmetry is a reliable method for long-term and continuous measurement of regional blood flow at rest and can be used to quantitate the dynamic flow changes that characterize exercise and recovery.
Abstract:
Translation from the German
Abstract:
This thesis concentrates on the validation of the generic thermal hydraulic computer code TRACE against the challenges of the VVER-440 reactor type. The capability of the code to model the VVER-440 geometry and the thermal hydraulic phenomena specific to this reactor design has been examined and demonstrated to be acceptable. The main challenge in VVER-440 thermal hydraulics appeared in the modelling of the horizontal steam generator; the major difficulty here lies not in the code physics or numerics but in the formulation of a representative nodalization structure. Another VVER-440 specialty, the hot leg loop seals, challenges system codes functionally in general, but proved readily representable. Computer code models have to be validated against experiments to achieve confidence in them. When a new computer code is to be used for nuclear power plant safety analysis, it must first be validated against a large variety of different experiments. The validation process has to cover both the code itself and the code input. Uncertainties of different natures are identified in the different phases of the validation procedure and can even be quantified. This thesis presents a novel approach to input model validation and uncertainty evaluation across the different stages of the computer code validation procedure. It also demonstrates that in safety analysis there are inevitably significant uncertainties that are not statistically quantifiable; they need to be, and can be, addressed by other, less simplistic means, ultimately relying on the competence of the analysts and the capability of the community to support the experimental verification of analytical assumptions. This method essentially complements the commonly used uncertainty assessment methods, which usually rely on statistical methods alone.
Abstract:
The purpose of the present study was to translate the Roland-Morris (RM) questionnaire into Brazilian Portuguese and to adapt and validate it. First, three English teachers independently translated the original questionnaire into Brazilian Portuguese and a consensus version was generated. Later, three other translators, blind to the original questionnaire, performed a back-translation. This version was then compared with the original English questionnaire. Discrepancies were discussed and resolved by a panel of three rheumatologists, and the final Brazilian version was established (Brazil-RM). This version was then pretested on 30 chronic low back pain patients consecutively selected from the spine disorders outpatient clinic. In addition to the traditional clinical outcome measures, the Brazil-RM, a 6-point pain scale (from no pain to unbearable pain) with its numerical pain rating scale (PS) (0 to 5), and a visual analog scale (VAS) (0 to 10) were administered twice by one interviewer (1 week apart) and once by an independent interviewer. Spearman's correlation coefficient (SCC) and the intraclass correlation coefficient (ICC) were computed to assess test-retest and interobserver reliability. Cross-sectional construct validity was evaluated using the SCC. In the pretesting session, all questions were well understood by the patients. The mean time of questionnaire administration was 4 min and 53 s. The SCC and ICC were 0.88 (P < 0.01) and 0.94, respectively, for test-retest reliability, and 0.86 (P < 0.01) and 0.95, respectively, for interobserver reliability. The correlation coefficient was 0.80 (P < 0.01) between the PS and the Brazil-RM score and 0.79 (P < 0.01) between the VAS and the Brazil-RM score. We conclude that the Brazil-RM was successfully translated and adapted for application to Brazilian patients, with satisfactory reliability and cross-sectional construct validity.
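The test-retest computation can be sketched minimally (illustrative scores, not the study's data): Spearman's coefficient is the Pearson correlation of the ranked scores from the two administrations.

```python
# Minimal sketch of Spearman's rank correlation for test-retest reliability:
# rank each administration's scores (averaging ties), then compute the
# Pearson correlation of the ranks. All scores below are illustrative.

def ranks(values):
    """Average 1-based ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

test1 = [4, 10, 7, 15, 3, 9]   # RM score, first administration
test2 = [5, 9, 6, 14, 3, 10]   # same patients, one week later
rho = spearman(test1, test2)
```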
Abstract:
The phonological loop is a component of the working memory system specifically involved in the processing and manipulation of limited amounts of information of a sound-based phonological nature. Phonological memory can be assessed by the Children's Test of Nonword Repetition (CNRep) in English speakers but not in Portuguese speakers, due to phonotactic differences between the two languages. The objectives of the present study were: 1) to develop the Brazilian Children's Test of Pseudoword Repetition (BCPR), a Portuguese version of the CNRep, and 2) to validate the BCPR by correlating it with the Auditory Digit Span Test from the Stanford-Binet Intelligence Scale. The BCPR and Digit Span were assessed in 182 children aged 4-10 years: 84 from Minas Gerais State (42 from a rural region) and 98 from the city of São Paulo. Subject age and word length effects were observed, with repetition accuracy declining as a function of the number of syllables in the pseudowords. Correlations between the BCPR and Digit Span forward (r = 0.50; P <= 0.01) and backward (r = 0.43; P <= 0.01) were found, and partial correlation indicated that higher BCPR scores were associated with higher Digit Span scores. The BCPR appears to depend more on schooling, while the Digit Span was more related to development. The results demonstrate that the BCPR is a reliable measure of phonological working memory, similar to the CNRep.
Abstract:
A gravimetric method was evaluated as a simple, sensitive, reproducible, low-cost alternative for quantifying the extent of brain infarct after occlusion of the middle cerebral artery in rats. In ether-anesthetized rats, the left middle cerebral artery was occluded for 1, 1.5 or 2 h by inserting a 4-0 nylon monofilament suture into the internal carotid artery. Twenty-four hours later, the brains were processed for histochemical triphenyltetrazolium chloride (TTC) staining and quantitation of the ischemic infarct. In each TTC-stained brain section, the ischemic tissue was dissected with a scalpel and fixed in 10% formalin at 0°C until its total mass could be estimated. The mass (mg) of the ischemic tissue was weighed on an analytical balance and compared to its volume (mm³), estimated either by plethysmometry using platinum electrodes or by computer-assisted image analysis. Infarct size as measured by the weighing method (mg), reported as a percentage (%) of the affected (left) hemisphere, correlated closely with the volume (mm³, also reported as %) estimated by computerized image analysis (r = 0.88; P < 0.001; N = 10) or by plethysmometry (r = 0.97-0.98; P < 0.0001; N = 41). This degree of correlation was maintained between different experimenters. The method was also sensitive enough to detect the effect of different ischemia durations on infarct size (P < 0.005; N = 23) and the effect of drug treatments in reducing the extent of brain damage (P < 0.005; N = 24). The data suggest that, in addition to being simple and low cost, the weighing method is a reliable alternative for quantifying brain infarct in animal models of stroke.
Abstract:
The objective of the present study was to translate the Kidney Disease Quality of Life - Short Form (KDQOL-SF™ 1.3) questionnaire into Portuguese, to adapt it culturally, and to validate it for the Brazilian population. The KDQOL-SF was translated into Portuguese and back-translated twice into English. Patient difficulties in understanding the questionnaire were evaluated by a panel of experts and resolved. Measurement properties such as reliability and validity were determined by applying the questionnaire to 94 end-stage renal disease patients on chronic dialysis. The Nottingham Health Profile questionnaire, the Karnofsky Performance Scale and the Kidney Disease Questionnaire were administered to test validity. Some activities included in the original instrument were considered incompatible with the activities usually performed by the Brazilian population and were replaced. The mean scores for the 19 components of the KDQOL-SF questionnaire in Portuguese ranged from 22 to 91. The components "Social support" and "Dialysis staff encouragement" had the highest scores (86.7 and 90.8, respectively). The test-retest reliability and the inter-observer reliability of the instrument were evaluated by the intraclass correlation coefficient. The coefficients for both reliability tests were statistically significant for all scales of the KDQOL-SF (P < 0.001), ranging from 0.492 to 0.936 for test-retest reliability and from 0.337 to 0.994 for inter-observer reliability. The Cronbach's alpha coefficient was higher than 0.80 for most components. The Portuguese version of the KDQOL-SF questionnaire proved to be valid and reliable for the evaluation of quality of life of Brazilian patients with end-stage renal disease on chronic dialysis.
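Cronbach's alpha, the internal-consistency statistic reported above, can be sketched as follows (the item scores are illustrative, not the study's data): alpha = k/(k-1) · (1 - sum of item variances / variance of total scores).

```python
# Minimal sketch of Cronbach's alpha for internal consistency.
# items: one inner list of respondent scores per questionnaire item.
# All scores below are illustrative.

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    k = len(items)
    totals = [sum(col) for col in zip(*items)]       # per-respondent totals
    item_var = sum(variance(it) for it in items)     # sum of item variances
    return k / (k - 1) * (1 - item_var / variance(totals))

# 3 items scored by 5 respondents (illustrative):
items = [[3, 4, 4, 5, 2],
         [2, 4, 3, 5, 1],
         [3, 5, 4, 4, 2]]
alpha = cronbach_alpha(items)
```

Values above 0.80, as reported for most KDQOL-SF components, are conventionally read as good internal consistency.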
Abstract:
The objective of the present study was to investigate the psychometric properties and cross-cultural validity of the Beck Depression Inventory (BDI) among ethnic Chinese living in the city of São Paulo, Brazil. The study was conducted on 208 community individuals. Reliability and discriminant analyses were used to test the psychometric properties and validity of the BDI. Principal component analysis was performed to assess the BDI's factor structure for the total sample and by gender. The mean BDI score was lower (6.74, SD = 5.98) than that observed in Western counterparts and showed no gender difference, good internal consistency (Cronbach's alpha = 0.82), and high discrimination of depressive symptoms (75-100%). Factor analysis extracted two factors for the total sample and for each gender: a cognitive-affective dimension and a somatic dimension. We conclude that depressive symptoms can be reliably assessed by the BDI in the Brazilian Chinese population, with validity comparable to that of international studies. Nevertheless, cultural and measurement biases might have influenced the responses of the Chinese subjects.
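The principal component extraction step can be sketched as follows. The responses are synthetic stand-ins for the 208 subjects, with two latent dimensions planted so that a standard retention rule (the Kaiser eigenvalue-greater-than-one criterion, assumed here; the study may have used another rule) recovers a two-factor structure:

```python
# Sketch of principal component extraction on an item correlation matrix,
# illustrating how a cognitive-affective and a somatic dimension could be
# recovered. Data are synthetic; only the procedure is illustrated.
import numpy as np

rng = np.random.default_rng(0)
n = 208
cog = rng.normal(size=n)   # latent cognitive-affective factor
som = rng.normal(size=n)   # latent somatic factor
items = np.column_stack([
    cog + 0.3 * rng.normal(size=n),   # item 1 loads on the cognitive factor
    cog + 0.3 * rng.normal(size=n),   # item 2
    som + 0.3 * rng.normal(size=n),   # item 3 loads on the somatic factor
    som + 0.3 * rng.normal(size=n),   # item 4
])

corr = np.corrcoef(items, rowvar=False)      # 4 x 4 item correlation matrix
eigvals = np.linalg.eigh(corr)[0][::-1]      # eigenvalues, descending
n_factors = int((eigvals > 1.0).sum())       # Kaiser criterion
```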