994 results for Software measurement
Abstract:
Software engineering is criticized as not being engineering or a 'well-developed' science at all. Software engineers seem not to know exactly how long their projects will last, what they will cost, and whether the software will work properly after release. Measurements have to be taken in software projects to improve this situation. It is of limited use to collect metrics only after the fact; the values of the relevant metrics have to be predicted, too. The predictions (i.e. estimates) form the basis for proper project management. One of the most painful problems in software projects is effort estimation. It has a clear and central effect on other project attributes like cost and schedule, and on product attributes like size and quality. Effort estimation can be used for several purposes; in this thesis only effort estimation in software projects for project management purposes is discussed. There is a short introduction to measurement issues, and some metrics relevant in the estimation context are presented. Effort estimation methods are covered quite broadly. The main new contribution of this thesis is the new estimation model that has been created. It makes use of the basic concepts of Function Point Analysis but avoids the problems and pitfalls found in that method. It is relatively easy to use and learn. Effort estimation accuracy has improved significantly after taking this model into use. A major innovation related to the new estimation model is the identified need for hierarchical software size measurement. The author of this thesis has developed a three-level solution for the estimation model. All currently used size metrics are static in nature, but this new proposed metric is dynamic: it makes use of the increased understanding of the nature of the work as specification and design work proceeds, and thus 'grows up' along with the software project. Developing the effort estimation model is not possible without gathering and analyzing history data. However, there are many problems with data in software engineering; a major roadblock is the amount and quality of the data available. This thesis shows some useful techniques that have been successful in gathering and analyzing the data needed. An estimation process is needed to ensure that methods are used properly, that estimates are stored, reported, and analyzed properly, and that they are used for project management activities. A higher-level mechanism called a measurement framework is also briefly introduced. The purpose of the framework is to define and maintain a measurement or estimation process; without a proper framework, the estimation capability of an organization declines, and it requires effort even to maintain an achieved level of estimation accuracy. Estimation results over several successive releases are analyzed, and it is clearly seen that the new estimation model works and that the estimation improvement actions have been successful. The calibration of the hierarchical model is a critical activity; an example is shown to shed more light on the calibration and on the model itself. There are also remarks about the sensitivity of the model. Finally, an example of usage is shown.
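As an illustration of the kind of calculation a Function Point Analysis-based model builds on, the sketch below estimates effort from a function-point-style size count multiplied by a calibrated productivity factor. The component weights, counts, and hours-per-FP value are hypothetical placeholders, not the calibrated values or the hierarchical model from the thesis.

```python
# Minimal, hypothetical function-point-style effort estimate.
# Weights follow the classic unadjusted-FP scheme; all numbers are illustrative.
FP_WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4, "files": 10, "interfaces": 7}

def unadjusted_fp(counts: dict) -> int:
    """Size in function points: component counts times their weights."""
    return sum(FP_WEIGHTS[kind] * n for kind, n in counts.items())

def estimate_effort(counts: dict, hours_per_fp: float) -> float:
    """Effort = size (FP) * productivity factor calibrated from history data."""
    return unadjusted_fp(counts) * hours_per_fp

counts = {"inputs": 12, "outputs": 8, "inquiries": 5, "files": 3, "interfaces": 2}
print(f"Size: {unadjusted_fp(counts)} FP")
print(f"Effort: {estimate_effort(counts, hours_per_fp=7.5):.0f} h")
```

The calibration step the abstract emphasizes corresponds to fitting the hours-per-FP factor (and its per-level counterparts in a hierarchical model) against the organization's history data.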
Abstract:
Reaching a consensus in terms of interchangeability and utility (i.e., disease detection/monitoring) of a medical device is the eventual aim of repeatability and agreement studies. The aim of the tolerance and relative utility indices described in this report is to provide a methodology for comparing changes in clinical measurement noise between different populations (repeatability) or measurement methods (agreement), so as to highlight problematic areas. No longitudinal data are required to calculate these indices. Both indices establish a metric of least to most affected across all parameters to facilitate comparison. If validated, these indices may prove useful tools when combining reports and forming the consensus required in the validation process for software updates and new medical devices.
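The report's exact index definitions are not reproduced in the abstract, so the Python sketch below shows only the underlying kind of computation: pooling within-subject variability from repeated measurements and comparing the resulting noise between two populations. The data, the coefficient of repeatability (1.96·√2·Sw, standard Bland-Altman methodology), and the final ratio are illustrative assumptions, not the paper's tolerance or relative utility indices.

```python
import math

def within_subject_sd(repeats: list[list[float]]) -> float:
    """Within-subject SD pooled over subjects, each measured several times --
    a standard building block of repeatability statistics."""
    ss, df = 0.0, 0
    for reps in repeats:
        mean = sum(reps) / len(reps)
        ss += sum((x - mean) ** 2 for x in reps)
        df += len(reps) - 1
    return math.sqrt(ss / df)

# Hypothetical repeated measurements from two populations.
healthy = [[10.1, 10.3], [9.8, 9.9], [10.5, 10.2]]
disease = [[12.0, 13.1], [11.5, 12.6], [12.8, 11.9]]

sw_h, sw_d = within_subject_sd(healthy), within_subject_sd(disease)
print(f"Coefficient of repeatability (healthy): {1.96 * math.sqrt(2) * sw_h:.2f}")
# A simple noise ratio between populations -- illustrative only.
print(f"Noise ratio (disease / healthy): {sw_d / sw_h:.2f}")
```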
Abstract:
BACKGROUND: Lung clearance index (LCI), a marker of ventilation inhomogeneity, is elevated early in children with cystic fibrosis (CF). However, in infants with CF, LCI values are found to be normal, although structural lung abnormalities are often detectable. We hypothesized that this discrepancy is due to inadequate algorithms in the available software package. AIM: Our aim was to challenge the validity of these software algorithms. METHODS: We compared multiple breath washout (MBW) results from the current software algorithms (automatic modus) with refined algorithms (manual modus) in 17 asymptomatic infants with CF and 24 matched healthy term-born infants. The main difference between the two analysis methods lies in the calculation of the molar mass differences that the system uses to define the completion of the measurement. RESULTS: In infants with CF, the refined manual modus revealed clearly elevated LCI above 9 in 8 out of 35 measurements (23%), all of which showed LCI values below 8.3 using the automatic modus (paired t-test comparing the means, P < 0.001). Healthy infants showed normal LCI values with both analysis methods (n = 47, paired t-test, P = 0.79). The most relevant reason for falsely normal LCI values in infants with CF using the automatic modus was that the end-of-test was incorrectly recognized too early during the washout. CONCLUSION: We recommend the use of the manual modus for the analysis of MBW outcomes in infants in order to obtain more accurate results. This will allow appropriate use of infant lung function results for clinical and scientific purposes. Pediatr Pulmonol. 2015; 50:970-977. © 2015 Wiley Periodicals, Inc.
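LCI is conventionally computed as the cumulative expired volume (CEV) needed to wash the end-tidal tracer concentration down to 1/40 of its starting value, divided by the functional residual capacity (FRC). The sketch below implements that definition with a deliberately simplified end-of-test rule (the first breath below threshold), which is exactly the step where, as the study shows, an automatic algorithm can terminate too early; the breath data are hypothetical.

```python
def lung_clearance_index(et_conc, exp_vol, frc, threshold_frac=1/40):
    """LCI = CEV / FRC, where CEV accumulates expired volume breath by
    breath until the end-tidal tracer concentration reaches 1/40 of its
    starting value. Real software applies stricter end-of-test rules
    (e.g., several consecutive breaths below threshold)."""
    target = et_conc[0] * threshold_frac
    cev = 0.0
    for conc, vol in zip(et_conc, exp_vol):
        cev += vol
        if conc <= target:
            return cev / frc
    raise ValueError("washout never reached 1/40 of starting concentration")

# Hypothetical infant data: end-tidal concentrations (%) per breath,
# expired volume 0.05 L per breath, FRC 0.06 L.
et = [4.0, 3.1, 2.4, 1.8, 1.2, 0.7, 0.4, 0.2, 0.09]
print(f"LCI = {lung_clearance_index(et, [0.05] * len(et), frc=0.06):.2f}")
```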
Abstract:
This thesis presents the calibration and comparison of two systems: a machine vision system that uses 3-channel RGB images and a line-scanning spectral system. Calibration is the process of checking and adjusting the accuracy of a measuring instrument by comparing it with standards. For the RGB system, self-calibrating methods for finding various parameters of the imaging device were developed. Color calibration was performed, and the colors produced by the system were compared to the known color values of the target. Software drivers for the Sony Robot were also developed, and a mechanical part to connect a camera to the robot was designed. For the line-scanning spectral system, methods were developed for calibrating the alignment of the system and for measuring the dimensions of the line scanned by the system. Color calibration of the spectral system is also presented.
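One common way to perform such a color calibration (not necessarily the exact method developed in the thesis) is to fit a linear correction matrix that maps the system's measured RGB values onto the known reference values of the target in the least-squares sense:

```python
import numpy as np

# Measured RGB values of a color target and their known reference values.
# The numbers are hypothetical; a real target such as a ColorChecker
# would provide 24 or more patches.
measured = np.array([[110, 52, 48], [60, 120, 70], [55, 60, 140], [200, 200, 195]], float)
reference = np.array([[115, 50, 45], [58, 125, 68], [50, 58, 150], [210, 208, 200]], float)

# Solve measured @ M ~= reference in the least-squares sense.
M, *_ = np.linalg.lstsq(measured, reference, rcond=None)
corrected = measured @ M
rms = np.sqrt(np.mean((corrected - reference) ** 2))
print(f"RMS error after correction: {rms:.2f}")
```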
Abstract:
The goal of this research is to critically analyze current theories and methods of intangible asset evaluation and potentially to develop and test a new methodology based on practical example(s) from the IT industry. With this goal in mind, the main research questions in this paper are: What are the advantages and disadvantages of current practices for measuring intellectual capital or valuing intangible assets? How should intellectual capital be properly measured in IT? The resulting method exhibits a new, unique approach to IC measurement and potentially an even larger field of application. Although this particular research focuses on IT (the Software and Internet services cluster, to be exact), the logic behind the method is applicable within any industry, since the method is designed to be fully compliant with measurement theory and can thus be properly scaled for any application. Building a new method is a difficult and iterative process: in the current iteration the method stands as a theoretical concept rather than a business tool, but even the current concept fully serves its purpose as a benchmarking tool for measuring intellectual capital in the IT industry.
Abstract:
This thesis examines how content marketing is used in B2B customer acquisition and how a content marketing performance measurement system is built and utilized in this context. Literature related to performance measurement, branding, and buyer behavior is examined in the theoretical part in order to identify the elements influencing the design and usage of content marketing performance measurement. A qualitative case study was chosen in order to gain a deep understanding of the phenomenon studied. The case company is a Finnish software vendor that operates in B2B markets and has practiced content marketing for approximately two years. In-depth interviews were conducted with three employees from the marketing department. According to the findings, the infrastructure of a content marketing performance measurement system is based on the target market's decision-making processes, the company's own customer acquisition process, a marketing automation tool, and analytics solutions. The main roles of the content marketing performance measurement system are measuring performance, strategy management, and learning and improvement. Content marketing objectives in the context of customer acquisition are enhancing brand awareness, influencing brand attitude, and lead generation. Both non-financial and financial outcomes are assessed by single phase-specific metrics, phase-specific overall KPIs, and ratings related to lead involvement.
Abstract:
Marketing and finance are both facing challenges in the constantly changing business environment. Finance is challenged to change its role from cost control to value-adding business partner, while marketing needs to be able to demonstrate its accountability, that is, how it contributes to firm performance. Finance is the key partner for marketing in proving its impact, by helping marketing measure its actions; by doing so, finance can also emphasize its business partner role. Little research has been conducted on the relationship between marketing and finance departments. The aim of this study is to investigate how the professional differences between marketing and finance and their forms of cooperation affect marketing performance measurement. Literature on the marketing and finance disciplines, their cooperation, the performance implications of their interface, and the roles of marketing performance measurement, performance measurement systems, and measures was reviewed. This research was conducted as a qualitative case study among senior managers of marketing and finance in a sporting goods company. Data collected through semi-structured interviews, participant observation, and secondary sources were described and classified, and connections were drawn. The results of the study show that the nature of the marketing and finance disciplines has many effects on their cooperation and on performance measurement. Due to the ambiguous nature of marketing, measuring its performance is still seen as a challenge, but digitalization is helping the measurement. It was indicated that marketing and finance professionals need different skillsets in order to perform their roles effectively, and thus cooperation is needed. Marketing performance needs to be measured with both financial and non-financial measures. Both marketing and finance interviewees highlighted the importance of marketing measures over financial measures. Measuring marketing performance comprehensively is seen as a challenge, since marketing and finance cooperation is still shaped by cost control and budget management roles rather than performance measurement. We recognized three constraints affecting this cooperation and performance measurement: people, time, and software. If marketing and finance developed deeper cooperation, they could create a comprehensive performance measurement system that improves organizational performance.
Abstract:
Software product metrics aim at measuring the quality of software. Modularity is an essential factor in software quality. In this work, metrics related to modularity, and especially to the cohesion of modules, are considered. The existing metrics are evaluated, and several new alternatives are proposed. The idea of module cohesion is that a module or a class should consist of related parts. The closely related principle of coupling says that the relationships between modules should be minimized. First, internal cohesion metrics are considered. The relations that are internal to classes are shown to be useless for quality measurement. Second, we consider external relationships for cohesion. A detailed analysis using design patterns and refactorings confirms that external cohesion is a better quality indicator than internal cohesion. Third, motivated by the successes (and problems) of external cohesion metrics, another kind of metric is proposed that represents the quality of the modularity of software. This metric can be applied to refactorings related to classes, resulting in a refactoring suggestion system. To describe the metrics formally, a notation for programs is developed. Because of the recursive nature of programming languages, the properties of programs are most compactly represented using grammars and formal languages. The tools used for metrics calculation are also described.
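For concreteness, the sketch below computes a classic internal cohesion metric of the kind the work evaluates (and finds weak): a Chidamber-Kemerer-style LCOM over the attributes each method uses. The example class is hypothetical.

```python
from itertools import combinations

def lcom(method_attrs: dict[str, set[str]]) -> int:
    """LCOM: method pairs sharing no attributes minus pairs sharing at
    least one, floored at zero. Lower means a more cohesive class."""
    p = q = 0
    for a, b in combinations(method_attrs.values(), 2):
        if a & b:
            q += 1
        else:
            p += 1
    return max(p - q, 0)

# Hypothetical class: 'deposit' and 'withdraw' share 'balance';
# 'audit' touches only 'log', so two of the three pairs share nothing.
print(lcom({"deposit": {"balance"}, "withdraw": {"balance"}, "audit": {"log"}}))  # -> 1
```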
Abstract:
Lichenologists and users of lichenometry have long used calipers or photogrammetry to measure the growth of crustose lichens. Now, digital photography and popular computer software provide methodological alternatives. This thesis developed and tested a new methodology for tracking change and growth of the lichen Rhizocarpon geographicum. Adobe Photoshop CS3 Extended software and a photographic time series (1996, 2003, 2006, and 2007) were used to measure thallus diameter, area, prothallus width, and areolae area in 115 small R. geographicum thalli (0.53-1049.88 mm²). Measures of 8 diameters per thallus showed that change in diameter was highly variable and is a weak index of growth. Thallus area was a reliable measure of growth (power correlation, R² = 0.89). Rapid, highly irregular growth occurred in small thalli (<30 mm²), and steady, uniform growth occurred in larger thalli (>30 mm²). This new methodology is tedious but can potentially generate accurate and precise measures for even the tiniest of lichens.
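The core measurement step, counting selected pixels and converting them to physical area, can be sketched as follows; the random image and threshold stand in for the Photoshop selection work described above, and the 0.05 mm/px scale is a hypothetical value.

```python
import numpy as np

def thallus_area_mm2(mask: np.ndarray, mm_per_px: float) -> float:
    """Area of a segmented thallus: pixel count times the squared scale.
    The mask is boolean (True = thallus pixels)."""
    return float(mask.sum()) * mm_per_px ** 2

# Stand-in for a segmented photograph: dark pixels treated as thallus.
img = np.random.default_rng(0).integers(0, 256, (400, 400), dtype=np.uint8)
mask = img < 60
print(f"{thallus_area_mm2(mask, mm_per_px=0.05):.2f} mm^2")
```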
Abstract:
STUDY DESIGN: Concurrent validity between postural indices obtained from digital photographs (two-dimensional [2D]), surface topography imaging (three-dimensional [3D]), and radiographs. OBJECTIVE: To assess the validity of a quantitative clinical postural assessment tool of the trunk based on photographs (2D), as compared to a surface topography system (3D) and to indices calculated from radiographs. SUMMARY OF BACKGROUND DATA: To monitor progression of scoliosis or change in posture over time in young persons with idiopathic scoliosis (IS), noninvasive and nonionizing methods are recommended. In a clinical setting, posture can be quite easily assessed by calculating key postural indices from photographs. METHODS: Quantitative postural indices of 70 subjects aged 10 to 20 years with IS (Cobb angle 15°-60°) were measured from photographs and from 3D trunk surface images taken in the standing position. Shoulder, scapula, trunk list, pelvis, scoliosis, and waist angle indices were calculated with specially designed software. Frontal and sagittal Cobb angles and trunk list were also calculated on radiographs. The Pearson correlation coefficient (r) was used to estimate the concurrent validity of the 2D clinical postural tool of the trunk with indices extracted from the 3D system and with those obtained from radiographs. RESULTS: The correlation between 2D and 3D indices was good to excellent for shoulder, pelvis, trunk list, and thoracic scoliosis (0.81 < r < 0.97; P < 0.01) but fair to moderate for thoracic kyphosis, lumbar lordosis, and thoracolumbar or lumbar scoliosis (0.30 < r < 0.56; P < 0.05). The correlation between 2D and radiographic spinal indices was fair to good (-0.33 to -0.80 with Cobb angles and 0.76 for trunk list; P < 0.05). CONCLUSION: This tool will facilitate clinical practice by monitoring trunk posture among persons with IS. Further, it may contribute to a reduction in the use of radiographs to monitor scoliosis progression.
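The validity statistic itself is straightforward to reproduce; a minimal sketch with hypothetical paired shoulder-angle indices:

```python
import numpy as np

# Hypothetical paired shoulder-angle indices (degrees) for six subjects,
# measured from photographs (2D) and surface topography (3D).
angle_2d = np.array([3.1, 5.4, 2.2, 7.8, 4.0, 6.1])
angle_3d = np.array([3.4, 5.1, 2.0, 8.2, 4.3, 5.8])

r = np.corrcoef(angle_2d, angle_3d)[0, 1]
print(f"Pearson r = {r:.2f}")  # values near 1 indicate strong concurrent validity
```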
Abstract:
Software systems are progressively being deployed in many facets of human life, and the failure of such systems has a varied impact on their customers. The fundamental aspect that supports a software system is a focus on quality. Reliability describes the ability of a system to function in a specified environment for a specified period of time, and it is used to objectively measure quality. Evaluating the reliability of a computing system involves computing both hardware and software reliability. Most earlier works focused on software reliability with no consideration of the hardware parts, or vice versa. However, a complete estimation of the reliability of a computing system requires these two elements to be considered together, and thus demands a combined approach. The present work focuses on this and presents a model for evaluating the reliability of a computing system. The method involves identifying the failure data for hardware and software components and building a model based on it to predict reliability. To develop such a model, focus is given to systems based on Open Source Software, since there is an increasing trend towards its use and only a few studies have been reported on modeling and measuring the reliability of such products. The present work includes a thorough study of the role of Free and Open Source Software, an evaluation of reliability growth models, and an integrated model for predicting the reliability of a computational system. The developed model has been compared with existing models, and its usefulness is discussed.
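A minimal sketch of such a combined evaluation, assuming independent hardware and software failure processes, a constant hardware failure rate, and a Goel-Okumoto reliability growth model for the software (one of the classic models a study like this would evaluate); all parameter values are illustrative:

```python
import math

def hw_reliability(x: float, failure_rate: float) -> float:
    """Hardware reliability over a mission of length x under a constant
    failure rate: R(x) = exp(-lambda * x)."""
    return math.exp(-failure_rate * x)

def sw_reliability(t: float, x: float, a: float, b: float) -> float:
    """Software reliability from the Goel-Okumoto NHPP model:
    R(x | t) = exp(-(m(t + x) - m(t))), with m(t) = a * (1 - exp(-b * t))."""
    m = lambda u: a * (1 - math.exp(-b * u))
    return math.exp(-(m(t + x) - m(t)))

# After t hours of testing, reliability over the next x hours is taken
# as the product of the two, assuming independence.
t, x = 1000.0, 10.0
r = hw_reliability(x, failure_rate=1e-4) * sw_reliability(t, x, a=120.0, b=0.002)
print(f"System reliability over the next {x:.0f} h: {r:.3f}")
```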
Abstract:
This thesis project is part of the all-round automation of the production of the concentrating solar PV/T system Absolicon X10. ABSOLICON Solar Concentrator AB has invented and started production of this promising concentrating solar system. The aims of this thesis project are designing, assembling, calibrating, and putting into operation an automatic measurement system intended to evaluate the shape of concentrating parabolic reflectors. On the basis of the requirements of the company administration and the needs of the real production process, the operating conditions for the Laser-testing rig were formulated, and the basic concept of using laser radiation was defined. As the first step, the overall design of the whole system was made and its division into parts was defined. After preliminary simulations, the function and operating conditions of all the parts were formulated. In the next steps, the detailed design of all the parts was carried out. Most components were ordered from respective companies; some of the mechanical components were made in the company workshop. All parts of the Laser-testing rig were assembled and tested. The software that controls the Laser-testing rig was created in LabVIEW; to tune and test it, a special simulator was designed and assembled. When all parts were assembled into the complete system, the Laser-testing rig was tested, calibrated, and tuned. In the workshop of Absolicon AB, trial measurements were conducted, and the Laser-testing rig was installed in the production line at the plant in Soleftea.
Abstract:
This thesis project is part of research conducted in the solar industry. ABSOLICON Solar Concentrator AB has invented and started production of the promising concentrating solar system Absolicon X10. The aims of this thesis project are designing, assembling, calibrating, and putting into operation an automatic measurement system intended to evaluate the distribution of the density of solar radiation in the focal line of concentrating parabolic reflectors, and to measure radiation from an artificial light source serving as a calibration and testing tool. On the basis of the requirements of the company's administration and the needs of designing the concentrating reflectors, the operating conditions for the Sun-Walker were formulated. As the first step, the overall design of the whole system was made and its division into parts was specified. After preliminary simulations, the functions and operating conditions of all the parts were formulated. In the next steps, the detailed design of all the parts was made. Most components were ordered from respective companies; some of the mechanical components were made in the company workshop. All parts of the Sun-Walker were assembled and tested. The software, which controls the Sun-Walker and conducts measurements of solar irradiation, was created in LabVIEW; to tune and test it, a special simulator was designed and assembled. When all parts were assembled into the complete system, the Sun-Walker was tested, calibrated, and tuned.
Abstract:
Bergkvist insjön AB is a sawmill yard capable of producing 350,000 cubic meters of timber every year, which requires a lot of internal resources. Sawmill operations can be classified as unloading, sorting, storage, and production of timber. Trucks arrive at the company at random and have to be unloaded and sent back as early as possible to avoid a queue of trucks, which creates a problem for the truck owners. The sawmill yard operates two log stackers that perform several tasks, including transporting logs from the trucks to the measurement station, where the logs are sorted into classes and dropped into pockets; from the pockets to the sorted timber yard, where they are stored; and finally from there to the sawmill for final processing. The main issue to be addressed is the queue of trucks waiting to be unloaded, a problem for both the sawmill and the truck owners; given the huge production volume, it is certain that handling of resources is a top priority. A key challenge in handling resources is unloading the trucks and finding a way to optimize the use of internal resources. To address this problem, I experimented with different ways of using the internal resources and designed different cases. In case 1, both log stackers work on the sawmill and the measurement station; the objective of this case is to keep the sawmill and the measurement station working all the time. In case 2, the work is divided between the two log stackers: one log stacker works on the sawmill and pocket_control, and the second works on the measurement station and the trucks. In case 3, only one log stacker works on all the agents; this case was designed to reduce the cost of production. As the experiment cannot be done in real time due to operational cost, simulation is used. A preliminary investigation of the simulation results suggests that case 2 is the best option, as it reduced the waiting time of trucks considerably compared with the other cases and showed a 50% improvement in the use of internal resources.
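The thesis's actual simulation model is not reproduced here, but a toy queueing sketch along the same lines shows how the number of log stackers assigned to unloading affects truck waiting time; the arrival and unloading times are hypothetical.

```python
import heapq
import random

def avg_truck_wait(n_trucks: int, mean_gap: float, mean_unload: float,
                   n_stackers: int, seed: int = 1) -> float:
    """Toy model: trucks arrive at random and are unloaded by whichever
    log stacker becomes free first; returns the mean wait per truck."""
    rng = random.Random(seed)
    free_at = [0.0] * n_stackers          # when each stacker is next idle
    heapq.heapify(free_at)
    t = total_wait = 0.0
    for _ in range(n_trucks):
        t += rng.expovariate(1 / mean_gap)        # next truck arrives
        start = max(t, heapq.heappop(free_at))    # earliest free stacker
        total_wait += start - t
        heapq.heappush(free_at, start + rng.expovariate(1 / mean_unload))
    return total_wait / n_trucks

for c in (1, 2):
    print(f"{c} stacker(s): mean wait {avg_truck_wait(2000, 12.0, 10.0, c):.1f} min")
```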