907 results for Multifactor performance measurement
Abstract:
Design aspects of the Transversally Laminated Anisotropic (TLA) Synchronous Reluctance Motor (SynRM) are studied and the machine's performance is compared with that of the Induction Motor (IM). The SynRM rotor structure is designed and manufactured for the stator of a 30 kW, four-pole, three-phase squirrel-cage induction motor. Both the IM and the SynRM were supplied by a sensorless Direct Torque Controlled (DTC) variable-speed drive. Attention is also paid to estimating the power range in which the SynRM can compete successfully with an induction motor of the same size. A technical comparison of loss reduction between the IM and the SynRM in variable-speed drives is carried out. The Finite Element Method (FEM) is used to analyse the number, location and width of the flux barriers of a multiple-segment rotor, with the aim of achieving a high saliency ratio and a high motor torque. Different FEM calculations for analysing SynRM performance are compared, and the possibility of taking the effect of iron losses into account in FEM is studied. Comparison between the calculated and measured values shows that the design methods are reliable. A new application of the IEEE 112 measurement method is developed and used, in particular, for determining stray-load losses in laboratory measurements. The study shows that, with some special measures, the efficiency of the TLA SynRM is equivalent to that of a high-efficiency IM. The power factor of the SynRM at rated load is lower than that of the IM; at lower partial loads, however, this difference decreases, and the SynRM is likely to achieve a better power factor than the IM. The large rotor inductance ratio of the SynRM allows the rotor position to be estimated accurately, which is very advantageous when designing a rotor-position-sensorless motor drive. Using the FEM-designed multi-layer transversally laminated rotor with damper windings, it is possible to design a directly network-driven motor without degrading the motor efficiency or power factor compared with the performance of the IM.
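For context, the quantities at the centre of this design problem (saliency ratio, torque and power factor) are linked by the standard dq-axis model of a synchronous reluctance machine. The relations below are generic textbook expressions, not formulas quoted from the thesis: the reluctance torque grows with the inductance difference L_d - L_q, and the maximum attainable power factor improves with the saliency ratio.

```latex
% Generic dq-model relations for a synchronous reluctance machine (textbook results,
% not taken from the thesis): p = pole-pair number, i_d, i_q = dq-axis stator currents.
T_e = \frac{3}{2}\, p\,\bigl(L_d - L_q\bigr)\, i_d\, i_q ,
\qquad
\cos\varphi_{\max} = \frac{\xi - 1}{\xi + 1}, \qquad \xi = \frac{L_d}{L_q}
```

This is why a high saliency ratio is sought in the flux-barrier design: it raises both the torque per ampere and the achievable power factor.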
Abstract:
Electronic canopy characterization is an important issue in tree crop management, and ultrasonic and optical sensors are the most widely used for this purpose. The objective of this work was to assess the performance of an ultrasonic sensor under laboratory and field conditions in order to provide reliable estimates of the distance to apple tree canopies. To this end, a methodology was designed to analyze sensor performance with respect to foliage ranging and to interference between adjacent sensors working simultaneously. Results show that the average distance-measurement error of the ultrasonic sensor under laboratory conditions is ±0.53 cm. However, the increased variability in field conditions reduces the accuracy of this kind of sensor when estimating distances to canopies; the average error in such situations is ±5.11 cm. When analyzing interference between adjacent sensors placed 30 cm apart, the average error is ±17.46 cm; when the sensors are separated by 60 cm, the average error is ±9.29 cm. The ultrasonic sensor tested has proven suitable for estimating distances to the canopy in field conditions when the sensors are 60 cm apart or more, and could therefore be used in a system to estimate structural canopy parameters in precision horticulture.
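As a minimal illustration of the kind of ranging-error statistic reported above, the sketch below converts ultrasonic time-of-flight readings into distances and computes the mean absolute error against a known reference distance. The speed of sound, the readings and the function names are assumptions for illustration only, not values or code from the study.

```python
import statistics

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C (temperature-dependent in practice)

def echo_distance_cm(time_of_flight_s: float) -> float:
    """Distance to the target from an ultrasonic echo: the pulse travels out and back."""
    return SPEED_OF_SOUND * time_of_flight_s / 2 * 100  # metres -> centimetres

def mean_absolute_error_cm(measured_cm, reference_cm):
    """Average absolute ranging error, as reported per test condition in the abstract."""
    return statistics.mean(abs(m - r) for m, r in zip(measured_cm, reference_cm))

# Hypothetical example: three echoes against a flat target placed 50 cm away.
readings = [echo_distance_cm(t) for t in (0.00292, 0.00295, 0.00290)]
print(round(mean_absolute_error_cm(readings, [50.0] * 3), 2))  # mean error in cm
```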
Abstract:
Electrical impedance tomography (EIT) allows the measurement of intra-thoracic impedance changes related to cardiovascular activity. As a safe and low-cost imaging modality, EIT is an appealing candidate for non-invasive and continuous haemodynamic monitoring. EIT has recently been shown to allow the assessment of aortic blood pressure via the estimation of the aortic pulse arrival time (PAT). However, finding the aortic signal within EIT image sequences is a challenging task: the signal has a small amplitude and is difficult to locate due to the small size of the aorta and the inherently low spatial resolution of EIT. In order to detect the aortic signal as reliably as possible, our objective was to understand the effect of the EIT measurement settings (electrode belt placement, reconstruction algorithm). This paper investigates the influence of three transversal belt placements and two commonly used difference reconstruction algorithms (Gauss-Newton and GREIT) on the measurement of aortic signals in view of aortic blood pressure estimation via EIT. A magnetic resonance imaging-based three-dimensional finite element model of the haemodynamic bio-impedance properties of the human thorax was created. Two simulation experiments were performed with the aim of (1) evaluating the timing error in aortic PAT estimation and (2) quantifying the strength of the aortic signal in each pixel of the EIT image sequences. Both experiments reveal better performance for images reconstructed with Gauss-Newton (with a noise figure of 0.5 or above) and a belt placement at the height of the heart or higher. According to the noise-free scenarios simulated, the uncertainty in the analysis of the aortic EIT signal is expected to induce blood pressure errors of at least ±1.4 mmHg.
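The aortic PAT mentioned above is, in essence, the delay between a cardiac timing reference and the arrival of the pressure pulse in the aortic region of the EIT image sequence. The following sketch shows the idea on a synthetic signal, using a simple threshold on the pulse upstroke; the signal, sampling rate and threshold are hypothetical, and the paper's actual detection of the aortic signal is considerably more involved.

```python
import numpy as np

def pulse_arrival_time(aortic_signal: np.ndarray, fs: float, r_peak_idx: int,
                       threshold: float = 0.5) -> float:
    """Crude pulse arrival time estimate: delay (s) from the ECG R-peak to the point
    where the aortic impedance pulse first rises to `threshold` of its peak amplitude."""
    beat = aortic_signal[r_peak_idx:]
    beat = beat - beat[0]
    peak = np.max(beat)
    crossing = np.argmax(beat >= threshold * peak)   # index of first sample above threshold
    return crossing / fs

# Hypothetical beat: a smoothed ramp sampled at 50 frames/s, R-peak at index 0,
# pulse foot about 0.13 s after the R-peak.
fs = 50.0
t = np.arange(0, 1, 1 / fs)
signal = np.clip((t - 0.13) / 0.2, 0, 1)
print(pulse_arrival_time(signal, fs, r_peak_idx=0))  # ~0.24 s for this synthetic pulse
```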
Abstract:
Imaging systems have developed rapidly in recent years, and this development will continue in the years to come. Manufacturers advertise their products with claims about the performance quality of their imaging systems, and these claims are often so good that they are never tested in normal use. The main objective of this research is to evaluate the performance quality of two imaging systems: a scanner and a CCD color camera. Optical measurement procedures were planned for this evaluation. A further objective of this research is to evaluate the calibration programs for the camera and the scanner. Suitable measurement targets had to be chosen for evaluating imaging performance; the manufacturers have provided definitions for such targets. The third task of this research is therefore to evaluate and consider how good these measurement targets are.
Abstract:
After the restructuring of the power supply industry, which in Finland took place in the mid-1990s, free competition was introduced for the production and sale of electricity. Nevertheless, natural monopolies are considered the most efficient form of production in the transmission and distribution of electricity, and such companies therefore remained franchised monopolies. To prevent the misuse of the monopoly position and to guarantee the rights of the customers, regulation of these monopoly companies is required. One of the main objectives of the restructuring process has been to increase the cost efficiency of the industry. Simultaneously, demands on service quality are increasing. Therefore, many regulatory frameworks are being, or have been, reshaped so that companies are provided with stronger incentives for efficiency and quality improvements. Performance benchmarking often plays a central role in the practical implementation of such incentive schemes. Economic regulation with performance benchmarking attached to it provides companies with directing signals that tend to affect their investment and maintenance strategies. Since asset lifetimes in electricity distribution are typically many decades, investment decisions have far-reaching technical and economic effects. This doctoral thesis addresses the directing signals of incentive regulation and performance benchmarking in the field of electricity distribution. The theory of efficiency measurement and the most common regulation models are presented. The chief contributions of this work are (1) a new kind of analysis of the regulatory framework, in which the actual directing signals of regulation and benchmarking for electricity distribution companies are evaluated, (2) a methodology and a software tool for analysing the directing signals of regulation and benchmarking in the electricity distribution sector, and (3) an analysis of real-life regulatory frameworks using the developed methodology and further development of the regulation model from the viewpoint of the directing signals. The results of this study have played a key role in the development of the Finnish regulatory model.
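The thesis refers to the theory of efficiency measurement that underlies performance benchmarking. A widely used building block in regulatory benchmarking is Data Envelopment Analysis (DEA); the sketch below solves a generic input-oriented CCR DEA model as a linear program, purely for illustration. The data and function are hypothetical and this is not the benchmarking model analysed or developed in the thesis.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X: np.ndarray, Y: np.ndarray, k: int) -> float:
    """Input-oriented CCR efficiency score of unit k.
    X: (n_inputs, n_units) input matrix, Y: (n_outputs, n_units) output matrix."""
    n_units = X.shape[1]
    c = np.r_[1.0, np.zeros(n_units)]                  # minimise theta
    # Inputs of the composite reference unit must not exceed theta * inputs of unit k.
    A_in = np.hstack([-X[:, [k]], X])
    b_in = np.zeros(X.shape[0])
    # Outputs of the composite reference unit must cover the outputs of unit k.
    A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])
    b_out = -Y[:, k]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out])
    return res.fun

# Hypothetical example: four distribution companies, one input (cost) and two outputs
# (energy delivered, number of customers).
X = np.array([[100.0, 80.0, 120.0, 90.0]])
Y = np.array([[500.0, 450.0, 520.0, 300.0],
              [20.0, 18.0, 25.0, 10.0]])
print([round(dea_ccr_input(X, Y, k), 3) for k in range(4)])
```

A score of 1 marks a company on the efficiency frontier; a score below 1 indicates the proportional input (cost) reduction that the benchmarking deems feasible.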
Abstract:
Centrifugal compressors are widely used, for example, in refrigeration processes, the oil and gas industry, superchargers, and waste water treatment. In this work, five different vaneless diffusers and six different vaned diffusers are investigated numerically. The vaneless diffusers differ only in diffuser width, with four of the geometries having a pinch, i.e. a reduction of the diffuser width, implemented in them. Four of the vaned diffusers have the same vane turning angle but a different number of vanes, and two have different vane turning angles. The flow solver used to solve the flow fields is Finflo, a Navier-Stokes solver. All cases are modelled with Chien's k-ε turbulence model, and selected cases also with the k-ω SST turbulence model. All five vaneless diffusers and three of the vaned diffusers are also investigated experimentally. For each construction, the compressor operating map is measured according to the relevant standards. In addition, the flow fields before and after the diffuser are characterised with static and total pressure, flow angle and total temperature measurements. When the computational results are compared with the measurements, it is evident that the k-ω SST turbulence model predicts the flow fields better. The simulation results indicate that it is possible to improve the efficiency with the pinch; according to the numerical results, the two best geometries are the ones with the most pinch at the shroud. These geometries have approximately 4 percentage points higher efficiency than the unpinched vaneless diffuser. The hub pinch does not seem to offer any major benefit. In general, the pinches make the flow fields before and after the diffuser more uniform. The pinch also seems to improve the impeller efficiency, for two reasons. The main reason is that the pinch decreases the size of the slow-flow and possible backflow region located near the shroud after the impeller. Secondly, the pinches decrease the flow velocity in the tip clearance, leading to a smaller tip leakage flow and therefore a slightly better impeller efficiency. Some of the vaned diffusers also improve the efficiency, by 1-3 percentage points compared with the vaneless unpinched geometry. The measurement results confirm that the pinch is beneficial to the performance of the compressor: the flow fields are more uniform in the pinched cases, and the slow-flow regions are smaller. The peak efficiency is approximately 2 percentage points and the design-point efficiency approximately 4 percentage points higher with the pinched geometries than with the unpinched geometry. According to the measurements, the two best geometries are the ones with the most pinch at the shroud, the case with the pinch only at the shroud being slightly the better of the two. The vaned diffusers also have better efficiency than the vaneless unpinched geometries; however, the pinched cases have even better efficiencies. The vaned diffusers narrow the operating range considerably, whilst the pinch has no significant effect on the operating range.
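When a compressor operating map of the kind mentioned above is built from measured total pressures and total temperatures, the stage efficiency is typically expressed as a total-to-total isentropic efficiency. The function below is a generic textbook form of that calculation with made-up values; it is not necessarily the exact definition or data used in the thesis.

```python
def isentropic_efficiency_tt(p01: float, p02: float, T01: float, T02: float,
                             gamma: float = 1.4) -> float:
    """Total-to-total isentropic efficiency from measured total pressures (Pa)
    and total temperatures (K) at the compressor inlet (01) and outlet (02)."""
    ideal_temp_rise = T01 * ((p02 / p01) ** ((gamma - 1) / gamma) - 1)
    return ideal_temp_rise / (T02 - T01)

# Hypothetical operating point: total pressure ratio 2.0, measured outlet total temperature 372 K.
print(round(isentropic_efficiency_tt(101325.0, 202650.0, 293.0, 372.0), 3))  # ~0.81
```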
Abstract:
Cutting of thick-section stainless steel and mild steel, and medium-section aluminium, using a high-power ytterbium fibre laser was experimentally investigated in this study. Theoretical models of the laser power required for cutting a metal workpiece and of the melt removal rate were also developed. The calculated laser power requirement was correlated with the laser power used for cutting a 10 mm stainless steel workpiece and a 15 mm mild steel workpiece with the ytterbium fibre laser and the CO2 laser. Nitrogen assist gas was used for cutting stainless steel and oxygen for cutting mild steel. It was found that the incident laser power required for cutting at a given cutting speed was lower for fibre laser cutting than for CO2 laser cutting, indicating a higher absorptivity of the fibre laser beam by the workpiece and a higher melting efficiency for the fibre laser beam than for the CO2 laser beam. The difficulty in achieving efficient melt removal during high-speed cutting of the 15 mm mild steel workpiece with oxygen assist gas using the ytterbium fibre laser can be attributed to the high melting efficiency of the ytterbium fibre laser. The calculated melt flow velocity and melt film thickness correlated well with the location of the boundary layer separation point on the 10 mm stainless steel cut edges. An increase in the melt film thickness, caused by deceleration of the melt in the boundary layer by viscous shear forces, results in flow separation. The melt flow velocity increases with increasing assist gas pressure and cut kerf width, resulting in a reduction in the melt film thickness, and the boundary layer separation point moves closer to the bottom of the cut edge. The cut edge quality was examined by visual inspection of the cut samples and by measurement of the cut kerf width, boundary layer separation point, cut edge squareness (perpendicularity) deviation, and cut edge surface roughness as output quality factors. Different regions of cut edge quality in the 10 mm stainless steel and 4 mm aluminium workpieces were defined for different combinations of cutting speed and laser power. Optimization of the processing parameters for a high cut edge quality in 10 mm stainless steel was demonstrated.
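The laser power requirement model referred to above is commonly built on a melt-and-blow energy balance: the absorbed power must melt the volume of material swept out of the kerf per unit time. The expression below is a generic form of such a balance, with conduction losses and the exothermic oxidation energy of oxygen-assisted cutting neglected; it is given for orientation only and is not the exact model developed in the thesis.

```latex
% Generic melt-and-blow power balance (illustrative, not the thesis's model):
% A = absorptivity, w = kerf width, d = sheet thickness, v = cutting speed,
% \rho = density, c_p = specific heat, T_m = melting temperature, T_0 = ambient
% temperature, L_m = latent heat of fusion.
A\, P_{\mathrm{laser}} \;=\; w\, d\, v\, \rho \left[\, c_p\,(T_m - T_0) + L_m \,\right]
```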
Abstract:
More and more of the innovations currently being commercialized exhibit network effects; in other words, the value of using the product increases as more people use the same or compatible products. Although this phenomenon has been the subject of much theoretical debate in economics, marketing researchers have been slow to respond to the growing importance of network effects in new product success. Despite an increase in interest in recent years, there is no comprehensive view of the phenomenon and, therefore, understanding of the dimensions it incorporates is currently incomplete. Furthermore, there is wide dispersion in operationalization, in other words in how network effects are measured, and currently available approaches have various shortcomings that limit their applicability, especially in marketing research. Consequently, little is known today about how these products fare in the marketplace and how they should be introduced in order to maximize their chances of success. Hence, the motivation for this study was driven by the need to increase our knowledge and understanding of the nature of network effects as a phenomenon, and of their role in the commercial success of new products. This thesis consists of two parts. The first part comprises a theoretical overview of the relevant literature and presents the conclusions of the entire study. The second part comprises five complementary, empirical research publications. Quantitative research methods and two sets of quantitative data are utilized. The results of the study suggest that there is a need to update both the conceptualization and the operationalization of the phenomenon of network effects. Furthermore, there is a need for an augmented view of customers' perceived value in the context of network effects, given that the nature of value composition has major implications for the viability of such products in the marketplace. The role of network effects in new product performance is not as straightforward as the existing theoretical literature suggests. The overwhelming result of this study is that network effects do not directly influence product success, but rather enhance or suppress the influence of product introduction strategies. The major contribution of this study lies in conceptualizing the phenomenon of network effects more comprehensively than has been attempted thus far. The study gives an augmented view of the nature of customer value in network markets, which helps to explain why some products thrive in these markets whereas others never catch on. Second, the study discusses shortcomings in the way prior literature has operationalized network effects, suggesting that these limitations can be overcome in the research design. Third, the study provides some much-needed empirical evidence on how network effects, product introduction strategies, and new product performance are associated. In general terms, this thesis adds to our knowledge of how firms can successfully leverage network effects in product commercialization in order to improve market performance.
Abstract:
The objective of this thesis was to study the role of capabilities in purchasing and supply management. To build a pre-understanding of the research topic, the development of purchasing and supply management and the multidimensional, unstructured and complex nature of purchasing and supply management performance were examined in the literature review. In addition, a capability-based purchasing and supply management performance framework was researched and structured for the empirical research. Due to the unstructured nature of the research topic, the empirical research in this study is three-pronged, comprising three different research methods: the Delphi method, semi-structured interviews, and case research. As a result, a purchasing and supply management capability assessment tool was constructed to measure the current level of capabilities and the impact of capabilities on purchasing and supply management performance. The final results indicate that capabilities are enablers of purchasing and supply management performance, and are therefore critical to it.
Abstract:
Quantifying soil evaporation is required in studies of the soil water balance and in applications aiming to improve water use efficiency by crops. The performance of a microlysimeter (ML) for measuring soil evaporation under irrigated and non-irrigated conditions was evaluated. The MLs were constructed from PVC tubes with an inner diameter of 100 mm, a depth of 150 mm and a wall thickness of 2.5 mm. Four MLs were uniformly distributed on the soil surface of two weighing lysimeters kept under bare soil, previously installed at Iapar, in Londrina, PR, Brazil. The lysimeters were 1.4 m wide, 1.9 m long and 1.3 m deep and were operated with and without irrigation. Evaporation measured by the MLs (E ML) was compared with that measured by the lysimeters (E L) during four different periods of the year. Differences between E ML and E L were small for both low and high atmospheric demand and for both irrigated and non-irrigated conditions, which indicates that the ML tested here is suitable for the measurement of soil evaporation.
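For orientation, evaporation in a weighing approach of this kind follows directly from the mass change of the microlysimeter between weighings and its cross-sectional area. The sketch below performs this conversion for the ML geometry described above (100 mm inner diameter); the mass-loss value is hypothetical and the actual measurement protocol of the study is not reproduced here.

```python
import math

ML_INNER_DIAMETER_CM = 10.0          # 100 mm inner diameter, as described in the abstract
WATER_DENSITY_G_PER_CM3 = 1.0

def evaporation_mm(mass_loss_g: float) -> float:
    """Convert a microlysimeter's mass loss between weighings (g) into an evaporation depth (mm)."""
    area_cm2 = math.pi * (ML_INNER_DIAMETER_CM / 2) ** 2
    depth_cm = mass_loss_g / (WATER_DENSITY_G_PER_CM3 * area_cm2)
    return depth_cm * 10.0           # cm -> mm

# Hypothetical reading: 23.5 g lost between two consecutive daily weighings.
print(round(evaporation_mm(23.5), 2))   # ~2.99 mm of soil evaporation
```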
Abstract:
Intellectual assets have attracted continuous attention in the academic field, as they are vital sources of competitive advantage and organizational performance in the contemporary knowledge-intensive business environment. Intellectual capital measurement is addressed quite thoroughly in the accounting literature. However, the purpose of measurement is to support the management of intellectual assets, and this reciprocal relationship between measurement and management has not been comprehensively considered in the literature. The theoretical motivation for this study arose from this paradox: in order to maximise the effectiveness of knowledge management, the two initiatives need to be closely integrated. The research approach of this interventionist case study is constructive. The objective is to develop the case organization's knowledge management and intellectual capital measurement so that they are closely integrated and the measurement supports the management of intellectual assets. The case analysis provides valuable practical insights into the integration and related issues, as the case company is a knowledge-intensive organization in which the know-how of the employees is the central competitive asset; the management and measurement of knowledge are therefore essential for its future success. The results suggest that the case organization is confronting challenges in managing knowledge. In order to appropriately manage knowledge processes and control the related risks, support from intellectual capital measurement is required. However, challenges in measuring intellectual capital, especially knowledge, could be recognized in the organization. By reflecting on the knowledge management situation and the constructed strategy map, a new intellectual capital measurement system was developed for the case organization. The construction of the system, as well as its indicators, contributes to the literature by emphasizing the importance of properly considering the organization's knowledge situation when developing an intellectual capital measurement system.
Abstract:
Corporate events, as an effective part of the marketing communications strategy, seem to be underestimated in Finnish companies. In the rest of Europe and in the USA, investments in events are increasing, and their share of the marketing budget is significant. The growth of the industry may be explained by the numerous advantages and opportunities that events provide for attendees, such as face-to-face marketing, enhancing the corporate image, building relationships, increasing sales, and gathering information. In order to maximize these benefits and the return on investment, specific measurement strategies are required, yet there seems to be a lack of understanding of how event performance should be perceived or evaluated. To address this research gap, this study attempts to describe the perceptions of, and strategies for, evaluating corporate event performance in the Finnish events industry. First, corporate events are discussed in terms of definitions and characteristics, typologies, and their role in marketing communications. Second, different theories on evaluating corporate event performance are presented and analyzed. Third, a conceptual model based on the literature review is presented, which serves as the basis for the empirical research conducted as an online questionnaire. The empirical findings are to a great extent in line with the existing literature, suggesting that there remains a lack of understanding of corporate event performance evaluation and that challenges arise in determining appropriate measurement procedures for it. Setting clear objectives for events is a significant aspect of the evaluation process, since the outcomes of events are usually evaluated against the preset objectives. The respondent companies utilize many of the individual techniques recognized in the theory, such as counting the number of sales leads and delegates. However, some of the measurement tools may require further investments and resources, thus restricting their application, especially in smaller companies. In addition, there seems to be a lack of knowledge of the most appropriate methods for different contexts, taking into account the characteristics of the organizing party as well as the size and nature of the event. The lack of in-house expertise increases the need for third-party service providers to solve problems of corporate event measurement.
Abstract:
The excretion ratio of lactulose/mannitol in urine has been used to assess the extent of malabsorption and the impairment of intestinal permeability. The recovery of lactulose and mannitol in urine was employed to evaluate intestinal permeability in children with and without diarrhea. The lactulose and mannitol probes were measured using high-performance liquid chromatography with pulsed amperometric detection (HPLC-PAD). Two groups of solutions containing 60 µM sugars were prepared: group I consisted of glucosamine, mannitol, melibiose and lactulose, and group II of inositol, sorbitol, glucose and lactose. In the study of intra-experiment variation, a 50 µl sample from each group was submitted to 4 successive determinations. The recovered amounts and retention times of each sugar showed variations of <2% and <1%, respectively, and the estimated recovery was >97%. In the study of inter-experiment variation, 4 independent samples from groups I and II were prepared at the following concentrations: 1.0, 0.3, 0.1, 0.03 and 0.01 mM. The amounts of the sugars recovered varied by <10%, whereas the retention times showed an average variation of <1%. The linear correlation coefficients were >99%. Retention (k'), selectivity (α) and efficiency (N) were used to assess the chromatographic conditions, and all three parameters were in the normal range. Children with diarrhea presented a greater lactulose/mannitol ratio than children without diarrhea.
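For illustration, the two quantities at the core of the method above are simple ratios: the percentage recovery of each sugar, used to validate the HPLC-PAD determinations, and the urinary lactulose/mannitol excretion ratio used as the permeability index. The sketch below shows both with hypothetical numbers; these helper functions are not part of the published method.

```python
def percent_recovery(measured_umol: float, expected_umol: float) -> float:
    """Recovery of a sugar probe, used to validate the HPLC-PAD determination."""
    return 100.0 * measured_umol / expected_umol

def lactulose_mannitol_ratio(lactulose_excreted_pct: float,
                             mannitol_excreted_pct: float) -> float:
    """Urinary lactulose/mannitol excretion ratio used as the permeability index."""
    return lactulose_excreted_pct / mannitol_excreted_pct

# Hypothetical values: 0.35% of the ingested lactulose and 14% of the mannitol recovered in urine,
# and 58.5 of an expected 60 units of a sugar standard recovered in a validation run.
print(round(lactulose_mannitol_ratio(0.35, 14.0), 3))
print(round(percent_recovery(58.5, 60.0), 1))
```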
Abstract:
Although echocardiography has been used in rats, few studies have determined its efficacy for estimating myocardial infarct size. Our objective was to estimate myocardial infarct size and to evaluate anatomic and functional variables of the left ventricle. Myocardial infarction was produced in 43 female Wistar rats by ligation of the left coronary artery. Echocardiography was performed 5 weeks later to measure the left ventricular diameter and transverse area (mean of 3 transverse planes), infarct size (percentage of the arc with infarct on the 3 transverse planes), systolic function by the fractional area change, and diastolic function by mitral inflow parameters. The histologic measurement of myocardial infarct size was similar to the echocardiographic one. Myocardial infarct size ranged from 4.8 to 66.6% when determined by histology and from 5 to 69.8% when determined by echocardiography, with good correlation (r = 0.88; P < 0.05; Pearson correlation coefficient). Left ventricular diameter and mean diastolic transverse area correlated with myocardial infarct size by histology (r = 0.57 and r = 0.78; P < 0.0005). The fractional area change ranged from 28.5 ± 5.6% (large-size myocardial infarction) to 53.1 ± 1.5% (control) and correlated with myocardial infarct size by echocardiography (r = -0.87; P < 0.00001) and histology (r = -0.78; P < 0.00001). The E/A wave ratio of the mitral inflow velocity for animals with large-size myocardial infarction (5.6 ± 2.7) was significantly higher than for all others (control: 1.9 ± 0.1; small-size myocardial infarction: 1.9 ± 0.4; moderate-size myocardial infarction: 2.8 ± 2.3). There was good agreement between the echocardiographic and histologic estimates of myocardial infarct size in rats.
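Two of the echocardiographic indices above reduce to simple arithmetic: the fractional area change (systolic function) and the infarct size expressed as the mean percentage of the endocardial arc over the three transverse planes. The sketch below illustrates both calculations with hypothetical measurements, not data from the study.

```python
import statistics

def fractional_area_change(diastolic_area_cm2: float, systolic_area_cm2: float) -> float:
    """Left ventricular systolic function: percentage change in cavity area over the cycle."""
    return 100.0 * (diastolic_area_cm2 - systolic_area_cm2) / diastolic_area_cm2

def infarct_size_pct(infarct_arc_deg, total_arc_deg: float = 360.0) -> float:
    """Infarct size as the mean percentage of the endocardial arc over the measured planes."""
    return statistics.mean(100.0 * a / total_arc_deg for a in infarct_arc_deg)

# Hypothetical animal: cavity areas of 0.95 and 0.62 cm2; infarcted arcs of 130, 145 and 120
# degrees on the three transverse planes.
print(round(fractional_area_change(0.95, 0.62), 1))      # ~34.7 % fractional area change
print(round(infarct_size_pct([130.0, 145.0, 120.0]), 1)) # ~36.6 % of the arc infarcted
```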
Abstract:
The purpose of this research was to define content marketing and to discover how content marketing performance can be measured, especially on YouTube. Further, the aim was to find out what companies are doing to measure content marketing and what kinds of challenges they face in the process. In addition, preferences concerning the measurement were examined. The empirical part was conducted through multiple-case study and cross-case analysis methods. The qualitative data were collected from four large companies in the Finnish food and drink industry through semi-structured phone interviews. As a result of this research, a new definition of content marketing was derived. It is suggested that return on objective, in this case brand awareness and engagement, be used as the main metric of content marketing performance on YouTube. The major challenge is the nature of the industry, as companies cannot connect the outcome directly to sales.
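Engagement, named above as one of the main metrics, is often operationalized on YouTube as interactions per view. The sketch below shows one such calculation with made-up numbers; it is an illustration of the general idea, not the metric definition used by the interviewed companies.

```python
def engagement_rate(likes: int, comments: int, shares: int, views: int) -> float:
    """One common way to operationalize YouTube engagement: interactions per view, as a percentage."""
    return 100.0 * (likes + comments + shares) / views

# Hypothetical branded-content video.
print(round(engagement_rate(likes=840, comments=65, shares=120, views=52000), 2))  # ~1.97 %
```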