872 results for New Venture Teams, Homophily, Performance, Conflict
Abstract:
The main objective of this study was to examine what kinds of investment strategies the leading European and North American pulp and paper industry (PPI) companies used in 1991-2003, and how the selected strategies affected their performance. The investment strategies were categorised into three classes: mergers and acquisitions, investments in new capacity, and investments in existing capacity. The results showed that mergers and acquisitions represented the largest share of total investments in 1991-2003, followed by investments in existing capacity. PPI companies changed investment strategies over time by increasing the share of mergers and acquisitions, which reduced investments in new capacity, especially among North American companies. According to the results, good asset quality and investments in new and existing capacity provided better profitability than often expensive acquisitions. Capacity decreases also had a positive impact on profitability. Average asset quality and profitability were higher among European companies. The study concluded that in the long term, capital expenditure levels should be limited by the available value-creating investment opportunities, not by the ratio of capital expenditure to depreciation.
Abstract:
Recent advances in machine learning increasingly enable the automatic construction of computer-assisted methods that have been difficult or laborious to program by hand. Tasks for which such tools are needed arise in many areas, here especially in bioinformatics and natural language processing. Machine learning methods may not work satisfactorily if they are not appropriately tailored to the task in question, but their learning performance can often be improved by exploiting deeper insight into the application domain or the learning problem at hand. This thesis considers developing kernel-based learning algorithms that incorporate this kind of prior knowledge of the task in an advantageous way. Moreover, computationally efficient algorithms for training the learning machines for specific tasks are presented. In kernel-based learning, prior knowledge is often incorporated by designing appropriate kernel functions; another well-known way is to develop cost functions that fit the task under consideration. For disambiguation tasks in natural language, we develop kernel functions that take into account the positional information and the mutual similarities of words, and show that the use of this information significantly improves the disambiguation performance of the learning machine. Further, we design a new cost function that is better suited to information retrieval and to more general ranking problems than the cost functions designed for regression and classification. We also consider other applications of kernel-based learning algorithms, such as text categorization and pattern recognition in differential display. We develop computationally efficient algorithms for training the considered learning machines with the proposed kernel functions, design a fast cross-validation algorithm for the regularized least-squares type of learning algorithm, and propose an efficient version of the regularized least-squares algorithm that can be used together with the new cost function for preference learning and ranking tasks. In summary, we demonstrate that the incorporation of prior knowledge is possible and beneficial, and that novel advanced kernels and cost functions can be used in algorithms efficiently.
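As an illustration of the learning machines discussed above, the following is a minimal Python/NumPy sketch of kernel regularized least-squares (RLS), together with the closed-form leave-one-out shortcut that makes cross-validation fast for this family of methods. The Gaussian kernel and all parameter names are illustrative assumptions, not the thesis's own task-specific kernels or code.

import numpy as np

def gaussian_kernel(X1, X2, gamma=0.1):
    # Illustrative kernel; the thesis designs task-specific kernels instead.
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def rls_fit(X, y, lam=1.0):
    # Dual RLS solution: solve (K + lam*I) alpha = y.
    K = gaussian_kernel(X, X)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def rls_predict(X_train, alpha, X_new):
    # Prediction is a kernel expansion over the training points.
    return gaussian_kernel(X_new, X_train) @ alpha

def rls_loo_residuals(X, y, lam=1.0):
    # Closed-form leave-one-out residuals e_i = alpha_i / (G^-1)_ii,
    # with G = K + lam*I: one matrix inverse instead of n retrainings,
    # the kind of shortcut behind fast cross-validation for RLS.
    G_inv = np.linalg.inv(gaussian_kernel(X, X) + lam * np.eye(len(X)))
    alpha = G_inv @ y
    return alpha / np.diag(G_inv)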
Abstract:
After the restructuring of the power supply industry, which in Finland took place in the mid-1990s, free competition was introduced for the production and sale of electricity. Nevertheless, in the transmission and distribution of electricity, natural monopolies are found to be the most efficient form of production, and such companies therefore remained franchised monopolies. To prevent misuse of the monopoly position and to guarantee the rights of customers, regulation of these monopoly companies is required. One of the main objectives of the restructuring process has been to increase the cost efficiency of the industry; simultaneously, demands on service quality are increasing. Therefore, many regulatory frameworks are being, or have been, reshaped so that companies are given stronger incentives for efficiency and quality improvements. Performance benchmarking in many cases plays a central role in the practical implementation of such incentive schemes. Economic regulation with performance benchmarking attached to it provides companies with directing signals that tend to affect their investment and maintenance strategies. Since asset lifetimes in electricity distribution are typically many decades, investment decisions have far-reaching technical and economic effects. This doctoral thesis addresses the directing signals of incentive regulation and performance benchmarking in the field of electricity distribution. The theory of efficiency measurement and the most common regulation models are presented. The chief contributions of this work are (1) a new kind of analysis of the regulatory framework, evaluating the actual directing signals of regulation and benchmarking for electricity distribution companies, (2) a methodology and a software tool for analysing these directing signals in the electricity distribution sector, and (3) an analysis of real-life regulatory frameworks with the developed methodology, together with a further-developed regulation model from the viewpoint of the directing signals. The results of this study have played a key role in the development of the Finnish regulatory model.
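The abstract does not name a particular efficiency-measurement technique, but data envelopment analysis (DEA) is the workhorse benchmarking method in electricity distribution regulation; the sketch below (an assumption for illustration, not the thesis's tool) solves the input-oriented CCR model for one company with SciPy, using made-up inputs and outputs.

import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, o):
    # Input-oriented CCR efficiency of unit o.
    # min theta  s.t.  X.T @ lam <= theta * x_o,  Y.T @ lam >= y_o,  lam >= 0
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(1 + n)
    c[0] = 1.0                                    # variables: [theta, lam_1..lam_n]
    A_in = np.hstack([-X[o].reshape(-1, 1), X.T])   # inputs:  X.T@lam - theta*x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])     # outputs: -Y.T@lam <= -y_o
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[o]]),
                  bounds=[(None, None)] + [(0, None)] * n,
                  method="highs")
    return res.x[0]                               # efficiency score in (0, 1]

# Toy data: 4 companies, inputs = [opex, network length], output = [energy delivered]
X = np.array([[100., 50.], [120., 60.], [80., 55.], [90., 45.]])
Y = np.array([[1000.], [1100.], [900.], [950.]])
scores = [dea_ccr_input(X, Y, o) for o in range(len(X))]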
Abstract:
The primary objective is to identify the critical factors that have a natural impact on the performance measurement system. It is important to make correct decisions about measurement systems in a complex business environment, as the performance measurement system is intertwined with highly complex, non-linear factors. The Six Sigma methodology is seen as one potential approach at every organisational level; it is linked to performance and financial measurement as well as to the analytical thinking on which the management viewpoint depends. The complexity of such systems is connected to the customer relationship study. The primary outcome is a new, well-defined performance measurement structure, supported by an analytical multifactor system. These critical factors should at the same time be seen as a business innovation opportunity. This master's thesis is divided into two theoretical parts; the empirical part combines action-oriented and constructive research approaches with an empirical case study. The secondary objective is to seek a competitive advantage factor using a new analytical tool and Six Sigma thinking. Process and product capabilities are linked to the contribution of the complex system, and the critical barriers are identified by the performance measurement system. The secondary outcome is product and process cost efficiency, achieved through the advantage of management. The performance measurement potential is related to different productivity analyses, and productivity can be seen as an essential part of the competitive advantage factor.
Abstract:
The Cherenkov light flashes produced by extensive air showers are very short in time. A high-bandwidth, fast-digitizing readout can therefore minimize the influence of the background from the light of the night sky and improve the performance of Cherenkov telescopes. The time structure of the Cherenkov image can further be used in single-dish Cherenkov telescopes as an additional parameter to reduce the background from unwanted hadronic showers. We present an analysis method that makes use of this timing information and the resulting improvement in the performance of the MAGIC telescope (especially after the upgrade with an ultra-fast 2 GSamples/s digitization system in February 2007). The use of timing information in the analysis of the new MAGIC data reduces the background by a factor of two, which in turn results in an improvement of about a factor of 1.4 in the flux sensitivity to point-like sources, as tested on observations of the Crab Nebula.
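The quoted factor of 1.4 follows directly from the factor-of-two background reduction: in the background-dominated regime, point-source flux sensitivity scales as the signal over the square root of the background, so

S \propto \frac{N_{\mathrm{signal}}}{\sqrt{N_{\mathrm{bkg}}}}
\quad\Rightarrow\quad
\frac{S_{\mathrm{new}}}{S_{\mathrm{old}}} = \sqrt{\frac{N_{\mathrm{bkg,old}}}{N_{\mathrm{bkg,new}}}} = \sqrt{2} \approx 1.4 .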
Abstract:
Aim: The aim of this study was to test different modelling approaches, including a new framework, for predicting the spatial distribution of richness and composition of two insect groups. Location: The western Swiss Alps. Methods: We compared two community modelling approaches: the classical method of stacking binary predictions obtained from individual species distribution models (binary stacked species distribution models, bS-SDMs), and various implementations of a recent framework (spatially explicit species assemblage modelling, SESAM) based on four steps that integrate the different drivers of the assembly process in a unique modelling procedure. We used: (1) five methods to create bS-SDM predictions; (2) two approaches for predicting species richness, by summing individual SDM probabilities or by modelling the number of species (i.e. richness) directly; and (3) five different biotic rules based either on ranking probabilities from SDMs or on community co-occurrence patterns. Combining these various options resulted in 47 implementations for each taxon. Results: Species richness of the two taxonomic groups was predicted with good accuracy overall, and in most cases bS-SDM did not produce a biased prediction exceeding the actual number of species in each unit. In predicting community composition, bS-SDM also often yielded the best evaluation score. Where bS-SDM performed poorly (i.e. when it overestimated richness), the SESAM framework improved predictions of species composition. Main conclusions: Our results differed from previous findings using community-level models. First, we show that overprediction of richness by bS-SDM is not a general rule, highlighting the relevance of producing good individual SDMs to capture the ecological filters that are important for the assembly process. Second, we confirm the potential of SESAM when richness is overpredicted by bS-SDM; limiting the number of species for each unit and applying biotic rules (here using the ranking of SDM probabilities) can improve predictions of species composition.
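A minimal sketch of the two richness predictions compared above: thresholding each SDM and stacking the binaries (bS-SDM) versus summing raw probabilities, plus a SESAM-style composition step that ranks species by SDM probability. The probability matrix and threshold are illustrative assumptions, not the study's data.

import numpy as np

# p[i, j]: SDM occurrence probability of species j at site i (toy values)
p = np.array([[0.9, 0.2, 0.7],
              [0.4, 0.6, 0.1],
              [0.8, 0.8, 0.5]])

# bS-SDM richness: binarize each species, then stack (sum) the binaries
richness_bssdm = (p >= 0.5).sum(axis=1)    # -> [2, 1, 3]

# Probability-based richness: sum raw SDM probabilities per site
richness_prob = p.sum(axis=1)              # -> [1.8, 1.1, 2.1]

# SESAM-style composition: cap each site at its predicted richness and
# keep the species with the highest SDM probabilities (ranking biotic rule)
cap = np.round(richness_prob).astype(int)
composition = np.zeros(p.shape, dtype=bool)
for i, k in enumerate(cap):
    composition[i, np.argsort(p[i])[::-1][:k]] = True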
Abstract:
Coxiella burnetii and members of the genus Rickettsia are obligate intracellular bacteria. Since cultivation of these organisms requires dedicated techniques, their diagnosis usually relies on serological or molecular biology methods. Immunofluorescence is considered the gold standard for detecting antibody reactivity towards these organisms. Here, we assessed the performance of a new automated epifluorescence immunoassay (InoDiag) to detect IgM and IgG against C. burnetii, Rickettsia typhi and Rickettsia conorii. A total of 213 sera were tested with the InoDiag assay: 63 samples from Q fever, 20 from spotted fever rickettsiosis, 6 from murine typhus and 124 controls. InoDiag results were compared to micro-immunofluorescence. For acute Q fever, the sensitivity of phase 2 IgG was only 30% with a cutoff of 1 arbitrary unit (AU). In patients with acute Q fever and positive IF IgM, sensitivity reached 83% with the same cutoff. Sensitivity was 100% for chronic Q fever and 65% for past Q fever. Sensitivities for Mediterranean spotted fever and murine typhus were 91% and 100%, respectively. Both assays exhibited good specificity in the control groups, ranging from 79% in sera from patients with unrelated diseases or EBV positivity to 100% in sera from healthy patients. In conclusion, the InoDiag assay exhibits excellent performance for the diagnosis of chronic Q fever but very low IgG sensitivity for acute Q fever, likely due to low reactivity of the phase 2 antigens present on the glass slide; this defect is partially compensated by the detection of IgM. Because it exhibits a good negative predictive value, the InoDiag assay is valuable for ruling out chronic Q fever. For the diagnosis of rickettsial diseases, the sensitivity of the InoDiag method is similar to that of conventional immunofluorescence.
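For reference, the quantities reported above are related by the standard definitions below (TP, FP, TN and FN counted against the micro-immunofluorescence gold standard); these are textbook formulas, not InoDiag-specific:

\mathrm{sensitivity} = \frac{TP}{TP + FN}, \qquad
\mathrm{specificity} = \frac{TN}{TN + FP}, \qquad
\mathrm{NPV} = \frac{TN}{TN + FN}.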
Abstract:
OBJECTIVE: To review and update the conceptual framework, indicator content and research priorities of the Organisation for Economic Co-operation and Development's (OECD) Health Care Quality Indicators (HCQI) project, after a decade of collaborative work. DESIGN: A structured assessment was carried out using a modified Delphi approach, followed by a consensus meeting, to assess the suite of HCQIs for international comparisons, agree on revisions to the original framework and set priorities for research and development. SETTING: International group of countries participating in OECD projects. PARTICIPANTS: Members of the OECD HCQI expert group. RESULTS: A reference matrix, based on a revised performance framework, was used to map and assess all seventy HCQIs routinely calculated by the OECD expert group. A total of 21 indicators were agreed to be excluded, owing to concerns over (i) relevance, (ii) international comparability, particularly where heterogeneous coding practices might induce bias, (iii) feasibility, when the number of countries able to report was limited and the added value did not justify sustained effort, and (iv) actionability, for indicators that were unlikely to improve on the basis of targeted policy interventions. CONCLUSIONS: The revised OECD framework for HCQIs represents a new milestone in a long-standing international collaboration among a group of countries committed to building common ground for performance measurement. The expert group believes that the continuation of this work is paramount to provide decision makers with a validated toolbox to act directly on quality improvement strategies.
Abstract:
This study aimed to analyze and assess the use and perception of electronic health records (EHRs) by nurses. The study sample included 113 nurses from different shifts of primary health facilities in Catalonia, Spain, serving adult as well as pediatric outpatients and using EHRs throughout the year 2010. A majority of the sample (87.5%) were women and 12.5% were men. The average age was 44.27 years and the average time working in primary healthcare was 47.15 months. A majority (80.4%) had received specific training in the use of the EHR and 19.6% had not. Use of the application required on-site technical support (mean: 3.42), and respondents considered it necessary to learn more about the performance of the application (mean: 3.50). There was no statistically significant linear relationship between nurses' average ratings of the EHR and age (r = -0.002, p-value = 0.984). Ratings did vary significantly with length of EHR use (r = -0.304, p-value = 0.00): the longer a nurse had used the EHR, the greater the satisfaction shown. There were no significant differences between nurses' perceptions of the EHR by gender (t = -0.421, p-value = 0.675). Nurses assessed the contribution of the EHR to their daily nursing care work as positive (average score: 2.55/5). Considering that the usability of the EHR is assessed as satisfactory, the results show that training must also be taken into account, and they underline the need for on-site technical support during the EHR implementation process. In this way, nurses' positive perception of information and communication technology in general, and of the EHR in particular, may be increased.
Abstract:
PURPOSE: We aimed to (a) introduce a new Test to Exhaustion Specific to Tennis (TEST) and compare performance (test duration) and physiological responses with those obtained during the 20-m multistage shuttle test (MSST), and (b) determine to what extent those variables correlate with performance level (tennis competitive ranking) for both test procedures. METHODS: Twenty-seven junior players (8 males, 19 females), members of the national teams of the French Tennis Federation, completed the MSST and TEST, the latter including elements of the game (ball hitting, intermittent activity, lateral displacement), in randomized order. Cardiorespiratory responses were compared at submaximal (respiratory compensation point) and maximal loads between the two tests. RESULTS: At the respiratory compensation point, oxygen uptake (50.1 +/- 4.7 vs. 47.5 +/- 4.3 mL.min-1.kg-1, p = 0.02), but not minute ventilation or heart rate, was higher for TEST than for MSST. However, load increment and physiological responses at exhaustion did not differ between the two tests. Players' ranking correlated negatively with oxygen uptake measured at submaximal and maximal loads for both TEST (r = -0.41; p = 0.01 and r = -0.55; p = 0.004) and MSST (r = -0.38; p = 0.05 and r = -0.51; p = 0.1). CONCLUSION: TEST provides a tennis-specific assessment of aerobic fitness and may be used to prescribe aerobic exercise in a context more appropriate to the game than MSST. The results also indicate that VO2 values at both submaximal and maximal loads reached during TEST and MSST are moderate predictors of players' competitive ranking.
Abstract:
Autologous blood transfusion (ABT) is an efficient way to increase sporting performance. It is also the most challenging doping method to detect. At present, individual follow-up of haematological variables via the athlete biological passport (ABP) is used to detect it. Quantification of a novel hepatic peptide called hepcidin may offer a new alternative for detecting ABT. In this prospective clinical trial, healthy subjects received a saline injection for the control phase, after which they donated blood that was stored and then transfused 36 days later. The impact of ABT on hepcidin, as well as on haematological parameters, iron metabolism and inflammation markers, was investigated. Blood transfusion had a particularly marked effect on hepcidin concentrations compared with the other biomarkers, including the haematological variables: at 12 hours and 1 day after blood reinfusion, hepcidin concentrations rose seven- and fourfold, respectively, whereas no significant change was observed in the control phase. Hepcidin quantification is a cost-effective measurement that could be used within an "ironomics" strategy to improve the detection of ABT. Am. J. Hematol. 91:467-472, 2016.
Abstract:
High-resolution mass spectrometry (HRMS) has traditionally been associated with qualitative and research analysis, and triple-quadrupole MS (QQQ-MS) with quantitative and routine analysis. This view is now being challenged, and for this reason we have evaluated the quantitative LC-MS performance of a new high-resolution mass spectrometer, a Q-Orbitrap-MS, and compared the results with those obtained on a recent triple-quadrupole MS (QQQ-MS). High-resolution full-scan (HR-FS) and MS/MS acquisitions were tested with real plasma extracts or pure standards. Limits of detection, dynamic range, mass accuracy and false positive or false negative detections were determined or investigated with protease inhibitors, tyrosine kinase inhibitors, steroids and metanephrines. Our quantitative results show that today's available HRMS instruments are reliable and sensitive quantitative tools, comparable in quantitative performance to QQQ-MS. Taking into account their versatility, user-friendliness and robustness, we believe that HRMS should increasingly be seen as key instruments in quantitative LC-MS analyses. In this scenario, most targeted LC-HRMS analyses should be performed by HR-FS, recording virtually "all" ions. In addition to absolute quantification, HR-FS will allow the relative quantification of hundreds of metabolites in plasma, revealing an individual's metabolome and exposome; this phenotyping of known metabolites should promote HRMS in the clinical environment. A few other LC-HRMS analyses should be performed in single-ion-monitoring or MS/MS mode when increased sensitivity and/or detection selectivity is necessary.
Abstract:
BACKGROUND: For the past decade, (18)F-fluoro-ethyl-L-tyrosine (FET) and (18)F-fluoro-deoxy-glucose (FDG) positron emission tomography (PET) have been used for the assessment of patients with brain tumors, but direct comparison studies have included only limited numbers of patients. Our purpose was to compare the diagnostic performance of FET-PET and FDG-PET. METHODS: We examined studies published between January 1995 and January 2015 in the PubMed database. To be included, a study had to (i) use FET-PET and FDG-PET for the assessment of patients with an isolated brain lesion and (ii) use histology as the gold standard. Analysis was performed on a per-patient basis. Study quality was assessed with STARD and QUADAS criteria. RESULTS: Five studies (119 patients) were included. For the diagnosis of brain tumor, FET-PET demonstrated a pooled sensitivity of 0.94 (95% CI: 0.79-0.98) and pooled specificity of 0.88 (95% CI: 0.37-0.99), with an area under the curve of 0.96 (95% CI: 0.94-0.97), a positive likelihood ratio (LR+) of 8.1 (95% CI: 0.8-80.6) and a negative likelihood ratio (LR-) of 0.07 (95% CI: 0.02-0.30), while FDG-PET demonstrated a sensitivity of 0.38 (95% CI: 0.27-0.50) and specificity of 0.86 (95% CI: 0.31-0.99), with an area under the curve of 0.40 (95% CI: 0.36-0.44), an LR+ of 2.7 (95% CI: 0.3-27.8) and an LR- of 0.72 (95% CI: 0.47-1.11). Target-to-background ratios of either FDG or FET, however, did not allow distinction between low- and high-grade gliomas (P > .11). CONCLUSIONS: For brain tumor diagnosis, FET-PET performed much better than FDG-PET and should be preferred when assessing a new isolated brain tumor. For glioma grading, however, both tracers showed similar performance.
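The likelihood ratios quoted above follow from pooled sensitivity and specificity in the usual way:

LR^{+} = \frac{\mathrm{sensitivity}}{1 - \mathrm{specificity}}, \qquad
LR^{-} = \frac{1 - \mathrm{sensitivity}}{\mathrm{specificity}}.

As a rough check on the FET-PET figures, 0.94 / (1 - 0.88) ≈ 7.8 and (1 - 0.94) / 0.88 ≈ 0.07, close to the reported 8.1 and 0.07; exact agreement is not expected because meta-analytic pooling is performed across studies rather than on the pooled point estimates.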
Abstract:
The fundamental question in the transitional economies of the former Eastern Europe and Soviet Union has been whether privatisation and market liberalisation have affected the performance of former state-owned enterprises. This study examines the effect of privatisation, capital market discipline, price liberalisation and international price exposure on the restructuring of large Russian enterprises. The performance indicators are sales, profitability, labour productivity and stock market valuation. The results show no performance differences between state-owned and privatised enterprises. On the other hand, the expansion of the de novo private sector has been strong: new enterprises have significantly higher sales growth, profitability and labour productivity, although the results indicate a diminishing effect of ownership. An international stock market listing has a significant positive effect on profitability, while the effect of a domestic stock market listing is insignificant. International price exposure has a significant positive effect on profitability and labour productivity; however, international enterprises have higher profitability only when operating in price-liberalised markets. The main results of the study are strong evidence of the positive effects of international linkages on enterprise restructuring, and a larger-than-expected role for new enterprises in the Russian economy.