882 results for task performance benchmarking


Relevance: 30.00%

Publisher:

Abstract:

We synthesize the literature on Chinese multinational enterprises (MNEs) and find that much of the prior research is based on as few as a dozen case studies of Chinese firms. These studies are so case-specific that they have led to a misplaced call for new theories to explain Chinese firms’ internationalization. To better relate theory to empirical evidence, we examine the largest 500 Chinese manufacturing firms. We aim to determine how many Chinese manufacturing firms are true MNEs by definition, and to examine their financial performance relative to global peers using the financial benchmarking method. We develop our theoretical perspectives from new internalization theory. We find that only 49 Chinese manufacturing firms are true MNEs, whereas the rest are purely domestic firms. Their performance is poor relative to global peers. Chinese MNEs have home-country-bound firm-specific advantages (FSAs), which are built upon home country-specific advantages (home CSAs). They have not yet developed advanced management capabilities through recombination with host CSAs. Essentially, they acquire foreign firms to increase their sales in the domestic market, but they fail to be competitive internationally and to achieve superior performance in overseas operations. Our findings have important strategic implications for managers, public policy makers, and academic research.


Predictive performance evaluation is a fundamental issue in the design, development, and deployment of classification systems. As predictive performance evaluation is a multidimensional problem, single scalar summaries such as error rate, although quite convenient due to their simplicity, can seldom evaluate all the aspects that a complete and reliable evaluation must consider. For this reason, various graphical performance evaluation methods are increasingly drawing the attention of the machine learning, data mining, and pattern recognition communities. The main advantage of such methods resides in their ability to depict the trade-offs between evaluation aspects in a multidimensional space rather than reducing these aspects to an arbitrarily chosen (and often biased) single scalar measure. Furthermore, to select a suitable graphical method for a given task, it is crucial to identify its strengths and weaknesses. This paper surveys various graphical methods often used for predictive performance evaluation. By presenting these methods in the same framework, we hope this paper may shed some light on deciding which methods are more suitable to use in different situations.
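As an illustration of the kind of graphical method such surveys cover (ROC analysis is a standard example; the abstract itself does not enumerate the methods), a minimal sketch of tracing ROC points from classifier scores:

```python
# Minimal sketch: ROC points trace the true-positive rate against the
# false-positive rate across all score thresholds, showing the trade-off
# that a single scalar such as error rate hides. Assumes no tied scores.
def roc_points(scores, labels):
    """Return (fpr, tpr) pairs, sweeping the threshold from high to low."""
    pairs = sorted(zip(scores, labels), key=lambda p: -p[0])
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for score, label in pairs:
        if label == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

points = roc_points([0.9, 0.8, 0.4, 0.3], [1, 0, 1, 0])
print(points)  # traces the curve from (0.0, 0.0) up to (1.0, 1.0)
```

Plotting these pairs gives the familiar ROC curve; each point is one achievable operating trade-off, which is precisely the multidimensional view the survey argues for.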


This paper presents the use of a multiprocessor architecture for improving the performance of tomographic image reconstruction. Image reconstruction in computed tomography (CT) is a compute-intensive task for single-processor systems. We investigate the suitability of filtered image reconstruction on DSPs organized for parallel processing, and compare it with an implementation based on the Message Passing Interface (MPI) library. The experimental results show that the speedups observed on both platforms increased with image resolution. In addition, the execution-time-to-communication-time ratios (Rt/Rc) as a function of sample size showed narrower variation for the DSP platform than for the MPI platform, which indicates its better performance for parallel image reconstruction.
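The two metrics the comparison rests on can be sketched as follows; the cost model and the timing numbers are illustrative assumptions, not values from the paper:

```python
# Sketch of the partitioning idea behind parallel filtered reconstruction:
# each of P workers processes an equal share of the projections, so the
# parallel time is roughly T_serial / P plus a per-worker communication
# cost. Speedup S = T_serial / T_parallel; the Rt/Rc ratio compares
# execution time with communication time.
def model_parallel_time(t_serial, workers, t_comm_per_worker):
    return t_serial / workers + t_comm_per_worker

def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

t_serial = 12.0                        # hypothetical single-processor time (s)
t_par = model_parallel_time(t_serial, 4, 0.5)
print(speedup(t_serial, t_par))        # communication overhead keeps this below 4
print(3.0 / 0.5)                       # an Rt/Rc ratio: execution 3.0 s, comm 0.5 s
```

Under this simple model, a platform whose communication cost stays small and stable as the sample size grows (as reported for the DSP platform) keeps both the speedup and the Rt/Rc ratio high.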


The assessment of routing protocols for mobile wireless networks is a difficult task because of the networks' dynamic behavior and the absence of benchmarks. However, some of these networks, such as intermittent wireless sensor networks, periodic or cyclic networks, and some delay-tolerant networks (DTNs), have more predictable dynamics, as the temporal variations in the network topology can be considered deterministic, which may make them easier to study. Recently, a graph-theoretic model, the evolving graph, was proposed to help capture the dynamic behavior of such networks, with a view to constructing least-cost routing and other algorithms. The algorithms and insights obtained through this model are theoretically very efficient and intriguing. However, there has been no study of how such theoretical results apply in practical situations. Therefore, the objective of our work is to analyze the applicability of evolving graph theory to the construction of efficient routing protocols in realistic scenarios. In this paper, we use the NS2 network simulator to first implement an evolving-graph-based routing protocol, and then use it as a benchmark when comparing the four major ad hoc routing protocols (AODV, DSR, OLSR and DSDV). Interestingly, our experiments show that evolving graphs have the potential to be an effective and powerful tool in the development and analysis of algorithms for dynamic networks, at least those with predictable dynamics. In order to make this model widely applicable, however, some practical issues, such as adaptive algorithms, still have to be addressed and incorporated into the model. We also discuss such issues in this paper, as a result of our experience.
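The core computation the evolving-graph model enables can be sketched as follows: a "foremost journey" is the earliest-arrival route through a graph whose edges exist only at certain time steps. This is a generic sketch of the idea, not the paper's NS2 implementation; edge traversal is assumed to take one time step.

```python
# Minimal evolving-graph sketch: each directed edge carries the sorted list
# of time steps at which it is present; a foremost journey is found by
# Dijkstra-style relaxation in time order.
import heapq

def foremost_journey(edges, source, target):
    """edges: {(u, v): sorted time steps when the edge is up}.
    Returns the earliest arrival time at target, or None if unreachable."""
    best = {source: 0}
    heap = [(0, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if u == target:
            return t
        if t > best.get(u, float("inf")):
            continue
        for (a, b), times in edges.items():
            if a != u:
                continue
            # earliest time step >= t at which this edge exists
            nxt = next((s for s in times if s >= t), None)
            if nxt is not None and nxt + 1 < best.get(b, float("inf")):
                best[b] = nxt + 1            # crossing the edge takes 1 step
                heapq.heappush(heap, (nxt + 1, b))
    return None

edges = {("a", "b"): [0, 3], ("b", "c"): [1], ("a", "c"): [5]}
print(foremost_journey(edges, "a", "c"))  # 2: via b, using edges at steps 0 and 1
```

Note how the direct edge ("a", "c") exists only at step 5, so the two-hop journey through "b" arrives earlier; this time-dependence is exactly what static shortest-path benchmarks miss.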


The problem of scheduling a parallel program, represented by a weighted directed acyclic graph (DAG), onto a set of homogeneous processors so as to minimize the program's completion time has been extensively studied as an academic optimization problem that arises in optimizing the execution time of parallel algorithms on parallel computers. In this paper, we propose an application of Ant Colony Optimization (ACO) to the multiprocessor scheduling problem (MPSP). In the MPSP, no preemption is allowed and each operation demands a setup time on the machines. The problem seeks a schedule that minimizes the total completion time. Since exact solution methods are infeasible for most instances of such problems, we rely on heuristics. In this novel ACO-based heuristic approach to multiprocessor scheduling, a collection of agents cooperates to explore the search space effectively. A computational experiment is conducted on a suite of benchmark applications. Comparing our algorithm's results with those of a previous heuristic algorithm shows that the ACO algorithm exhibits competitive performance with a small error ratio.
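A minimal sketch of the underlying problem helps make it concrete. The greedy list scheduler below is a simple baseline, not the paper's ACO algorithm (which searches over task orderings rather than committing greedily); the task graph is hypothetical and must be acyclic.

```python
# Baseline for DAG scheduling on m identical processors: repeatedly place
# each ready task (all prerequisites finished) on the processor that frees
# up first, and report the makespan (total completion time).
def list_schedule(durations, deps, m):
    """durations: {task: time}; deps: {task: set of prerequisite tasks}."""
    done_time = {}
    free_at = [0.0] * m                      # when each processor is next free
    remaining = dict(deps)
    while remaining:
        # tasks whose prerequisites have all finished
        ready = [t for t in remaining if remaining[t] <= set(done_time)]
        for t in sorted(ready):
            i = min(range(m), key=lambda j: free_at[j])
            start = max(free_at[i],
                        max((done_time[p] for p in remaining[t]), default=0.0))
            done_time[t] = start + durations[t]
            free_at[i] = done_time[t]
            del remaining[t]
    return max(done_time.values())           # makespan

durs = {"a": 2, "b": 3, "c": 1, "d": 2}
deps = {"a": set(), "b": set(), "c": {"a"}, "d": {"a", "b"}}
print(list_schedule(durs, deps, 2))  # 5.0
```

An ACO approach improves on such a baseline by having many agents build schedules probabilistically, biased by pheromone trails deposited on good task orderings from earlier iterations.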


“Biosim” is a simulation software package for simulating harvesting systems. The system can design a model for any logistics problem by combining several objects, so that the artificial system can show the performance of an individual model. The system also describes the efficiency of that particular model and its suitability for real-life application. So, when anyone wishes to set up a logistics model such as a harvesting system in real life, they can learn in advance the suitable positioning of plants and factories, as well as the minimum number of objects, the total time to complete the task, the total investment required, and the total amount of noise produced by the establishment. It produces an advance overview of the model. But “Biosim” is quite slow: as an object-based system, it takes a long time to reach its decisions. The main task here is to modify the system so that it works faster than before. So, the main objective of this thesis is to reduce the load of “Biosim” by modifying the original system and to increase its efficiency, so that the whole system is faster than the previous one and performs more efficiently when applied in real life. The concept is to separate the execution part of “Biosim” from its graphical engine and run this separated portion on a third-generation language platform; C++ is chosen here as this external platform. After completing the proposed system, results with different models were observed. The results show that, for any type of plant or field and for any number of trucks, the proposed system is faster than the original: it takes at least 15% less time than the original “Biosim”, and the efficiency gain increases with the complexity of the model.
The more complex the model, the more efficient the proposed system is compared with the original “Biosim”; depending on the complexity of a model, the proposed system can be 56.53% faster than the original “Biosim”.


Participation as an observer at the meeting of Task 14 of the IEA's Solar Heating and Cooling Programme held in Hameln, Germany, has led to greater understanding of interesting developments underway in several countries. This will be of use during the development of small-scale systems suitable for Swedish conditions. A summary of the work carried out by the working groups within Task 14 is given, with emphasis on the Domestic Hot Water group. Experiences with low-flow systems from several countries are related, and the conclusion is drawn that the maximum theoretically possible performance increase of 20% has not been achieved, due to poor heat exchangers and poor stratification in the storage tanks. Positive developments in connecting tubes and pumps are noted. Further participation as an observer in Task 14 meetings is desired, and is looked on favourably by the members of the group. Another conclusion is that SERC should carry on with its work on Swedish storage tanks, with emphasis on better stratification and heat exchangers, and possibly on modelling of system components. Finally, a German Do-it-Yourself kit is described and judged in comparison with prefabricated models and Swedish Do-it-Yourself kits.


Service discovery is a critical task in service-oriented architectures such as the Grid and Web Services. In this paper, we study a semantics enabled service registry, GRIMOIRES, from a performance perspective. GRIMOIRES is designed to be the registry for myGrid and the OMII software distribution. We study the scalability of GRIMOIRES against the amount of information that has been published into it. The methodology we use and the data we present are helpful for researchers to understand the performance characteristics of the registry and, more generally, of semantics enabled service discovery. Based on this experimentation, we claim that GRIMOIRES is an efficient semantics-aware service discovery engine.
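The scalability methodology described, measuring discovery performance as the amount of published information grows, can be sketched generically. The toy in-memory registry below is a stand-in for illustration only; it is not GRIMOIRES's actual API.

```python
import time

# Generic sketch of a registry-scalability experiment: publish N records,
# then measure mean lookup latency. A dict stands in for the registry.
class ToyRegistry:
    def __init__(self):
        self.services = {}

    def publish(self, name, description):
        self.services[name] = description

    def discover(self, name):
        return self.services.get(name)

def time_discovery(registry, name, repeats=1000):
    """Mean time per discover() call, in seconds."""
    start = time.perf_counter()
    for _ in range(repeats):
        registry.discover(name)
    return (time.perf_counter() - start) / repeats

reg = ToyRegistry()
for n in range(10_000):
    reg.publish(f"service-{n}", {"semantic_type": "example"})
print(time_discovery(reg, "service-5000"))  # mean lookup latency in seconds
```

Repeating the measurement at increasing values of N (the amount of published information) yields the scalability curve the paper studies; flat latency as N grows is the behaviour one would call efficient.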



CMS is a general-purpose experiment designed to study the physics of pp collisions at 14 TeV at the Large Hadron Collider (LHC). It currently involves more than 2000 physicists from more than 150 institutes and 37 countries. The LHC will provide extraordinary opportunities for particle physics, based on its unprecedented collision energy and luminosity, when it begins operation in 2007. The principal aim of this report is to present the strategy of CMS to explore the rich physics programme offered by the LHC. This volume demonstrates the physics capability of the CMS experiment. The prime goals of CMS are to explore physics at the TeV scale and to study the mechanism of electroweak symmetry breaking, through the discovery of the Higgs particle or otherwise. To carry out this task, CMS must be prepared to search for new particles, such as the Higgs boson or supersymmetric partners of the Standard Model particles, from the start-up of the LHC, since new physics at the TeV scale may manifest itself with modest data samples of the order of a few fb⁻¹ or less. The analysis tools that have been developed are applied, with the full methodology of an analysis on CMS data, to study in great detail specific benchmark processes against which the performance of CMS is gauged. These processes cover several Higgs boson decay channels, the production and decay of new particles such as Z′ and supersymmetric particles, B_s production, and processes in heavy-ion collisions. The simulation of these benchmark processes includes subtle effects such as possible detector miscalibration and misalignment. Beyond these benchmark processes, the physics reach of CMS is studied for a large number of signatures arising in the Standard Model, and also in theories beyond the Standard Model, for integrated luminosities ranging from 1 fb⁻¹ to 30 fb⁻¹.
The Standard Model processes include QCD, B physics, diffraction, detailed studies of top quark properties, and electroweak physics topics such as the W and Z⁰ boson properties. The production and decay of the Higgs particle are studied for many observable decays, and the precision with which the Higgs boson properties can be derived is determined. About ten different supersymmetry benchmark points are analysed using full simulation. The CMS discovery reach is evaluated in the SUSY parameter space covering a large variety of decay signatures. Furthermore, the discovery reach for a plethora of alternative models for new physics is explored, notably extra dimensions, new high-mass vector boson states, little Higgs models, technicolour and others. Methods to discriminate between models have been investigated. This report is organized as follows. Chapter 1, the Introduction, describes the context of this document. Chapters 2-6 describe examples of full analyses, with photons, electrons, muons, jets, missing E_T, B mesons and taus, and with quarkonia in heavy-ion collisions. Chapters 7-15 describe the physics reach for Standard Model processes, Higgs discovery, and searches for new physics beyond the Standard Model.
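The integrated-luminosity figures quoted above translate into event counts via N = σ · ∫L dt. A quick illustrative calculation with hypothetical numbers (the cross section and efficiency below are not taken from the report):

```python
# Expected event yield N = σ × ∫L dt. With the cross section in picobarns
# and integrated luminosity in inverse femtobarns, 1 fb⁻¹ = 1000 pb⁻¹.
def expected_events(sigma_pb, lumi_fb_inv, efficiency=1.0):
    return sigma_pb * lumi_fb_inv * 1000 * efficiency

print(expected_events(10.0, 1.0))         # 10000.0 produced events
print(expected_events(10.0, 1.0, 0.05))   # 500.0 after a 5% selection efficiency
```

This arithmetic is why "a few fb⁻¹ or less" can already be a discovery sample: even a modest cross section times 10³ pb⁻¹ per fb⁻¹ yields thousands of produced events, before selection efficiency.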


Evaluation of rhythmic fluctuations of physical and mental variables should be of special significance for understanding students' performance and for setting the schedules of school activities. The present study investigated the pattern of diurnal variation in oral temperature, sleepiness and performance in a group of adolescents following a daytime school schedule. Eighteen girls (mean age 16 years), who attended the same class from 0715h to 1645h, were tested on seven days. They measured their oral temperature, quantified their sleepiness level by means of a visual analogue scale, and completed the following tests: a letter cancellation test, an addition test, and a simple motor task. A one-way repeated-measures ANOVA was used to verify the effect of test time on oral temperature, sleepiness and performance. Possible correlations between the level of sleepiness and performance were investigated by means of the Spearman rank correlation. The results revealed a significant time-of-day effect on all variables, except for the number of addition errors. Oral temperature values increased from morning to afternoon. Letter cancellation, motor task and addition scores increased from early morning to late afternoon, showing rapid fluctuations throughout the day. Sleepiness level was negatively correlated with letter cancellation scores during the first three tests of the day. In agreement with other work, the diurnal variation of oral temperature, letter cancellation and addition test scores showed an improvement as the day progressed. Sleepiness, on the other hand, decreased throughout the day, with the highest level associated with the first test of the day, suggesting a circadian pattern of variation rather than a cumulative effect of school activities.
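The Spearman rank correlation used to relate sleepiness and test scores can be sketched as: rank both variables, then compute Pearson's r on the ranks. The data values below are illustrative, not from the study, and the sketch assumes no tied values.

```python
# Spearman rank correlation: Pearson's r computed on the ranks of the data.
def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(xs, ys):
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

sleepiness = [80, 60, 50, 30, 20]      # higher = sleepier (morning to afternoon)
cancellation = [12, 15, 16, 20, 22]    # letter-cancellation scores
print(spearman(sleepiness, cancellation))  # -1.0: perfectly inverse ranking
```

A negative coefficient, as in the study's first three tests of the day, means higher sleepiness goes with lower cancellation scores.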


The literature shows conflicting results regarding older adults' (OA) postural control performance. Differing task demands amongst scientific studies may contribute to such ambiguous results. Therefore, the purpose of this study was to examine the performance of postural control in older adults and the relationship between visual information and body sway as a function of task demands. Old and young adults (YA) maintained an upright stance on different bases of support (normal, tandem and reduced), both with and without vision, and both with and without room movement. In the more demanding tasks, the older adults displayed greater body sway than the younger adults and older adults were more influenced by the manipulation of the visual information due to the room movement. However, in the normal support condition, the influence of the moving room was similar for the two groups. These results suggest that task demand is an important aspect to consider when examining postural control in older adults. (c) 2006 Elsevier B.V. All rights reserved.


Processing efficiency theory predicts that anxiety reduces the processing capacity of working memory and has detrimental effects on performance. When tasks place little demand on working memory, the negative effects of anxiety can be avoided by increasing effort. Although performance efficiency decreases, there is no change in performance effectiveness. When tasks impose a heavy demand on working memory, however, anxiety leads to decrements in efficiency and effectiveness. These presumptions were tested using a modified table tennis task that placed low (LWM) and high (HWM) demands on working memory. Cognitive anxiety was manipulated through a competitive ranking structure and prize money. Participants' accuracy in hitting concentric circle targets in predetermined sequences was taken as a measure of performance effectiveness, while probe reaction time (PRT), perceived mental effort (RSME), visual search data, and arm kinematics were recorded as measures of efficiency. Anxiety had a negative effect on performance effectiveness in both LWM and HWM tasks. There was an increase in frequency of gaze and in PRT and RSME values in both tasks under high vs. low anxiety conditions, implying decrements in performance efficiency. However, participants spent more time tracking the ball in the HWM task and employed a shorter tau margin when anxious. Although anxiety impaired performance effectiveness and efficiency, decrements in efficiency were more pronounced in the HWM task than in the LWM task, providing support for processing efficiency theory.


The D0 experiment at Fermilab's Tevatron will record several petabytes of data over the next five years in pursuing the goals of understanding nature and searching for the origin of mass. The computing resources required to analyze these data far exceed the capabilities of any one institution. Moreover, the widely scattered geographical distribution of D0 collaborators poses further serious difficulties for optimal use of human and computing resources. These difficulties will be exacerbated in future high energy physics experiments, like the LHC. The computing grid has long been recognized as a solution to these problems. This technology is being made a more immediate reality to end users in D0 by developing a grid in the D0 Southern Analysis Region (DOSAR), DOSAR-Grid, using available resources within it and a home-grown local task manager, McFarm. We present the architecture in which the DOSAR-Grid is implemented, the technology used and the functionality of the grid, and the experience from operating the grid in simulation, reprocessing and data analyses for a currently running HEP experiment.


A "second generation" matching-to-sample procedure that minimizes past sources of artifacts involves (1) successive discrimination between sample stimuli, (2) stimulus displays ranging from four to 16 comparisons, (3) variable stimulus locations to avoid unwanted stimulus-location control, and (4) high accuracy levels (e.g., 90% correct on a 16-choice task in which chance accuracy is 6%). Examples of behavioral engineering with experienced capuchin monkeys included four-choice matching problems with video images of monkeys with substantially above-chance matching in a single session and 90% matching within six sessions. Exclusion performance was demonstrated by interspersing non-identical sample-comparison pairs within a baseline of a nine-comparison identity-matching-to-sample procedure with pictures as stimuli. The test for exclusion presented the newly "mapped" stimulus in a situation in which exclusion was not possible. Degradation of matching between physically non-identical forms occurred while baseline identity accuracy was sustained at high levels, thus confirming that Cebus cf. apella is capable of exclusion. Additionally, exclusion performance when baseline matching relations involved non-identical stimuli was shown.