792 results for communication performance evaluation


Relevance: 100.00%

Publisher:

Abstract:

Key performance features of a miniature laser ablation time-of-flight mass spectrometer designed for in situ investigations of the chemical composition of planetary surfaces are presented. This mass spectrometer is well suited for elemental and isotopic analysis of raw solid materials with high sensitivity and high spatial resolution. In this study, ultraviolet laser radiation with irradiances suitable for ablation (<1 GW/cm²) is used to achieve stable ion formation and low sample consumption. In comparison to our previous laser ablation studies at infrared wavelengths, several improvements to the experimental setup have been made, which allow accurate control over the experimental conditions and good reproducibility of measurements. Current performance evaluations indicate significant improvements in several instrumental figures of merit. Calibration of the mass scale achieves a mass accuracy (Δm/m) in the range of 100 ppm, and a typical mass resolution (m/Δm) of ~600 is achieved at the lead mass peaks. At lower laser irradiances, the mass resolution is better, about m/Δm ~900 for lead, and is limited by the laser pulse duration of 3 ns. The effective dynamic range of the instrument was enhanced from about 6 decades, determined in a previous study, to more than 8 decades at present. Current studies show high sensitivity in the detection of both metallic and non-metallic elements: abundances down to tens of ppb can be measured together with their isotopic patterns. Because the experimental parameters (e.g., laser characteristics, ion-optical parameters, and sample position) are under strict computer control, measurements can be performed with high reproducibility. Copyright © 2012 John Wiley & Sons, Ltd.
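As a hedged illustration of the figures of merit quoted in this abstract (the function names and sample values are illustrative, not from the instrument software), mass resolution and mass accuracy can be computed as:

```python
def mass_resolution(m, fwhm):
    # m/Δm, with Δm taken as the peak full width at half maximum (FWHM),
    # a common convention for time-of-flight mass spectra.
    return m / fwhm

def mass_accuracy_ppm(measured, reference):
    # Relative mass error Δm/m expressed in parts per million.
    return abs(measured - reference) / reference * 1e6
```

At m/Δm ≈ 600 near the lead peaks (m ≈ 208 u), the corresponding peak width is about 208/600 ≈ 0.35 u.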

Relevance: 100.00%

Publisher:

Abstract:

Budgets are often used simultaneously for the conflicting purposes of planning and performance evaluation. While economic theory suggests that firms should use separate budgets for conflicting purposes, existing evidence shows that firms rarely do so. We address two open questions related to these observations in an experiment. Specifically, we investigate how a planning task that conflicts with the performance evaluation task affects behavior in budget negotiations and their outcomes. Additionally, we analyze whether a single budget can be used for both purposes as effectively as two separate budgets. We develop theory to predict that adding a planning task that conflicts with the superior's performance evaluation task increases the subordinate's cooperation in and after the negotiation of a performance evaluation budget. Moreover, we predict that subordinate cooperation increases even more when the superior is restricted to using a single budget for both purposes. Our results broadly support our hypotheses. Specifically, we find that when budgets are used for both planning and performance evaluation, the subordinate's budget proposals during the negotiation and his performance after the negotiation both increase. These effects tend to be even larger when the superior is restricted to a single budget rather than separate budgets for planning and performance evaluation, particularly with respect to subordinate performance. In our experimental setting, the benefits of increased subordinate cooperation more than offset the loss of flexibility from the superior's restriction to a single budget. The results of this study add to the understanding of the interdependencies of conflicting budgeting purposes and help explain why firms often use a single budget for multiple purposes.

Relevance: 100.00%

Publisher:

Abstract:

Previous multicast research often makes commonly accepted but unverified assumptions about network topologies and group member distributions in simulation studies. In this paper, we propose a framework to systematically evaluate multicast performance for different protocols. We identify a series of metrics and carry out extensive simulation studies on these metrics with different topological models and group member distributions for three case studies. Our simulation results indicate that realistic topology and group membership models are crucial to accurate multicast performance evaluation. These results can provide guidance for multicast researchers performing realistic simulations and facilitate the design and development of multicast protocols.

Relevance: 100.00%

Publisher:

Abstract:

Dua and Miller (1996) created leading and coincident employment indexes for the state of Connecticut, following Moore's (1981) work at the national level. The performance of the Dua-Miller indexes following the recession of the early 1990s fell short of expectations. This paper performs two tasks. First, it describes the process of revising the Connecticut Coincident and Leading Employment Indexes. Second, it analyzes the statistical properties and performance of the new indexes by comparing the lead profiles of the new and old indexes as well as their out-of-sample forecasting performance, using the Bayesian vector autoregressive (BVAR) method. The new indexes show improved performance in dating employment cycle chronologies, and the lead profile test demonstrates that superiority in a rigorous, non-parametric statistical fashion. The mixed evidence from the BVAR forecasting experiments supports the Granger and Newbold (1986) caution that leading indexes properly predict cycle turning points but do not necessarily provide accurate forecasts away from turning points.

Relevance: 100.00%

Publisher:

Abstract:

Bayesian adaptive randomization (BAR) is an attractive approach that allocates more patients to the putatively superior arm based on the interim data while maintaining the good statistical properties attributable to randomization. Under this approach, patients are adaptively assigned to a treatment group based on the probability that the treatment is better. The basic randomization scheme can be modified by introducing a tuning parameter, by replacing the posterior probability with the estimated response probability, or by setting boundaries on the randomization probabilities. Under randomization settings composed of the above modifications, operating characteristics, including type I error, power, sample size, imbalance of sample size, interim success rate, and overall success rate, were evaluated through simulation. All randomization settings have low and comparable type I errors. Increasing the tuning parameter decreases power but increases the imbalance of sample size and the interim success rate. Compared with settings using the posterior probability, settings using the estimated response rates have higher power and overall success rate, but less imbalance of sample size and a lower interim success rate. Bounded settings have higher power but less imbalance of sample size than unbounded settings. All settings perform better in the Bayesian design than in the frequentist design. This simulation study provides practical guidance on how to implement the adaptive design.
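As a sketch of the basic scheme and modifications described above (the Beta(1, 1) priors, bound values, and function names are illustrative assumptions, not taken from the study), the randomization probability with a tuning parameter and boundaries might look like:

```python
import random

def posterior_prob_a_better(succ_a, fail_a, succ_b, fail_b, draws=10000, seed=0):
    # Monte Carlo estimate of P(p_A > p_B | data) under Beta(1, 1) priors
    # (illustrative prior choice).
    rng = random.Random(seed)
    wins = sum(
        rng.betavariate(1 + succ_a, 1 + fail_a) > rng.betavariate(1 + succ_b, 1 + fail_b)
        for _ in range(draws)
    )
    return wins / draws

def randomization_prob(pi, c=0.5, lo=0.1, hi=0.9):
    # Tuning parameter c flattens (c < 1) or sharpens (c > 1) the allocation;
    # the [lo, hi] bounds keep some randomization toward either arm.
    r = pi ** c / (pi ** c + (1 - pi) ** c)
    return min(max(r, lo), hi)
```

The next patient would then be assigned to arm A with probability `randomization_prob(posterior_prob_a_better(...))`, which is the sense in which allocation drifts toward the putatively superior arm while remaining randomized.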

Relevance: 100.00%

Publisher:

Abstract:

Both in industry and research, the quality control of micrometric manufactured parts is based on the measurement of parameters whose traceability is sometimes difficult to guarantee. For some of these parts, confocal microscopy shows great aptitude for characterizing a measurand qualitatively and quantitatively. Confocal microscopy allows the acquisition of 2D and 3D images that are easily manipulated. Nowadays, this equipment is manufactured by many different brands, each claiming a resolution that may not accord with its real performance. The Laser Center (Technical University of Madrid) has a confocal microscope to verify the dimensions of micro-machined features in its own research projects. The present study aims to confirm that the magnitudes obtained are true and reliable. To achieve this, a methodology for confocal microscope calibration is proposed, together with an experimental phase for dimensionally evaluating the equipment at four different standard positions, with its seven magnifications and six objective lenses, along the x–y and z axes. From the results, the uncertainty is estimated, along with an analysis of the effect of the different magnifications for each objective lens.
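As a hedged illustration of the uncertainty-estimation step mentioned above (the function name and sample readings are hypothetical, not data from the study), a GUM-style Type A standard uncertainty from repeated readings can be computed as:

```python
import math

def type_a_uncertainty(readings):
    # GUM Type A evaluation: mean and standard uncertainty of the mean
    # from n repeated readings (hypothetical values, e.g. in micrometres).
    n = len(readings)
    mean = sum(readings) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))
    return mean, s / math.sqrt(n)
```

Repeating this at each standard position, magnification, and objective lens would yield the per-configuration uncertainties the abstract refers to (Type B contributions, e.g. the standard's certified uncertainty, would be combined separately).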

Relevance: 100.00%

Publisher:

Abstract:

One of the most demanding needs in cloud computing is that of having scalable and highly available databases. One way to address these needs is to leverage the scalable replication techniques developed in the last decade, which allow increasing both the availability and scalability of databases. Many replication protocols have been proposed during the last decade; the main research challenge was how to scale under the eager replication model, the one that provides consistency across replicas. In this paper, we examine three eager database replication systems available today (Middle-R, C-JDBC, and MySQL Cluster) using the TPC-W benchmark. We analyze their architectures and replication protocols and compare their performance both in the absence of failures and when failures occur.

Relevance: 100.00%

Publisher:

Abstract:

ALICE is one of the four major experiments at the LHC particle accelerator installed at CERN, the European laboratory. The management committee of the LHC accelerator has just approved a program update for this experiment. Among the upgrades planned for the coming years of the ALICE experiment are improving the resolution and tracking efficiency while maintaining the excellent particle identification ability, and increasing the read-out event rate to 100 kHz. To achieve this, it is necessary to upgrade the Time Projection Chamber (TPC) and Muon tracking (MCH) detectors by modifying the read-out electronics, which is not suitable for this migration. To overcome this limitation, the design, fabrication, and experimental testing of a new ASIC named SAMPA has been proposed. This ASIC will support both positive and negative polarities, with 32 channels per chip and continuous data read-out, with smaller power consumption than the previous versions. This work covers the design, fabrication, and experimental testing of a read-out front-end in 130 nm CMOS technology with configurable polarity (positive/negative), peaking time, and sensitivity. The new SAMPA ASIC can be used in both chambers (TPC and MCH). The proposed front-end is composed of a Charge Sensitive Amplifier (CSA) and a semi-Gaussian shaper. In order to integrate 32 channels per chip, the design of the proposed front-end requires small area and low power consumption, but at the same time low noise. In this sense, a new technique to improve the noise and PSRR (Power Supply Rejection Ratio) of the CSA design without area or power penalty is proposed in this work. The analysis and equations of the proposed circuit are presented and were verified by electrical simulations and experimental tests of a produced chip with 5 channels of the designed front-end. The measured equivalent noise charge was <550 e⁻ for a sensitivity of 30 mV/fC at an input capacitance of 18.5 pF.
The total core area of the front-end was 2300 µm × 150 µm, and the measured total power consumption was 9.1 mW per channel.
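As a hedged sketch of the semi-Gaussian (CR-RC^n) shaping mentioned above (the order n, time constant, and normalization are illustrative textbook assumptions, not SAMPA design values), the shaper's normalized impulse response peaks at t = n·τ, which is the configurable "peaking time":

```python
import math

def semi_gaussian_response(t, tau, n=4, gain=1.0):
    # Normalized CR-RC^n shaper impulse response: (t/(n*tau))^n * exp(n - t/tau).
    # The peak occurs at t = n * tau (the peaking time) with amplitude `gain`,
    # so `gain` plays the role of the sensitivity (e.g. mV per fC of input charge).
    x = t / tau
    return gain * (x / n) ** n * math.exp(n - x)
```

Longer peaking times generally reduce series noise at the cost of rate capability, which is why peaking time is exposed as a configuration parameter.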

Relevance: 100.00%

Publisher:

Abstract:

This paper describes a study and analysis of surface-normal-based descriptors for 3D object recognition. Specifically, we evaluate the behaviour of the descriptors in the recognition process using virtual models of objects created with CAD software. We then test them in real scenes using synthetic objects created with a 3D printer from the virtual models. In both cases, the same virtual models are used in the matching process to find similarity; the difference between the two experiments lies in the type of views used in the tests. Our analysis evaluates the effectiveness of the 3D descriptors depending on the camera viewpoint and the geometric complexity of the model, the runtime of the recognition process, and the success rate in recognizing a view of an object among the models stored in the database.
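As a hedged sketch of the matching step described above (the descriptor vectors, database layout, and Euclidean metric are illustrative assumptions, not the paper's actual pipeline), nearest-neighbour matching of a view descriptor against stored model descriptors might look like:

```python
import math

def match_descriptor(query, database):
    # Return the model whose stored descriptor is closest to the query
    # descriptor by Euclidean distance (hypothetical representation of
    # a surface-normal-based descriptor as a fixed-length vector).
    best_name, best_dist = None, math.inf
    for name, desc in database.items():
        dist = math.dist(query, desc)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name, best_dist
```

The recognition success rate would then be the fraction of test views for which the returned model matches the ground-truth object.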

Relevance: 100.00%

Publisher:

Abstract:

Mode of access: Internet.

Relevance: 100.00%

Publisher:

Abstract:

Mode of access: Internet.

Relevance: 100.00%

Publisher:

Abstract:

Mode of access: Internet.

Relevance: 100.00%

Publisher:

Abstract:

National Highway Traffic Safety Administration, Washington, D.C.