939 results for Measurement error models


Relevance: 30.00%

Abstract:

2000 Mathematics Subject Classification: 94A29, 94B70

Relevance: 30.00%

Abstract:

We present three jargonaphasic patients who made phonological errors in naming, repetition and reading. We analyse target/response overlap using statistical models to answer three questions: 1) Is there a single phonological source for errors or two sources, one for target-related errors and a separate source for abstruse errors? 2) Can correct responses be predicted by the same distribution used to predict errors or do they show a completion boost (CB)? 3) Is non-lexical and lexical information summed during reading and repetition? The answers were clear. 1) Abstruse errors did not require a separate distribution created by failure to access word forms. Abstruse and target-related errors were the endpoints of a single overlap distribution. 2) Correct responses required a special factor, e.g., a CB or lexical/phonological feedback, to preserve their integrity. 3) Reading and repetition required separate lexical and non-lexical contributions that were combined at output.
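
To make the first question concrete, the sketch below (illustrative only, not the authors' actual analysis) compares a single-distribution account of target/response phoneme overlap with a two-component mixture on hypothetical overlap scores, using BIC for model comparison; the data and every parameter are invented.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical phoneme-overlap scores (proportion of target phonemes present in the response).
overlap = rng.beta(2, 2, size=500).reshape(-1, 1)

single = GaussianMixture(n_components=1, random_state=0).fit(overlap)
double = GaussianMixture(n_components=2, random_state=0).fit(overlap)

# A lower BIC for the one-component fit favours the single-source account.
print("BIC, single source:", single.bic(overlap))
print("BIC, two sources:  ", double.bic(overlap))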

Relevance: 30.00%

Abstract:

Analysis of risk measures associated with price series data movements and their prediction is of strategic importance in the financial markets, as well as to policy makers, in particular for short- and long-term planning and for setting economic growth targets. For example, oil-price risk management focuses primarily on when and how an organization can best prevent costly exposure to price risk. Value-at-Risk (VaR) is the commonly practised instrument to measure risk and is evaluated by analysing the negative/positive tail of the probability distribution of the returns (profit or loss). In modelling applications, least-squares estimation (LSE)-based linear regression models are often employed for modelling and analysing correlated data. These linear models are optimal and perform relatively well under conditions such as errors following normal or approximately normal distributions, being free of large outliers and satisfying the Gauss-Markov assumptions. However, in practical situations the LSE-based linear regression models often fail to provide optimal results, for instance in non-Gaussian settings, especially when the errors follow fat-tailed distributions while the error terms still possess a finite variance. This is the situation in risk analysis, which involves analysing tail distributions. Thus, applications of LSE-based regression models may be questioned for appropriateness and may have limited applicability. We have carried out a risk analysis of Iranian crude oil price data based on Lp-norm regression models and have noted that the LSE-based models do not always perform best. We discuss results from the L1-, L2- and L∞-norm based linear regression models. ACM Computing Classification System (1998): B.1.2, F.1.3, F.2.3, G.3, J.2.
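
As an illustration of the three estimators compared above, the sketch below fits L1-norm (least absolute deviations), L2-norm (ordinary least squares) and L∞-norm (minimax) regressions to synthetic fat-tailed data via linear programming; the data and parameter values are assumptions for illustration, not the Iranian crude oil analysis itself.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
# Heavy-tailed errors, the situation the abstract describes.
y = 1.0 + 0.5 * x + rng.standard_t(df=2, size=n)
X = np.column_stack([np.ones(n), x])

def l2_fit(X, y):
    # Ordinary least squares (L2 norm).
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def l1_fit(X, y):
    # Minimise sum |r_i| via LP: variables (beta, u), with u_i >= |y_i - X_i beta|.
    n, p = X.shape
    c = np.concatenate([np.zeros(p), np.ones(n)])
    A = np.block([[X, -np.eye(n)], [-X, -np.eye(n)]])
    b = np.concatenate([y, -y])
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * (p + n))
    return res.x[:p]

def linf_fit(X, y):
    # Minimise max |r_i| via LP: variables (beta, t), with t >= |y_i - X_i beta|.
    n, p = X.shape
    c = np.concatenate([np.zeros(p), [1.0]])
    A = np.block([[X, -np.ones((n, 1))], [-X, -np.ones((n, 1))]])
    b = np.concatenate([y, -y])
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * p + [(0, None)])
    return res.x[:p]

print("L2  :", l2_fit(X, y))
print("L1  :", l1_fit(X, y))
print("Linf:", linf_fit(X, y))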

Relevance: 30.00%

Abstract:

Incorporating the Material Balance Principle (MBP) into industrial and agricultural performance measurement systems with pollutant factors has been on the rise in recent years. Many conventional methods of performance measurement have proven incompatible with material flow conditions. This study addresses the issue of eco-efficiency measurement adjusted for pollution, taking into account material flow conditions and the MBP requirements, in order to provide ‘real’ measures of performance that can serve as guides when making policies. We develop a new approach that integrates a slacks-based measure to enhance the Malmquist-Luenberger index with a material balance condition that reflects the conservation of matter. This model is compared with a similar model that incorporates the MBP using the trade-off approach to measure productivity and eco-efficiency trends of power plants. Results reveal similar findings for both models, substantiating the robustness and applicability of the model proposed in this paper.
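
For context, the sketch below implements the basic input-oriented CCR DEA efficiency linear program on hypothetical power-plant data; it is only the building block that eco-efficiency models of this kind extend, and it deliberately omits the slacks-based measure, the pollutant outputs and the material balance constraint of the model developed here.

import numpy as np
from scipy.optimize import linprog

# Hypothetical plants: 2 inputs (fuel, capital) and 1 desirable output (electricity).
X = np.array([[10.0, 5.0], [12.0, 4.0], [8.0, 6.0], [15.0, 7.0]])   # inputs, one row per plant
Y = np.array([[100.0], [110.0], [90.0], [120.0]])                   # outputs

def ccr_efficiency(o, X, Y):
    """Input-oriented CCR efficiency of plant o (1.0 = efficient)."""
    n, m = X.shape          # plants, inputs
    s = Y.shape[1]          # outputs
    c = np.r_[1.0, np.zeros(n)]                 # minimise theta over (theta, lambdas)
    # Inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.c_[-X[o], X.T]
    b_in = np.zeros(m)
    # Outputs: sum_j lambda_j * y_rj >= y_ro
    A_out = np.c_[np.zeros(s), -Y.T]
    b_out = -Y[o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

for o in range(len(X)):
    print(f"plant {o}: efficiency = {ccr_efficiency(o, X, Y):.3f}")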

Relevance: 30.00%

Abstract:

Clusters are aggregations of atoms or molecules, generally intermediate in size between individual atoms and aggregates large enough to be called bulk matter. Clusters can also be called nanoparticles, because their size is on the order of nanometers or tens of nanometers. A new field called nanostructured materials has begun to take shape which takes advantage of these atom clusters. The ultra-small size of the building blocks leads to dramatically different properties, and it is anticipated that such atomically engineered materials can be tailored to perform as no previous material could.

The idea of the ionized cluster beam (ICB) thin film deposition technique was first proposed by Takagi in 1972. It was based upon using a supersonic jet source to produce, ionize and accelerate beams of atomic clusters onto substrates in a vacuum environment. Conditions for the formation of cluster beams suitable for thin film deposition have only recently been established, following twenty years of effort. Zinc clusters over 1,000 atoms in average size have been synthesized both in our lab and in that of Gspann. More recently, other methods of synthesizing clusters and nanoparticles, using different types of cluster sources, have come under development.

In this work, we studied different aspects of nanoparticle beams. The work includes refinement of a model of the cluster formation mechanism, development of a new real-time, in situ cluster size measurement method, and study of the use of ICB in the fabrication of semiconductor devices.

The formation process of the vaporized-metal cluster beam was simulated and investigated using classical nucleation theory and one-dimensional gas flow equations. Zinc cluster sizes predicted at the nozzle exit are in good quantitative agreement with experimental results in our laboratory.

A novel in situ, real-time mass, energy and velocity measurement apparatus has been designed, built and tested. This small time-of-flight mass spectrometer is suitable for use in our cluster deposition systems and does not suffer from the problems associated with other methods of cluster size measurement, such as the need for specialized ionizing lasers, inductive electrical or electromagnetic coupling, dependence on the assumption of homogeneous nucleation, limits on the measurable size, and the lack of real-time capability. Measured ion energies using the electrostatic energy analyzer are in good accordance with values obtained from computer simulation. The velocity (v) is measured by pulsing the cluster beam and measuring the time delay between the pulse and the analyzer output current. The mass of a particle is calculated from m = 2E/v². The error in the measured value of the background gas mass is on the order of 28% of the mass of one N₂ molecule, which is negligible for the measurement of large clusters. This resolution in cluster size measurement is very acceptable for our purposes.

Selective area deposition onto conducting patterns overlying insulating substrates was demonstrated using intense, fully-ionized cluster beams. Parameters influencing the selectivity are ion energy, repelling voltage, the ratio of conductor to insulator dimensions, and substrate thickness.
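
A minimal numerical sketch of the mass determination described above, m = 2E/v², with the velocity taken from the pulse-to-signal time delay over a known drift length; the energy, distance and time values are hypothetical, chosen only so the result comes out near a thousand zinc atoms per cluster.

# Cluster mass from measured ion energy and time of flight (hypothetical numbers).
E_EV = 2000.0            # ion kinetic energy from the electrostatic analyzer, eV (hypothetical)
DRIFT_LENGTH_M = 0.50    # drift distance between beam pulser and analyzer, m (hypothetical)
FLIGHT_TIME_S = 200e-6   # measured delay between beam pulse and analyzer signal, s (hypothetical)

EV_TO_J = 1.602176634e-19
AMU_KG = 1.66053906660e-27
ZN_AMU = 65.38           # atomic mass of zinc

v = DRIFT_LENGTH_M / FLIGHT_TIME_S          # cluster velocity, m/s
m = 2.0 * E_EV * EV_TO_J / v**2             # cluster mass from m = 2E/v^2, kg
n_atoms = m / (ZN_AMU * AMU_KG)             # approximate number of Zn atoms per cluster

print(f"v = {v:.0f} m/s, m = {m:.3e} kg, ~{n_atoms:.0f} Zn atoms per cluster")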

Relevance: 30.00%

Abstract:

The single spin asymmetry, A_LT′, and the polarized structure function, σ_LT′, for the p(e⃗, e′K⁺)Λ reaction in the resonance region have been measured and extracted using the CEBAF Large Acceptance Spectrometer (CLAS) at Jefferson Lab. Data were taken at an electron beam energy of 2.567 GeV. The large acceptance of CLAS allows for full azimuthal angle coverage over a large range of center-of-mass scattering angles. Results were obtained that span a range in Q² from 0.5 to 1.3 GeV² and in W from threshold up to 2.1 GeV, and were compared to existing theoretical calculations. The polarized structure function is sensitive to the interferences between various resonant amplitudes, as well as between resonant and non-resonant amplitudes. This measurement is essential for understanding the structure of nucleons and for searching for previously undetected nucleon excited states (resonances) predicted by quark models. The W dependence of σ_LT′ in the kinematic regions dominated by s- and u-channel exchange (cos θ_K^c.m. = −0.50, −0.167, 0.167) indicated possible resonance structures not predicted by theoretical calculations. The σ_LT′ behavior around W = 1.875 GeV could be the signature of a resonance predicted by the quark models and possibly seen in photoproduction. At very forward angles, where the reaction is dominated by the t-channel, the average σ_LT′ was zero, with no indication of interference between resonances or between resonant and non-resonant amplitudes. This might indicate the dominance of a single t-channel exchange. Study of the sensitivity of the fifth structure function data to the resonance around 1900 MeV showed that these data were highly sensitive to the various model assumptions for the quantum numbers of this resonance. This project was part of a larger CLAS program to measure cross sections and polarization observables for kaon electroproduction in the nucleon resonance region.

Relevance: 30.00%

Abstract:

The development of a new set of frost property measurement techniques to be used in the control of frost growth and defrosting processes in refrigeration systems was investigated. Holographic interferometry and infrared thermometry were used to measure the temperature of the frost-air interface, while a beam element load sensor was used to obtain the weight of a deposited frost layer. The proposed measurement techniques were tested for the cases of natural and forced convection, and characteristic charts were obtained for a set of operational conditions.

An improvement of existing frost growth mathematical models was also investigated. The early stage of frost nucleation is commonly not considered in these models; instead, an initial value of layer thickness and porosity is usually assumed. A nucleation model was developed to obtain the droplet diameter and surface porosity at the end of the early frosting period. Drop-wise early condensation on a cold flat plate under natural convection from hot (room-temperature), humid air was modeled. A nucleation rate was found, and the relation of heat to mass transfer (the Lewis number) was obtained. The Lewis number was found to be much smaller than unity, which is the standard value usually assumed in most frosting numerical models. The nucleation model was validated against available experimental data for the early nucleation and full growth stages of the frosting process.

The combination of frost top temperature and weight variation signals can now be used to control defrosting timing, and the developed early nucleation model can now be used to simulate the entire process of frost growth on any surface material.
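
As a point of reference for the Lewis number discussion above, the short sketch below evaluates the textbook definition Le = α/D for water vapour in air near room temperature from standard property values; it shows why Le ≈ 1 is the usual assumption that the nucleation model developed here calls into question for the early condensation stage.

# Textbook Lewis number estimate for water vapour diffusing in air near 300 K.
# Property values are approximate handbook figures.
ALPHA_AIR = 2.25e-5      # thermal diffusivity of air at ~300 K, m^2/s
D_H2O_AIR = 2.6e-5       # mass diffusivity of water vapour in air at ~300 K, m^2/s

lewis = ALPHA_AIR / D_H2O_AIR
print(f"Le = {lewis:.2f}  (close to the Le ~ 1 assumption of most frosting models)")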

Relevance: 30.00%

Abstract:

With the rapid globalization and integration of world capital markets, more and more stocks are listed in multiple markets. For multi-listed stocks, the traditional measure of systematic risk, the domestic beta, is not appropriate since it only contains information from one market.

Prakash et al. (1993) developed a technique, the global beta, to capture information from the multiple markets in which a stock is listed. In this study, both global and domestic betas are obtained for 704 multi-listed stocks from 59 world equity markets. Welch tests show that domestic betas are not equal across markets; therefore, the global beta is more appropriate in a global investment setting.

The traditional Capital Asset Pricing Model (CAPM) is also tested with respect to both the domestic beta and the global beta. The results generally support a positive relationship between stock returns and the global beta, while tending to reject this relationship between stock returns and the domestic beta. Further tests of the international CAPM with the domestic beta and the global beta strengthen this conclusion.
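
The contrast between the two risk measures can be sketched with simulated return series as below; the pooled, equally weighted 'global' index used here is an illustrative assumption and not necessarily the exact Prakash et al. (1993) construction.

import numpy as np

rng = np.random.default_rng(42)
T = 250
r_home = rng.normal(0.0004, 0.010, T)       # home-market index returns (simulated)
r_foreign = rng.normal(0.0003, 0.012, T)    # foreign-market index returns (simulated)
r_stock = 0.8 * r_home + 0.4 * r_foreign + rng.normal(0, 0.010, T)

def beta(stock, market):
    # Slope of the market-model regression: cov(stock, market) / var(market).
    return np.cov(stock, market)[0, 1] / np.var(market, ddof=1)

r_global = 0.5 * r_home + 0.5 * r_foreign   # equally weighted pooled index (assumption)
print("domestic beta:", round(beta(r_stock, r_home), 3))
print("global beta:  ", round(beta(r_stock, r_global), 3))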

Relevance: 30.00%

Abstract:

This dissertation aimed to improve travel time estimation for the purpose of transportation planning by developing a travel time estimation method that incorporates the effects of signal timing plans, which are difficult to consider in planning models. For this purpose, an analytical model has been developed. The model parameters were calibrated based on data from CORSIM microscopic simulation, with signal timing plans optimized using the TRANSYT-7F software. Independent variables in the model are link length, free-flow speed, and traffic volumes of the competing turning movements. The developed model has three advantages over traditional link-based or node-based models. First, it considers the influence of signal timing plans for a variety of traffic volume combinations without requiring signal timing information as input. Second, it describes the non-uniform spatial distribution of delay along a link, and is thus able to estimate the impacts of queues at different upstream locations of an intersection and to attribute delays to the subject link and the upstream link. Third, it shows promise of improving the accuracy of travel time prediction. The mean absolute percentage error (MAPE) of the model is 13% for a set of field data from the Minnesota Department of Transportation (MDOT); this is close to the MAPE of the uniform delay in the HCM 2000 method (11%). The HCM is the industry-accepted analytical model in the existing literature, but it requires signal timing information as input for calculating delays. The developed model also outperforms the HCM 2000 method for a set of Miami-Dade County data representing congested traffic conditions, with a MAPE of 29% compared to 31% for the HCM 2000 method. The advantages of the proposed model make it feasible to apply to a large network without the burden of signal timing input, while improving the accuracy of travel time estimation. An assignment model with the developed travel time estimation method has been implemented in a South Florida planning model and improved the assignment results.
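
For reference, the MAPE statistic used in the comparisons above is computed as in this short sketch; the observed and predicted travel times are hypothetical values, not the MDOT or Miami-Dade data.

import numpy as np

observed = np.array([45.0, 60.0, 38.0, 52.0])    # field travel times, seconds (hypothetical)
predicted = np.array([41.0, 66.0, 35.0, 57.0])   # model estimates, seconds (hypothetical)

# Mean absolute percentage error, expressed in percent.
mape = np.mean(np.abs((observed - predicted) / observed)) * 100
print(f"MAPE = {mape:.1f}%")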

Relevance: 30.00%

Abstract:

This research addresses the problem of cost estimation for product development in engineer-to-order (ETO) operations. An ETO operation starts the product development process with a product specification and ends with delivery of a rather complicated, highly customized product. ETO operations are practiced in various industries such as engineering tooling, factory plants, industrial boilers, pressure vessels, shipbuilding, bridges and buildings. ETO views each product as a delivery item in an industrial project and needs an accurate estimate of its development cost at the bidding and/or planning stage, before any design or manufacturing activity starts.

Many ETO practitioners rely on an ad hoc approach to cost estimation, using past projects as references and adapting them to the new requirements. This process is often carried out on a case-by-case basis and in a non-procedural fashion, limiting its applicability to other industry domains and its transferability to other estimators. In addition to being time consuming, this approach usually does not lead to an accurate cost estimate, with errors varying from 30% to 50%.

This research proposes a generic cost modeling methodology for application in ETO operations across various industry domains. Using the proposed methodology, a cost estimator is able to develop a cost estimation model for a chosen ETO industry in a more expeditious, systematic and accurate manner.

The development of the proposed methodology followed the meta-methodology outlined by Thomann. Deploying the methodology, cost estimation models were created in two industry domains (building construction and steel milling equipment manufacturing). The models were then applied to real cases; the resulting cost estimates were significantly more accurate than the estimates actually made for those projects, with a mean absolute error rate of 17.3%.

This research fills an important need for quick and accurate cost estimation across various ETO industries. It differs from existing approaches in that a methodology is developed that can be used to quickly customize a cost estimation model for a chosen application domain. In addition to more accurate estimation, its major contributions are its transferability to other users and its applicability to different ETO operations.

Relevance: 30.00%

Abstract:

The CLAS Collaboration is using the p(e, e′K⁺p)π⁻ reaction to perform a measurement of the induced polarization of the electroproduced Λ(1116). The parity-violating weak decay of the Λ into pπ⁻ (64% branching ratio) allows extraction of the recoil polarization of the Λ. The present study uses the CEBAF Large Acceptance Spectrometer (CLAS) to detect the scattered electron, the kaon, and the decay proton. CLAS allows for a large kinematic acceptance in Q² (0.8 ≤ Q² ≤ 3.5 GeV²) and W (1.6 ≤ W ≤ 3.0 GeV), as well as in the kaon scattering angle. In this experiment a 5.499 GeV electron beam was incident upon an unpolarized liquid-hydrogen target. The goal is to map out the kinematic dependencies of this polarization observable to provide new constraints for theoretical models of the electromagnetic production of kaon-hyperon final states. Along with previously published photo- and electroproduction cross sections and polarization observables from CLAS, SAPHIR, and GRAAL, these data are needed in a coupled-channel analysis to identify previously unobserved s-channel resonances.
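
A toy sketch (simulated decays, not CLAS data) of why the self-analysing weak decay gives access to the polarization: in the Λ rest frame the decay proton follows dN/dcosθ ∝ 1 + αP·cosθ, so P can be estimated as 3⟨cosθ⟩/α; the asymmetry parameter and the 'true' polarization below are assumed values.

import numpy as np

ALPHA = 0.642          # Lambda decay asymmetry parameter (value in use at the time; an assumption here)
P_TRUE = 0.3           # assumed induced polarization for the toy sample

rng = np.random.default_rng(7)
# Sample cos(theta) from f(c) proportional to 1 + ALPHA*P_TRUE*c on [-1, 1] by rejection.
c = rng.uniform(-1, 1, 200_000)
keep = rng.uniform(0, 1 + ALPHA * P_TRUE, c.size) < (1 + ALPHA * P_TRUE * c)
cos_theta = c[keep]

# Since <cos(theta)> = alpha*P/3 for this distribution, invert for P.
P_est = 3 * cos_theta.mean() / ALPHA
print(f"extracted polarization: {P_est:.3f} (true value {P_TRUE})")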

Relevance: 30.00%

Abstract:

The two-photon exchange phenomenon is believed to be responsible for the discrepancy observed in the ratio of the proton electric and magnetic form factors as measured by the Rosenbluth and polarization transfer methods. This disagreement is about a factor of three at Q² of 5.6 GeV². Precise knowledge of the proton form factors is of critical importance for understanding the structure of this nucleon. The theoretical models that estimate the size of the two-photon exchange (TPE) radiative correction are poorly constrained. This factor was found to be directly measurable by taking the ratio of the electron-proton and positron-proton elastic scattering cross sections, as the TPE effect changes sign with the charge of the incident particle. A test run of a modified beamline has been conducted with the CEBAF Large Acceptance Spectrometer (CLAS) at the Thomas Jefferson National Accelerator Facility. This test run demonstrated the feasibility of producing a mixed electron/positron beam of good quality. Extensive simulations performed prior to the run were used to reduce the background rate that limits the production luminosity. A 3.3 GeV primary electron beam was used, resulting in an average secondary lepton beam energy of 1 GeV. As a result, elastic scattering data for both lepton types were obtained at scattering angles up to 40 degrees for Q² up to 1.5 GeV². The cross-section ratio displayed an ε dependence that was itself Q²-dependent in the lower Q² range. The magnitude of the average ratio as a function of ε was consistent with previous measurements and with the elastic (Blunden) model to within the experimental uncertainties. Ultimately, higher luminosity is needed to extend the data range to lower ε, where the TPE effect is predicted to be largest.

Relevance: 30.00%

Abstract:

Colleges base their admission decisions on a number of factors to determine which applicants have the potential to succeed. This study utilized data for students who graduated from Florida International University between 2006 and 2012. Two models were developed (one using SAT and the other using ACT as the principal explanatory variable) to predict college success, measured by the student's college grade point average at graduation. Other factors used to make these predictions included high school performance, socioeconomic status, major, gender, and ethnicity. The model using ACT had a higher R², but the model using SAT had a lower mean square error. African Americans had a significantly lower college grade point average than graduates of other ethnicities. Females had a significantly higher college grade point average than males.
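
A minimal sketch of the model comparison described above, on synthetic records rather than the FIU data: one regression takes SAT and the other ACT as the principal explanatory variable, and the two are compared on R² and mean square error; all coefficients and distributions are invented.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(3)
n = 1000
hs_gpa = rng.normal(3.4, 0.4, n)                    # high school performance (synthetic)
sat = rng.normal(1100, 150, n)
act = sat / 45 + rng.normal(0, 1.5, n)              # crudely correlated with SAT (synthetic)
college_gpa = 0.6 * hs_gpa + 0.0008 * sat + rng.normal(0, 0.3, n)

for name, score in [("SAT", sat), ("ACT", act)]:
    X = np.column_stack([score, hs_gpa])
    model = LinearRegression().fit(X, college_gpa)
    pred = model.predict(X)
    print(f"{name}: R^2 = {r2_score(college_gpa, pred):.3f}, "
          f"MSE = {mean_squared_error(college_gpa, pred):.4f}")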

Relevance: 30.00%

Abstract:

Microelectronic systems are multi-material, multi-layer structures, fabricated and exposed to environmental stresses over a wide range of temperatures. Thermal and residual stresses created by thermal mismatches in films and interconnections are a major cause of failure in microelectronic devices. Due to new device materials, increasing die size and the introduction of new materials for enhanced thermal management, differences in the thermal expansion of the various packaging materials have become exceedingly important and can no longer be neglected. X-ray diffraction is an analytical method that uses a monochromatic characteristic X-ray beam to characterize the crystal structure of materials by measuring the distances between planes in the atomic crystal lattice. As a material is strained, this interplanar spacing is correspondingly altered, and the microscopic strain is used to determine the macroscopic strain. This thesis investigates and describes the theory and implementation of X-ray diffraction for the measurement of residual thermal strains. The design of a computer-controlled stress attachment stage, fully compatible with an Anton Paar heat stage, will be detailed. The stress determined by the diffraction method will be compared with bimetallic strip theory and finite element models.
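
A minimal sketch of the strain calculation the diffraction method rests on: Bragg's law converts a measured peak position into an interplanar spacing, and the lattice (microscopic) strain is its relative change from the unstressed spacing; the wavelength is the Cu Kα1 value and the peak angles are hypothetical.

import math

WAVELENGTH_NM = 0.15406            # Cu K-alpha1 wavelength, nm

def d_spacing(two_theta_deg):
    """Interplanar spacing from Bragg's law, lambda = 2 d sin(theta)."""
    theta = math.radians(two_theta_deg / 2)
    return WAVELENGTH_NM / (2 * math.sin(theta))

d0 = d_spacing(44.50)              # unstressed reference peak position, degrees 2-theta (hypothetical)
d = d_spacing(44.42)               # peak measured on the stressed film (hypothetical)

strain = (d - d0) / d0             # lattice strain from the shift in interplanar spacing
print(f"lattice strain = {strain:.2e}")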

Relevance: 30.00%

Abstract:

The sedimentary sections of three cores from the Celtic margin provide high-resolution records of the terrigenous fluxes during the last glacial cycle. A total of 21 ¹⁴C AMS dates allow us to define age models with a resolution better than 100 yr during critical periods such as Heinrich events 1 and 2. Maximum sedimentary fluxes occurred at the Meriadzek Terrace site during the Last Glacial Maximum (LGM). Detailed X-ray imagery of core MD95-2002 from the Meriadzek Terrace shows no sedimentary structures suggestive of either deposition from high-density turbidity currents or significant erosion. Two paroxysmal terrigenous flux episodes have been identified. The first occurred after the deposition of Heinrich event 2 Canadian ice-rafted debris (IRD) and includes IRD from European sources. We suggest that the second represents an episode of deposition from turbid plumes, which precedes IRD deposition associated with Heinrich event 1. At the end of marine isotopic stage 2 (MIS 2) and the beginning of MIS 1 the highest fluxes are recorded on the Whittard Ridge, where they correspond to deposition from turbidity current overflows. Canadian icebergs rafted debris at the Celtic margin during Heinrich events 1, 2, 4 and 5. The high-resolution records of Heinrich events 1 and 2 show that in both cases the arrival of the Canadian icebergs was preceded by a European ice-rafting precursor event, which took place about 1–1.5 kyr before. Two rafting episodes of European IRD also occurred immediately after Heinrich event 2 and just before Heinrich event 1. The terrigenous fluxes recorded in core MD95-2002 during the LGM are the highest reported at hemipelagic sites from the northwestern European margin. The magnitude of the Canadian IRD fluxes at Meriadzek Terrace is similar to those from oceanic sites.