870 results for purchase confidence


Relevance:

20.00%

Publisher:

Abstract:

To investigate potentially dissociable recognition memory responses in the hippocampus and perirhinal cortex, fMRI studies have often used confidence ratings as an index of memory strength. Confidence ratings, although correlated with memory strength, also reflect other sources of variability, including task-irrelevant item effects and differences both within and across individuals in how decision criteria are applied to separate weak from strong memories. We presented words one, two, or four times at study in each of two conditions, focused and divided attention, and then conducted separate fMRI analyses of correct old responses on the basis of either subjective confidence ratings or estimates from single- versus dual-process recognition memory models. Overall, focusing attention on spaced repetitions at study enhanced recognition memory performance. Confidence- versus model-based analyses revealed disparate patterns of hippocampal and perirhinal cortex activity at both study and test, both within and across hemispheres. The failure to observe equivalent patterns of activity indicates that fMRI signals associated with subjective confidence ratings reflect additional sources of variability. The results are consistent with predictions of single-process models of recognition memory.
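
As a worked illustration of the model-based route (a minimal sketch, assuming the single-process model is equal-variance signal detection; the counts are hypothetical, and a dual-process model would add a separate recollection parameter):

    from scipy.stats import norm

    def dprime(hits, misses, false_alarms, correct_rejections):
        """Memory strength d' under an equal-variance signal detection
        (single-process) model, with a log-linear correction so that
        hit/false-alarm rates of exactly 0 or 1 stay finite."""
        h = (hits + 0.5) / (hits + misses + 1.0)
        f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        return norm.ppf(h) - norm.ppf(f)

    # Hypothetical counts for one condition (e.g., words studied four times
    # under focused attention)
    print(round(dprime(hits=78, misses=22, false_alarms=15,
                       correct_rejections=85), 2))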

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we use an experimental design to compare the performance of elicitation rules for subjective beliefs. In contrast to previous work, in which elicited beliefs are compared to an objective benchmark, we consider a purely subjective belief framework (confidence in one's own performance in a cognitive task and a perceptual task). The performance of the different elicitation rules is assessed by how accurately stated beliefs predict success. We measure this accuracy using two main factors, calibration and discrimination; for each, we propose two statistical indexes and compare the rules' performance on each measurement. The matching probability method provides more accurate beliefs in terms of discrimination, the quadratic scoring rule reduces overconfidence, and the free rule, a simple rule with no incentives, succeeds in eliciting accurate beliefs. Nevertheless, the matching probability appears to be the best mechanism for eliciting beliefs, owing to its performance in terms of calibration and discrimination, its ability to elicit consistent beliefs across measures and across tasks, and its empirical and theoretical properties.
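
To make calibration and discrimination concrete, the sketch below uses one standard pair of indexes, the reliability and resolution terms of the Murphy decomposition of the Brier score; these are illustrative stand-ins, not necessarily the indexes proposed in the paper, and the data are simulated:

    import numpy as np

    def murphy_decomposition(beliefs, outcomes, bins=10):
        """Brier score split into reliability (calibration) and resolution
        (discrimination) by binning stated beliefs; the identity
        brier = reliability - resolution + uncertainty is exact only when
        beliefs are constant within each bin, so treat it as approximate."""
        beliefs, outcomes = np.asarray(beliefs, float), np.asarray(outcomes, float)
        base_rate = outcomes.mean()
        edges = np.linspace(0.0, 1.0, bins + 1)
        which = np.clip(np.digitize(beliefs, edges) - 1, 0, bins - 1)
        reliability = resolution = 0.0
        for b in range(bins):
            mask = which == b
            if mask.any():
                w = mask.mean()
                reliability += w * (beliefs[mask].mean() - outcomes[mask].mean()) ** 2
                resolution += w * (outcomes[mask].mean() - base_rate) ** 2
        uncertainty = base_rate * (1.0 - base_rate)
        brier = np.mean((beliefs - outcomes) ** 2)
        return brier, reliability, resolution, uncertainty

    # Illustrative data: decent discrimination with mild overconfidence
    rng = np.random.default_rng(0)
    truth = rng.integers(0, 2, 500)
    stated = np.clip(0.5 + (truth - 0.5) * 0.6 + rng.normal(0, 0.15, 500), 0, 1)
    print(murphy_decomposition(stated, truth))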

Relevance:

20.00%

Publisher:

Abstract:

We propose a new model for estimating the size of a population from successive catches taken during a removal experiment. The data from these experiments often show excessive variation, known as overdispersion, compared with that predicted by the multinomial model. The new model allows catchability to vary randomly among samplings, which accounts for the overdispersion. When the catchability is assumed to have a beta distribution, the likelihood function, referred to as beta-multinomial, can be derived, and hence maximum likelihood estimates can be evaluated. Simulations show that in the presence of extra variation in the data, previous models (Leslie-DeLury, Moran) produce confidence intervals that are substantially too narrow, whereas the new model provides more reliable confidence intervals. The performance of these methods is also demonstrated using two real data sets: one with overdispersion, from smallmouth bass (Micropterus dolomieu), and the other without overdispersion, from rat (Rattus rattus).
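
A minimal sketch of the estimator, assuming the beta-multinomial likelihood factorizes into one beta-binomial term per removal occasion (catchability drawn independently from Beta(a, b) at each sampling); the catch data and the profiling range for N are illustrative, not from the paper:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import betabinom

    catches = [142, 101, 73, 55, 38]   # illustrative removal catches

    def neg_log_lik(params, N):
        """Joint likelihood as a product of per-occasion beta-binomials:
        at each sampling, catchability ~ Beta(a, b) independently."""
        a, b = np.exp(params)          # optimize on the log scale to keep a, b > 0
        remaining, nll = N, 0.0
        for c in catches:
            if c > remaining:
                return np.inf
            nll -= betabinom.logpmf(c, remaining, a, b)
            remaining -= c
        return nll

    # Profile the likelihood over the (integer) population size N
    best = None
    for N in range(sum(catches), 1500):
        fit = minimize(neg_log_lik, x0=[0.0, 1.0], args=(N,), method="Nelder-Mead")
        if best is None or fit.fun < best[1]:
            best = (N, fit.fun, np.exp(fit.x))
    print("N-hat:", best[0], "Beta(a, b):", np.round(best[2], 3))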

Relevance:

20.00%

Publisher:

Abstract:

Quantitative analysis of consumer preference and choice of fruit can provide insight into the relative importance of various aspects of product quality. In this study, methods previously used to establish taste preferences for kiwifruit (Harker et al., 2008), together with conjoint approaches, were used to determine the influence of three key aspects of avocado quality on consumer liking and willingness to purchase fruit: dry matter percentage (DM), level of ripeness (firmness), and internal defects (bruising). One hundred and seven consumers tasted avocados spanning DM levels from ~20% (minimally mature) to nearly 40% (very mature) and a range of firmness (ripeness) stages (firm-ripe to soft-ripe). Responses to bruising, a common quality defect in fruit obtained from the retail shelf, were examined using a conjoint approach in which consumers were shown photographs of fruit affected by damage of varying severity. For DM, consumers showed a progressive increase in liking and intent to buy as DM increased. For ripeness, liking and purchase intent were higher for avocados that had softened to a firmness of 6.5 N or below (hand-rating 5). For internal defects, conjoint analysis revealed that price, level of bruising, and incidence of bruising all significantly lowered consumers' intent to purchase in the future, with the latter two factors having a greater impact than price. These results demonstrate the usefulness of the methodology and provide realistic targets for Hass avocado quality on the retail shelf.
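
A sketch of the conjoint step under assumed attribute codings (the levels, ratings, and effect sizes below are invented for illustration): part-worth utilities can be recovered by regressing purchase-intent ratings on dummy-coded attribute levels:

    import numpy as np

    # Hypothetical profiles: (bruising severity 0-2, bruising incidence 0-2, price level 0-2)
    profiles = np.array([(s, i, p) for s in range(3) for i in range(3) for p in range(3)])
    # Invented mean purchase-intent ratings, one per profile (1 = would not buy, 7 = would buy)
    rng = np.random.default_rng(1)
    ratings = (6.0 - 1.2 * profiles[:, 0] - 1.0 * profiles[:, 1]
               - 0.5 * profiles[:, 2] + rng.normal(0, 0.3, len(profiles)))

    # Dummy-code each attribute (first level as baseline) and fit by least squares
    X = np.column_stack([profiles[:, k] == lvl
                         for k in range(3) for lvl in (1, 2)]).astype(float)
    X = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)
    print("part-worths (severity1, severity2, incidence1, incidence2, price1, price2):")
    print(np.round(coef[1:], 2))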

Relevance:

20.00%

Publisher:

Abstract:

Successful identification of these factors' influence upon TFS will empower stakeholders to make informed decisions as to how best to utilise the resource and boost consumer confidence, thus ensuring the improved profitability of the fishery into the future.

Relevance:

20.00%

Publisher:

Abstract:

In visual object detection and recognition, classifiers have two interesting characteristics: accuracy and speed. Accuracy depends on the complexity of the image features and classifier decision surfaces. Speed depends on the hardware and on the computational effort required to use the features and decision surfaces. When attempts to increase accuracy lead to increases in complexity and effort, it is necessary to ask how much we are willing to pay for increased accuracy. For example, if increased computational effort implies quickly diminishing returns in accuracy, then those designing inexpensive surveillance applications cannot aim for maximum accuracy at any cost. It becomes necessary to find trade-offs between accuracy and effort.

We study efficient classification of images depicting real-world objects and scenes. Classification is efficient when a classifier can be controlled so that the desired trade-off between accuracy and effort (speed) is achieved and unnecessary computations are avoided on a per-input basis. A framework is proposed for understanding and modeling efficient classification of images, in which classification is modeled as a tree-like process. In designing the framework, it is important to recognize what is essential and to avoid structures that are narrow in applicability; earlier frameworks are lacking in this regard.

The overall contribution is two-fold. First, the framework is presented, subjected to experiments, and shown to be satisfactory. Second, certain unconventional approaches are experimented with, which allows the essential to be separated from the conventional. To determine whether the framework is satisfactory, three categories of questions are identified: trade-off optimization, classifier tree organization, and rules for delegation and confidence modeling. Questions and problems related to each category are addressed and empirical results are presented. For example, related to trade-off optimization, we address the problem of computational bottlenecks that limit the range of trade-offs, and we ask whether accuracy-versus-effort trade-offs can be controlled after training. Regarding classifier tree organization, we first consider the task of organizing a tree in a problem-specific manner and then ask whether problem-specific organization is necessary.
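
One way to make the delegation-and-confidence idea concrete (a generic two-stage sketch, not the thesis's exact framework): a cheap classifier handles an input unless its confidence falls below a threshold tau, in which case it delegates to a costlier classifier; sweeping tau controls the accuracy-versus-effort trade-off after training:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=4000, n_features=30, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    cheap = LogisticRegression(max_iter=1000).fit(Xtr, ytr)   # fast, less accurate
    costly = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)

    for tau in (0.6, 0.8, 0.95):      # delegation threshold on the cheap model's confidence
        proba = cheap.predict_proba(Xte)
        conf = proba.max(axis=1)
        pred = proba.argmax(axis=1)
        delegate = conf < tau         # low-confidence inputs go to the costly model
        if delegate.any():
            pred[delegate] = costly.predict(Xte[delegate])
        acc = (pred == yte).mean()
        print(f"tau={tau}: accuracy={acc:.3f}, delegated={delegate.mean():.1%}")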

Relevance:

20.00%

Publisher:

Abstract:

Digital image

Relevance:

20.00%

Publisher:

Abstract:

The questions one should answer about engineering computations, whether deterministic, probabilistic/randomized, or heuristic, are (i) how good the computed results/outputs are and (ii) what the cost is, in terms of the amount of computation and the amount of storage used to obtain the outputs. The absolutely error-free quantities, as well as the completely errorless computations occurring in a natural process, can never be captured by any means at our disposal. While the computations in nature/natural processes, including their real-valued inputs, are exact, computations done using a digital computer, or carried out in embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error, as a matter of hypothesis and not as a matter of assumption, is not less than 0.005 per cent. Here, by error we mean relative error bounds: the fact that the exact error is never known under any circumstances or in any context implies that the term error denotes nothing but error bounds. Further, in engineering computations it is the relative error, or equivalently the relative error bounds (and not the absolute error), that is supremely important in conveying the quality of the results/outputs.

Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency through human error, through the inherent non-removable error associated with any measuring device, or through assumptions introduced to make the problem solvable, or more easily solvable, in practice. Thus, if we discover any inconsistency, or possibly any near-inconsistency, in a mathematical model, it is certainly due to one or more of these three factors. We do, however, go ahead and solve such inconsistent/near-inconsistent problems, and we do obtain results that can be useful in real-world situations.

The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It characterizes the quality of the results/outputs by specifying relative error bounds along with the associated confidence level, and the cost, viz., the amount of computation and of storage, through complexity. It points out the limitations of error-free computation (wherever possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
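
To make the central quantities concrete, here is a small illustrative sketch (not from the talk) that propagates the hypothesized minimum input relative error bound of 0.005 per cent through two first-order rules, showing why relative bounds, unlike absolute ones, expose cancellation:

    # Hypothesized minimum input relative error bound from the talk: 0.005 per cent
    R_INPUT = 5e-5

    def rel_bound_product(*rel_bounds):
        """First-order rule: for products and quotients,
        relative error bounds add."""
        return sum(rel_bounds)

    def rel_bound_difference(a, b, r):
        """Subtraction can amplify a relative bound when a is close to b
        (catastrophic cancellation)."""
        return (abs(a) + abs(b)) * r / abs(a - b)

    # Example: P = V * I / R computed from three measured inputs
    print(f"product/quotient: {rel_bound_product(R_INPUT, R_INPUT, R_INPUT):.1e}")
    # Example: nearly equal operands turn a 0.005% bound into a ~100% bound
    print(f"difference:       {rel_bound_difference(1.0001, 1.0, R_INPUT):.2f}")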

Relevance:

20.00%

Publisher:

Abstract:

This paper shows the extraordinary capacity of yield spreads to anticipate consumption growth, as proxied by the Economic Sentiment Indicator compiled by the European Commission to predict turning points in business cycles. This new evidence complements the well-known results on the usefulness of the slope of the term structure of interest rates for predicting real economic conditions, and recessions in particular, by using a direct measure of expectations. A linear combination of European yield spreads explains a surprising 93.7% of the variability of the Economic Sentiment Indicator. Yield spreads appear to be a key determinant of consumer confidence in Europe.
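
A sketch of the kind of regression behind the 93.7% figure; the spread definitions, data, and coefficients below are hypothetical stand-ins, since the paper's exact specification is not reproduced here:

    import numpy as np

    # Hypothetical monthly data: three yield spreads (e.g., 10y-3m, 5y-1y, 2y-3m)
    rng = np.random.default_rng(2)
    T = 240
    spreads = rng.normal(0.0, 1.0, (T, 3))
    esi = 100 + spreads @ np.array([4.0, 2.5, 1.0]) + rng.normal(0.0, 1.2, T)

    # OLS of the sentiment indicator on the spreads; R^2 measures explained variability
    X = np.column_stack([np.ones(T), spreads])
    beta, *_ = np.linalg.lstsq(X, esi, rcond=None)
    resid = esi - X @ beta
    r2 = 1.0 - resid.var() / esi.var()
    print(f"R^2 = {r2:.3f}")   # the paper reports 93.7% with actual European data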

Relevance:

20.00%

Publisher:

Abstract:

This technical memorandum documents the design, implementation, data preparation, and descriptive results for the 2006 Annual Economic Survey of Federal Gulf Shrimp Permit Holders. The data collection was designed by the NOAA Fisheries Southeast Fisheries Science Center Social Science Research Group to track the financial and economic status and performance of vessels holding a federal moratorium permit for harvesting shrimp in the Gulf of Mexico. A two-page, self-administered mail survey collected total annual costs, broken out into seven categories, and auxiliary economic data. In May 2007, 580 vessels were randomly selected, stratified by state, from a preliminary population of 1,709 vessels with federal permits to shrimp in offshore waters of the Gulf of Mexico. The survey was implemented during the rest of 2007. After many reminder and verification phone calls, 509 surveys were deemed complete, for an ineligibility-adjusted response rate of 90.7%. The linking of each individual vessel's cost data to its revenue data from a different data collection was imperfect, and hence the final number of observations used in the analyses is 484. Based on various measures and tests of validity throughout the technical memorandum, the quality of the data is high.

The results are presented in a standardized table format, linking vessel characteristics and operations to simple balance sheet, cash flow, and income statements. In the text, results are discussed for the total fleet, the Gulf shrimp fleet, the active Gulf shrimp fleet, and the inactive Gulf shrimp fleet. Additional results for shrimp vessels grouped by state, by vessel characteristics, by landings volume, and by ownership structure are available in the appendices. The general conclusion of this report is that the financial and economic situation is bleak for the average vessel in most of the categories evaluated. With few exceptions, cash flow for the average vessel is positive, while the net revenue from operations and the “profit” are negative. With negative net revenue from operations, the economic return for the average shrimp vessel is less than zero. Only with the help of government payments does the average owner just about break even. In the short term, this will discourage any new investment in the industry. The financial situation in 2006, especially if it endures over multiple years, is also economically unsustainable for the average established business.

Vessels in the active and inactive Gulf shrimp fleet are, on average, 69 feet long, weigh 105 gross tons, are powered by 505 hp motor(s), and are 23 years old. Three-quarters of the vessels have steel hulls and 59% use a freezer for refrigeration. The average market value of these vessels was $175,149 in 2006, about a hundred thousand dollars less than the average original purchase price. Outstanding loans averaged $91,955, leading to an average owner equity of $83,194.

Based on the sample, 85% of the federally permitted Gulf shrimp fleet was actively shrimping in 2006. Of these 386 active Gulf shrimp vessels, just under half (46%) were owner-operated. On average, these vessels burned 52,931 gallons of fuel, landed 101,268 pounds of shrimp, and received $2.47 per pound of shrimp. Non-shrimp landings added less than 1% to cash flow, indicating that the federal Gulf shrimp fishery is very specialized. The average total cash outflow was $243,415, of which $108,775 was due to fuel expenses alone. The expenses for hired crew and captains averaged $54,866, which indicates the importance of the industry as a source of wage income. The resulting average net cash flow is $16,225 but has a large standard deviation. For the population of active Gulf shrimp vessels, we can state with 95% certainty that the average net cash flow was between $9,500 and $23,000 in 2006. The median net cash flow was $11,843.

Based on the income statement for active Gulf shrimp vessels, average fixed costs accounted for just under a quarter of operating expenses (23.1%), labor costs for just over a quarter (25.3%), and non-labor variable costs for just over half (51.6%). Fuel costs alone accounted for 42.9% of total operating expenses in 2006. It should be noted that the labor cost category in the income statement includes both the actual cash payments to hired labor and an estimate of the opportunity cost of owner-operators' time spent as captain. The average labor contribution (as captain) of an owner-operator is estimated at about $19,800. The average net revenue from operations is negative $7,429 and is statistically different from, and less than, zero in spite of a large standard deviation. The economic return to Gulf shrimping is negative 4%. Including non-operating activities, foremost an average government payment of $13,662, leads to an average loss before taxes of $907 for the vessel owners. The confidence interval of this value straddles zero, so we cannot reject, with 95% certainty, that the population average is zero.

The average inactive Gulf shrimp vessel is generally of a smaller scale than the average active vessel. Inactive vessels are physically smaller, are valued much lower, and are less dependent on loans. Fixed costs account for nearly three-quarters of their total operating expenses of $11,926, and only 6% of these vessels have hull insurance. With an average net cash flow of negative $7,537, the inactive Gulf shrimp fleet has a major liquidity problem. On average, net revenue from operations is negative $11,396, which amounts to a negative 15% economic return, and owners lose $9,381 on their vessels before taxes. To sustain such losses, and especially to survive the negative cash flow, many of the owners must be subsidizing their shrimp vessels with other income or wealth sources or drawing down their equity.

Active Gulf shrimp vessels in all states but Texas exhibited negative returns. The Alabama and Mississippi fleets have the highest assets (vessel values), on average, yet they generate zero cash flow and negative $32,224 net revenue from operations. Due to their high (loan) leverage ratio, the negative 11% economic return is amplified into a negative 21% return on equity. In contrast, for Texas vessels, which actually have the highest leverage ratio among the states, a 1% economic return is amplified into a 13% return on equity. From a financial perspective, the average Florida and Louisiana vessels conform roughly to the overall average of the active Gulf shrimp fleet. It should be noted that these results are averages and hence hide the variation that clearly exists within all fleets and all categories. Although the financial situation for the average vessel is bleak, some vessels are profitable. (PDF contains 101 pages)
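
The interval statements above are standard t-based confidence intervals for a mean; a sketch with invented data (not the survey's microdata) of how the $9,500 to $23,000 net cash flow interval would be computed:

    import numpy as np
    from scipy import stats

    # Invented sample standing in for the 386 active vessels' net cash flows
    rng = np.random.default_rng(3)
    net_cash_flow = rng.normal(16225, 67000, 386)   # large standard deviation, as reported

    mean = net_cash_flow.mean()
    sem = stats.sem(net_cash_flow)
    lo, hi = stats.t.interval(0.95, df=net_cash_flow.size - 1, loc=mean, scale=sem)
    print(f"95% CI for mean net cash flow: ${lo:,.0f} to ${hi:,.0f}")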

Relevance:

20.00%

Publisher:

Abstract:

The use of self-contained, low-maintenance sensor systems installed on commercial vessels is becoming an important monitoring and scientific tool in many regions around the world. These systems integrate data from meteorological and water quality sensors with GPS data into a data stream that is automatically transferred from ship to shore. To begin linking some of this developing expertise, the Alliance for Coastal Technologies (ACT) and the European Coastal and Ocean Observing Technology (ECOOT) organized a workshop on this topic in Southampton, United Kingdom, October 10-12, 2006. The participants, who included technology users, technology developers, and shipping representatives, collaborated to identify the sensors currently employed on integrated systems, the users of their data, the limitations associated with these systems, and ways to overcome those limitations. The group also identified additional technologies that could be employed on future systems and examined whether standard architectures and data protocols for integrated systems should be established.

Participants at the workshop identified 17 different parameters currently being measured by integrated systems, and noted that the information is used by diverse groups, ranging from resource management agencies, such as the Environmental Protection Agency (EPA), to local tourism groups and educational organizations. Among the limitations identified were instrument compatibility and interoperability, data quality control and quality assurance, and sensor calibration and/or maintenance frequency. Standardization of these integrated systems was viewed as both advantageous and disadvantageous: while participants believed that standardization could be beneficial on many levels, they also felt that users may be hesitant to purchase a suite of instruments from a single manufacturer, and that a “plug and play” system including sensors from multiple manufacturers may be difficult to achieve.

A priority recommendation and conclusion for the general integrated sensor system community was to provide vessel operators with real-time access to relevant data (e.g., ambient temperature and salinity to increase the efficiency of water treatment systems, and meteorological data for increased vessel safety and operating efficiency) for broader system value. Simplified data displays are also required for education and public outreach/awareness. Other key recommendations were to encourage the use of integrated sensor packages within observing systems such as IOOS and EuroGOOS, to identify additional customers of sensor system data, and to publish results of previous work in peer-reviewed journals to increase agency and scientific awareness of, and confidence in, the technology. Priority recommendations and conclusions for ACT entailed highlighting the value of integrated sensor systems for vessels of opportunity through articles in the popular press and the marine science literature. [PDF contains 28 pages]
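
A minimal sketch of the integration idea described above; the field names and the JSON-lines transport are assumptions for illustration, not a standard endorsed by the workshop:

    import json
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone

    @dataclass
    class IntegratedRecord:
        """One record in the ship-to-shore data stream: a GPS position plus
        meteorological and water quality readings taken at the same instant."""
        time_utc: str
        lat: float
        lon: float
        air_temp_c: float
        wind_speed_ms: float
        water_temp_c: float
        salinity_psu: float

    record = IntegratedRecord(
        time_utc=datetime.now(timezone.utc).isoformat(),
        lat=50.8919, lon=-1.3940,      # hypothetical position near Southampton
        air_temp_c=14.2, wind_speed_ms=6.1,
        water_temp_c=12.8, salinity_psu=34.9,
    )
    print(json.dumps(asdict(record)))  # one line per record in the outgoing stream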