929 results for Energy Performance of Buildings


Relevance:

100.00%

Publisher:

Abstract:

In the 1990s, China's feed sector became increasingly privatized, more feed mills opened, and the scale of operation expanded. Capacity utilization remained low and multi-ministerial supervision was still prevalent, but the feed mill sector showed positive performance overall, posting a growth rate of 11 percent per year. The profit margin over sales was within the allowable range set by the government of China of 3 to 5 percent. Financial efficiency improved, with a 20 percent quicker turnover of working capital. Average technical efficiency was 0.805, as more efficient feed mills increasingly gained production shares. This study finds evidence that increasing privatization explains the improved performance of the commercial feed mill sector. The drivers that shaped the feed mill sector in the 1990s have changed with China's accession to the World Trade Organization. Under the new policy regime, the study foresees that, assuming an adequate supply of soy meal and excess capacity in the feed mill sector, China is likely to allow corn imports up to the tariff rate quota (TRQ) of 7.2 million metric tons (mmt), since the in-quota rate is very low at 1 percent. However, when the TRQ is exceeded, the import duty jumps to a prohibitive out-quota rate of 65 percent. With an import duty for meat of only 10 to 12 percent, China would have a strong incentive to import meat products directly rather than bring in expensive corn to produce meat domestically. This would be further reinforced if structural transformation in the swine sector were to narrow the cost differential between domestic and imported pork.
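The two-tier tariff schedule described above amounts to a simple duty calculation. A minimal sketch, using the 7.2 mmt quota and the 1/65 percent rates from the abstract; the function name and price argument are illustrative:

```python
def corn_import_duty(quantity_mmt, value_per_mmt):
    """Duty under a tariff rate quota (TRQ) regime: a low in-quota rate
    applies up to the quota, a prohibitive out-quota rate beyond it.
    Rates and quota taken from the abstract above."""
    TRQ = 7.2            # tariff rate quota, million metric tons
    IN_QUOTA_RATE = 0.01   # 1 percent
    OUT_QUOTA_RATE = 0.65  # 65 percent
    in_quota = min(quantity_mmt, TRQ)
    out_quota = max(quantity_mmt - TRQ, 0.0)
    return value_per_mmt * (in_quota * IN_QUOTA_RATE + out_quota * OUT_QUOTA_RATE)
```

The jump in marginal duty at 7.2 mmt is what makes out-quota corn imports unattractive relative to importing meat at a 10 to 12 percent duty.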

Relevance:

100.00%

Publisher:

Abstract:

PURPOSE: To determine and compare the diagnostic performance of magnetic resonance imaging (MRI) and computed tomography (CT) for the diagnosis of tumor extent in advanced retinoblastoma, using histopathologic analysis as the reference standard. DESIGN: Systematic review and meta-analysis. PARTICIPANTS: Patients with advanced retinoblastoma who underwent MRI, CT, or both for the detection of tumor extent from published diagnostic accuracy studies. METHODS: Medline and Embase were searched for literature published through April 2013 assessing the diagnostic performance of MRI, CT, or both in detecting intraorbital and extraorbital tumor extension of retinoblastoma. Diagnostic accuracy data were extracted from included studies. Summary estimates were based on a random effects model. Intrastudy and interstudy heterogeneity were analyzed. MAIN OUTCOME MEASURES: Sensitivity and specificity of MRI and CT in detecting tumor extent. RESULTS: Data of the following tumor-extent parameters were extracted: anterior eye segment involvement and ciliary body, optic nerve, choroidal, and (extra)scleral invasion. Articles on MRI reported results of 591 eyes from 14 studies, and articles on CT yielded 257 eyes from 4 studies. The summary estimates with their 95% confidence intervals (CIs) of the diagnostic accuracy of conventional MRI at detecting postlaminar optic nerve, choroidal, and scleral invasion showed sensitivities of 59% (95% CI, 37%-78%), 74% (95% CI, 52%-88%), and 88% (95% CI, 20%-100%), respectively, and specificities of 94% (95% CI, 84%-98%), 72% (95% CI, 31%-94%), and 99% (95% CI, 86%-100%), respectively. Magnetic resonance imaging with a high (versus a low) image quality showed higher diagnostic accuracies for detection of prelaminar optic nerve and choroidal invasion, but these differences were not statistically significant. Studies reporting the diagnostic accuracy of CT did not provide enough data to perform any meta-analyses. 
CONCLUSIONS: Magnetic resonance imaging is an important diagnostic tool for the detection of local tumor extent in advanced retinoblastoma, although its diagnostic accuracy leaves room for improvement, especially with regard to sensitivity. With only a few, mostly old, studies available, there is very little evidence on the diagnostic accuracy of CT, and these studies generally show low diagnostic accuracy. Future studies assessing the role of MRI in clinical decision making, in terms of prognostic value for advanced retinoblastoma, are needed.

Relevance:

100.00%

Publisher:

Abstract:

This paper presents our investigation of the iterative decoding performance of some sparse-graph codes on block-fading Rayleigh channels. The considered code ensembles are standard LDPC codes and Root-LDPC codes, first proposed in earlier work and shown to attain full transmission diversity. We study the iterative threshold performance of these codes as a function of the fading gains of the transmission channel and propose a numerical approximation of the iterative threshold as a function of the fading gains, for both LDPC and Root-LDPC codes. We also show analytically that, in the case of two fading blocks, the iterative threshold of Root-LDPC codes is proportional to (α1 α2)^(-1), where α1 and α2 are the corresponding fading gains. From this result, the full diversity property of Root-LDPC codes follows immediately.

Relevance:

100.00%

Publisher:

Abstract:

The purpose of this study was to assess the diagnostic potential of urinary metanephrines and 3-methoxytyramine, compared to urinary catecholamine determination, for diagnosing antemortem cold exposure and fatal hypothermia. Eighty-three cases of fatal hypothermia and 144 control cases were included in this study. Catecholamines (adrenaline, noradrenaline, and dopamine), metanephrines (metanephrine, normetanephrine), and 3-methoxytyramine were measured in urine collected during autopsy. All tested analytes were significantly higher in hypothermia cases than in control subjects and displayed generally satisfactory discriminative value, indicating that urinary catecholamines and their metabolites are reliable markers of cold-related stress and hypothermia-related deaths. Metanephrine and adrenaline had the best discriminative value between hypothermia and control cases among the tested analytes, though with different sensitivity and specificity. These can therefore be considered the most suitable markers of cold-related stress.

Relevance:

100.00%

Publisher:

Abstract:

Neuromotor functioning, i.e., timed performance and quality of movements, was examined in 66 left-handed children and adolescents between 5 and 18.5 years of age by means of the Zurich Neuromotor Assessment. Quality of movements was assessed by the degree and frequency of associated movements. Results were compared to normative data from 593 right-handers. The overall scores for timed motor performance were similar for left-handers and right-handers, while left-handers had more associated movements than right-handers on both sides. In agreement with previous studies in adults, we found that left-handed children were less lateralized than right-handers: they performed faster with their non-dominant side and slower with their dominant side. This finding was roughly independent of age, which may indicate that handedness does not reflect long-term effects of previous motor experience but may be primarily attributed to genetic factors.

Relevance:

100.00%

Publisher:

Abstract:

In a series of three experiments, participants made inferences about which of a pair of objects scored higher on a criterion. The first experiment was designed to contrast the prediction of Probabilistic Mental Model theory (Gigerenzer, Hoffrage, & Kleinbölting, 1991) concerning sampling procedure with the hard-easy effect. The experiment failed to support the theory's prediction that a particular pair of randomly sampled item sets would differ in percentage correct; but the observation that German participants performed practically as well on comparisons between U.S. cities (many of which they did not even recognize) as on comparisons between German cities (about which they knew much more) ultimately led to the formulation of the recognition heuristic. Experiment 2 was a second, this time successful, attempt to unconfound item difficulty and sampling procedure. In Experiment 3, participants' knowledge and recognition of each city was elicited, and how often this could be used to make an inference was manipulated. Choices were consistent with the recognition heuristic in about 80% of the cases when it discriminated and people had no additional knowledge about the recognized city (and in about 90% when they had such knowledge). The frequency with which the heuristic could be used affected the percentage correct, mean confidence, and overconfidence as predicted. The size of the reference class, which was also manipulated, modified these effects in meaningful and theoretically important ways.
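The recognition heuristic described above is simple to state as a decision rule: it discriminates only when exactly one of the two objects is recognized. A minimal sketch; the function and its boolean inputs are illustrative, not the authors' experimental materials:

```python
def recognition_heuristic(recognized_a, recognized_b):
    """Infer which of two objects (e.g., cities) scores higher on the
    criterion. Returns 'a' or 'b' when recognition discriminates, and
    None when it does not (both or neither object recognized), in which
    case the decision maker must fall back on other knowledge or guess."""
    if recognized_a and not recognized_b:
        return "a"
    if recognized_b and not recognized_a:
        return "b"
    return None
```

Experiment 3's manipulation of how often the heuristic could be used corresponds to varying the share of pairs for which this function returns a non-None answer.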

Relevance:

100.00%

Publisher:

Abstract:

PURPOSE: The aim of this study was to conduct a systematic review and perform a meta-analysis of the diagnostic performance of (18)F-fluorodeoxyglucose positron emission tomography (FDG PET) for giant cell arteritis (GCA), with or without polymyalgia rheumatica (PMR). METHODS: MEDLINE, Embase and the Cochrane Library were searched for articles in English that evaluated FDG PET in GCA or PMR. All complete studies were reviewed and qualitatively analysed. Studies that fulfilled the three following criteria were included in a meta-analysis: (1) FDG PET used as a diagnostic tool for GCA and PMR; (2) American College of Rheumatology and Healey criteria used as the reference standard for the diagnosis of GCA and PMR, respectively; and (3) the use of a control group. RESULTS: We found 14 complete articles. A smooth linear or long segmental pattern of FDG uptake in the aorta and its main branches seems to be a characteristic pattern of GCA. Vessel uptake superior to liver uptake was considered an efficient marker for vasculitis. The meta-analysis of six selected studies (101 vasculitis and 182 controls) provided the following results: sensitivity 0.80 [95% confidence interval (CI) 0.63-0.91], specificity 0.89 (95% CI 0.78-0.94), positive predictive value 0.85 (95% CI 0.62-0.95), negative predictive value 0.88 (95% CI 0.72-0.95), positive likelihood ratio 6.73 (95% CI 3.55-12.77), negative likelihood ratio 0.25 (95% CI 0.13-0.46) and accuracy 0.84 (95% CI 0.76-0.90). CONCLUSION: We found valuable overall diagnostic performance for FDG PET against the reference criteria. Standardized FDG uptake criteria are needed to optimize this diagnostic performance.
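The likelihood ratios above are related to sensitivity and specificity by the standard formulas LR+ = sens / (1 - spec) and LR- = (1 - sens) / spec. Applying them to the pooled point estimates gives values in the same range as, though not identical to, the reported likelihood ratios, since a meta-analysis pools each measure across studies separately. A sketch, not the authors' code:

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios from sensitivity/specificity."""
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

# Pooled point estimates reported above: sens 0.80, spec 0.89.
lr_pos, lr_neg = likelihood_ratios(0.80, 0.89)
```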

Relevance:

100.00%

Publisher:

Abstract:

Secondary accident statistics can be useful for studying the impact of traffic incident management strategies. An easy-to-implement methodology is presented for classifying secondary accidents using data fusion of a police accident database with intranet incident reports. A current method for classifying secondary accidents uses a static threshold that represents the spatial and temporal region of influence of the primary accident, such as two miles and one hour. An accident is considered secondary if it occurs upstream from the primary accident and is within the duration and queue of the primary accident. However, using the static threshold may result in both false positives and negatives because accident queues are constantly varying. The methodology presented in this report seeks to improve upon this existing method by making the threshold dynamic. An incident progression curve is used to mark the end of the queue throughout the entire incident. Four steps in the development of incident progression curves are described. Step one is the processing of intranet incident reports. Step two is the filling in of incomplete incident reports. Step three is the nonlinear regression of incident progression curves. Step four is the merging of individual incident progression curves into one master curve. To illustrate this methodology, 5,514 accidents from Missouri freeways were analyzed. The results show that secondary accidents identified by dynamic versus static thresholds can differ by more than 30%.
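The static and dynamic classification rules described above can be sketched in a few lines (distances in miles, times in hours). The triangular queue-length function standing in for the incident progression curve is a made-up illustration, not one of the fitted curves from the report:

```python
def is_secondary_static(dist_upstream, time_after, max_dist=2.0, max_time=1.0):
    """Static rule: the candidate accident lies within a fixed spatial and
    temporal region of influence of the primary accident (e.g., 2 mi, 1 h)."""
    return 0.0 <= dist_upstream <= max_dist and 0.0 <= time_after <= max_time

def is_secondary_dynamic(dist_upstream, time_after, queue_length):
    """Dynamic rule: the candidate lies within the queue predicted by an
    incident progression curve (a function of elapsed time) at that moment."""
    return 0.0 <= dist_upstream <= queue_length(time_after)

# Illustrative queue: grows at 3 mi/h for the first half hour, then dissipates.
queue = lambda t: max(0.0, 3.0 * t if t <= 0.5 else 1.5 - 1.0 * (t - 0.5))
```

An accident 1.8 miles upstream, 0.9 hours after the primary, is secondary under the static rule but not under this dynamic rule, which is the kind of disagreement behind the reported difference of more than 30%.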

Relevance:

100.00%

Publisher:

Abstract:

Expected utility theory (EUT) has been challenged as a descriptive theory in many contexts. The medical decision analysis context is no exception. Several researchers have suggested that rank-dependent utility theory (RDUT) may accurately describe how people evaluate alternative medical treatments. Recent research in this domain has addressed a relevant feature of RDU models, probability weighting, but to date no direct test of this theory has been made. This paper provides a test of the main axiomatic difference between EUT and RDUT when health profiles are used as outcomes of risky treatments. Overall, EU best described the data. However, evidence of the editing and cancellation operations hypothesized in Prospect Theory and Cumulative Prospect Theory was apparent in our study. We found that RDU outperformed EU in the presentation of the risky treatment pairs in which the common outcome was not obvious. The influence of framing effects on the performance of RDU, and their importance as a topic for future research, is discussed.
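For a two-outcome gamble, the difference between the two theories reduces to how probabilities enter the evaluation: EU weights utilities by the raw probabilities, while RDU transforms them through a weighting function applied to ranked outcomes. A minimal numeric sketch; the inverse-S weighting function (Tversky and Kahneman's 1992 form) is a common illustrative choice, not necessarily the form used in the paper:

```python
def weight(p, gamma=0.61):
    """Inverse-S probability weighting function: overweights small
    probabilities and underweights moderate-to-large ones."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def eu(p, u_hi, u_lo):
    """Expected utility of a gamble paying u_hi with probability p, else u_lo."""
    return p * u_hi + (1 - p) * u_lo

def rdu(p, u_hi, u_lo):
    """Rank-dependent utility: the better outcome receives decision
    weight w(p) instead of the raw probability p."""
    w = weight(p)
    return w * u_hi + (1 - w) * u_lo
```

With gamma below 1, a 50/50 gamble gets less weight on the better outcome under RDU than under EU, which is one observable signature the axiomatic test exploits.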

Relevance:

100.00%

Publisher:

Abstract:

Most methods for small-area estimation are based on composite estimators derived from design- or model-based methods. A composite estimator is a linear combination of a direct and an indirect estimator, with weights that usually depend on unknown parameters which need to be estimated. Although model-based small-area estimators are usually based on random-effects models, the assumption of fixed effects is at face value more appropriate. Model-based estimators are justified by the assumption of random (interchangeable) area effects; in practice, however, areas are not interchangeable. In the present paper we empirically assess the quality of several small-area estimators in the setting in which the area effects are treated as fixed. We consider two settings: one that draws samples from a theoretical population, and another that draws samples from an empirical population of a labor force register maintained by the National Institute of Social Security (NISS) of Catalonia. We distinguish two types of composite estimators: (a) those that use weights involving area-specific estimates of bias and variance; and (b) those that use weights involving a common variance and a common squared-bias estimate for all areas. We assess their precision and discuss alternatives to optimizing composite estimation in applications.
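The linear combination described above can be sketched in a few lines. The MSE-based weight used here is one standard choice under the assumption that the direct estimator is unbiased; the function and argument names are illustrative:

```python
def composite_estimate(direct, indirect, var_direct, mse_indirect):
    """Composite small-area estimator: a weighted average of a direct
    estimator (unbiased, high variance in small areas) and an indirect
    estimator (lower variance, possibly biased). The weight shifts toward
    the direct estimator as the indirect estimator's MSE grows."""
    w = mse_indirect / (var_direct + mse_indirect)
    return w * direct + (1 - w) * indirect
```

Whether var_direct and mse_indirect are estimated per area or pooled across areas corresponds to the two types of composite estimators, (a) and (b), compared in the paper.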

Relevance:

100.00%

Publisher:

Abstract:

In the field, immature individuals of Ascia monuste orseis (Godart), the kale caterpillar, migrate in large proportion to other regions of the host plant in order to complete their development; there, they find leaves of different ages and are exposed to the nutritional variation of these leaves. The objective of this study was to determine how changing to leaves of different ages affects A. monuste orseis performance. The experiments were carried out by providing one kind of leaf during the first three instars, and then leaves of different ages during the fourth and fifth instars, since it is in these two instars that the changing movement prevails in this species. The parameters used to measure performance were time of development (both to complete the first three instars and the fourth and fifth instars), food ingestion, incorporated biomass, digestive indices evaluating the efficiency of food utilization, relative growth and intake rates, percentage of emergence, and weight and size of the adults. In general, the caterpillars first fed on young leaves performed better, but the study concluded that A. monuste orseis caterpillars are able to compensate for food of lower nutritional value or lower abundance in nature.

Relevance:

100.00%

Publisher:

Abstract:

Revenue management (RM) is a complicated business process that can best be described as control of sales (using prices, restrictions, or capacity), usually using software as a tool to aid decisions. RM software can play a merely informative role, supplying analysts with formatted and summarized data that they use to make control decisions (setting a price or allocating capacity for a price point), or, at the other extreme, play a deeper role, automating the decision process completely. The RM models and algorithms in the academic literature by and large concentrate on the latter, completely automated, level of functionality.

A firm considering a new RM model or RM system needs to evaluate its performance. Academic papers justify the performance of their models using simulations, in which customer booking requests are simulated according to some process and model, and the revenue performance of the algorithm is compared to that of an alternate set of algorithms. Such simulations, while an accepted part of the academic literature, and indeed providing research insight, often lack credibility with management. Even methodologically, they are usually flawed, as the simulations only test "within-model" performance and say nothing about the appropriateness of the model in the first place. Even simulations that test against alternate models or competition are limited by their inherent need to fix some model as the universe for their testing. These problems are exacerbated with RM models that attempt to model customer purchase behavior or competition, as the right models for competitive actions or customer purchases remain somewhat of a mystery, or at least lack consensus on their validity.

How, then, to validate a model? Putting it another way, we want to show that a particular model or algorithm is the cause of a certain improvement to the RM process compared to the existing process. We take care to emphasize that we want to establish the said model as the cause of performance, and to compare against an (incumbent) process rather than against an alternate model.

In this paper we describe a "live" testing experiment that we conducted at Iberia Airlines on a set of flights. A set of competing algorithms controlled a set of flights during adjacent weeks, and their behavior and results were observed over a relatively long period of time (nine months). In parallel, a group of control flights was managed using the traditional mix of manual and algorithmic control (the incumbent system). Such "sandbox" testing, while common at many large internet search and e-commerce companies, is relatively rare in the revenue management area. Sandbox testing has an indisputable model of customer behavior, but the experimental design and analysis of results is less clear. In this paper we describe the philosophy behind the experiment and the organizational challenges, present the design and setup of the experiment, and outline the analysis of the results. This paper is a complement to a more technical related paper that describes the econometrics and statistical analysis of the results.

Relevance:

100.00%

Publisher:

Abstract:

In this paper I explore the issue of nonlinearity (both in the data generation process and in the functional form that establishes the relationship between the parameters and the data) regarding the poor performance of the Generalized Method of Moments (GMM) in small samples. To this purpose I build a sequence of models, starting with a simple linear model and enlarging it progressively until I approximate a standard (nonlinear) neoclassical growth model. I then use simulation techniques to find the small-sample distribution of the GMM estimators in each of the models.
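The simulation exercise described above can be illustrated in its simplest (linear) form: a just-identified GMM estimator for a slope, based on the sample moment condition E[x(y - θx)] = 0, applied repeatedly to many small samples to trace out the estimator's finite-sample distribution. A toy sketch, not the paper's models:

```python
import random

def gmm_slope(xs, ys):
    """Just-identified GMM for y = theta*x + e with moment E[x*(y - theta*x)] = 0.
    Setting the sample moment to zero gives theta = sum(x*y) / sum(x*x)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def small_sample_distribution(theta=2.0, n=10, reps=2000, seed=0):
    """Monte Carlo draw of the GMM estimator's distribution at sample size n."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(reps):
        xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
        ys = [theta * x + rng.gauss(0.0, 1.0) for x in xs]
        estimates.append(gmm_slope(xs, ys))
    return estimates
```

In this linear, just-identified case the finite-sample distribution is well behaved; the paper's point is that as the model becomes nonlinear (approaching the neoclassical growth model) the small-sample behavior of GMM deteriorates.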