982 results for Crash Test Criteria
Abstract:
With respect to future environmental impact assessments, different bottom trawls are presently being investigated for their suitability for sampling the fish fauna in potential windpark areas. For this purpose, the International Baltic Sea Trawl, the Box trawl, two different small beam trawls and small eel fykes were compared on a research cutter in June 2002. The best catchability, with the highest abundance of demersal fish species and the widest length distribution, was obtained with the Box trawl. During another cruise in December 2002 the Box trawl was again compared with the Windpark trawl. The Box trawl again performed best with regard to overall catchability; however, the Windpark trawl is easier to handle under the conditions on board small fishing cutters.
Abstract:
Zebrafish (Danio rerio) embryos have been used to quantify the teratogenic potential of environmental samples and of individual harmful substances. The short spawning interval makes this species a good test organism for toxicological research. Owing to the transparency of the eggs, several lethal and non-lethal endpoints can be detected in parallel after 48 h of embryonic development. Zebrafish embryos have been shown to be sensitive to a number of environmentally relevant contaminants, as well as to extracts from polluted sediments.
Abstract:
Background: Consensus development techniques were used in the late 1980s to create explicit criteria for the appropriateness of cataract extraction. We developed a new appropriateness-of-indications tool for cataract extraction following the RAND method and tested the validity of the panel results. Methods: Criteria were developed using a modified Delphi panel judgment process. A panel of 12 ophthalmologists was assembled. Ratings were analyzed regarding the level of agreement among panelists. We studied the influence of all variables on the final panel score using linear and logistic regression models. The explicit criteria developed were summarized by classification and regression tree analysis. Results: Of the 765 indications evaluated by the main panel in the second round, 32.9% were found appropriate, 30.1% uncertain, and 37% inappropriate. Agreement was found in 53% of the indications and disagreement in 0.9%. Seven variables were considered to create the indications, divided into three groups: simple cataract, cataract with diabetic retinopathy, and cataract with other ocular pathologies. The preoperative visual acuity in the cataractous eye and visual function were the variables that best explained the panel scoring. The panel results were synthesized and presented in three decision trees. Misclassification error in the decision trees, as compared with the panel's original criteria, was 5.3%. Conclusion: The parameters tested showed acceptable validity for an evaluation tool. These results support the use of this indication algorithm as a screening tool for assessing the appropriateness of cataract extraction in field studies and for the development of practice guidelines.
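The abstract above summarizes panel ratings with classification and regression trees. The following is a minimal illustrative Python sketch of that kind of analysis using scikit-learn; the variables, labels, and data are synthetic placeholders (only the two predictors reported as most explanatory are included, whereas the real panel used seven variables across three clinical groups).

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 765  # number of indications rated by the main panel, per the abstract

# Synthetic stand-ins for the two most explanatory variables reported above.
visual_acuity = rng.uniform(0.05, 0.8, n)      # preoperative visual acuity (decimal)
visual_function = rng.uniform(0, 100, n)       # visual-function score
X = np.column_stack([visual_acuity, visual_function])

# Synthetic panel ratings: 0 = inappropriate, 1 = uncertain, 2 = appropriate.
y = rng.integers(0, 3, n)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print("In-sample misclassification error:", round(1.0 - tree.score(X, y), 3))

With the panel's actual ratings, the analogous misclassification figure reported above is 5.3%.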
Abstract:
XVII, 352 p.
Abstract:
122 p.
Abstract:
190 p.
Abstract:
The paper is based on the premise that an understanding of the role and potential of fish farming entrepreneurs, in terms of their activities and expectations, would enhance fish production and productivity. To this end, the present paper investigates the activities of 24 fish farmers in Anambra State, Nigeria. Based on the fish farmers' experience, the paper presents fish farming investments, budgets and recommendations that could establish fish farming as a profitable business venture and, at the same time, ensure greater fish production in the country.
Abstract:
32 samples of fresh fish in self-service retail packs, 16 saithe and 16 redfish samples from German supermarkets, were examined using physical, chemical, microbiological and sensory methods. The aim of the study was to evaluate examination methods for determining quality. It was found that, in addition to sensory assessment, determination of TVB-N is suitable for judging the quality of these products.
Abstract:
Socioeconomic factors have long been incorporated into environmental research to examine the effects of human dimensions on coastal natural resources. Boyce (1994) proposed that inequality is a cause of environmental degradation, and the Environmental Kuznets Curve posits that rising income or GDP per capita is associated with initial increases in pollution followed by subsequent decreases (Torras and Boyce, 1998). To examine this relationship within the CAMA counties, the emissions of sulfur dioxide and nitrogen oxides (as measured by the EPA in tons emitted), the Gini Coefficient, and income per capita were examined for the year 1999. A quadratic regression was used, and the results did not indicate that inequality, as measured by the Gini Coefficient, was significantly related to the level of criteria air pollutants within each county. Nor did the results indicate the existence of the Environmental Kuznets Curve. Further analysis of spatial autocorrelation using ArcMap 9.2 found a high level of spatial autocorrelation among pollution emissions, indicating that the relationship to neighboring counties may be more important to the level of sulfur dioxide and nitrogen oxide emissions than income per capita and inequality. Lastly, the paper concludes that further Environmental Kuznets Curve and income-inequality analyses of air pollutant levels should incorporate spatial patterns as well as other explanatory variables. (PDF contains 4 pages)
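As an illustration of the quadratic (Environmental Kuznets Curve) specification described above, the following Python sketch regresses county-level emissions on income per capita, its square, and the Gini coefficient; the county data below are invented placeholders, not the study's EPA or Census figures.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_counties = 20  # placeholder; the study uses the CAMA counties

income = rng.uniform(20_000, 60_000, n_counties)   # income per capita (USD)
gini = rng.uniform(0.35, 0.55, n_counties)         # Gini coefficient
so2_tons = rng.uniform(100, 10_000, n_counties)    # tons of SO2 emitted

X = sm.add_constant(np.column_stack([income, income**2, gini]))
results = sm.OLS(so2_tons, X).fit()
print(results.params)
# An inverted-U EKC would appear as a positive coefficient on income and a
# negative coefficient on income**2; the study reports no significant evidence
# of either pattern, and no significant Gini effect.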
Abstract:
This study addresses the problem of obtaining reliable velocities and displacements from accelerograms, a concern which often arises in earthquake engineering. A closed-form acceleration expression with random parameters is developed to test any strong-motion accelerogram processing method. Integration of this analytical time history yields the exact velocities, displacements and Fourier spectra. Noise and truncation can also be added. A two-step testing procedure is proposed and the original Volume II routine is used as an illustration. The main sources of error are identified and discussed. Although these errors may be reduced, it is impossible to extract the true time histories from an analog or digital accelerogram because of the uncertain noise level and missing data. Based on these uncertainties, a probabilistic approach is proposed as a new accelerogram processing method. A most probable record is presented as well as a reliability interval which reflects the level of error-uncertainty introduced by the recording and digitization process. The data is processed in the frequency domain, under assumptions governing either the initial value or the temporal mean of the time histories. This new processing approach is tested on synthetic records. It induces little error and the digitization noise is adequately bounded. Filtering is intended to be kept to a minimum and two optimal error-reduction methods are proposed. The "noise filters" reduce the noise level at each harmonic of the spectrum as a function of the signal-to-noise ratio. However, the correction at low frequencies is not sufficient to significantly reduce the drifts in the integrated time histories. The "spectral substitution method" uses optimization techniques to fit spectral models of near-field, far-field or structural motions to the amplitude spectrum of the measured data. The extremes of the spectrum of the recorded data where noise and error prevail are then partly altered, but not removed, and statistical criteria provide the choice of the appropriate cutoff frequencies. This correction method has been applied to existing strong-motion far-field, near-field and structural data with promising results. Since this correction method maintains the whole frequency range of the record, it should prove to be very useful in studying the long-period dynamics of local geology and structures.
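To make the integration step concrete, here is a minimal frequency-domain Python sketch of recovering velocity and displacement from a synthetic, noisy accelerogram; it illustrates only the basic transform-divide-invert mechanics, not the paper's probabilistic correction, noise filters, or spectral substitution method.

import numpy as np

dt = 0.01                                   # assumed sampling interval (s)
t = np.arange(0.0, 20.0, dt)
accel = np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.2 * t)              # synthetic motion
accel += 0.01 * np.random.default_rng(2).standard_normal(t.size)    # digitization noise

A = np.fft.rfft(accel)
omega = 2 * np.pi * np.fft.rfftfreq(t.size, dt)

# Integrate once (velocity) and twice (displacement) by dividing by i*omega;
# the zero-frequency term is set to zero, which fixes the temporal mean at zero.
V = np.zeros_like(A)
D = np.zeros_like(A)
V[1:] = A[1:] / (1j * omega[1:])
D[1:] = A[1:] / (1j * omega[1:]) ** 2

velocity = np.fft.irfft(V, n=t.size)
displacement = np.fft.irfft(D, n=t.size)
print(velocity[:3], displacement[:3])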
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in different contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests: Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
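The toy Python sketch below shows the shape of such an adaptive loop: maintain a posterior over candidate theories, pick the test expected to be most informative, observe a (possibly noisy) response, and update. For brevity it scores tests by expected information gain, one of the baseline criteria named above, rather than by EC2; all theories, tests, and likelihoods are invented for illustration.

import numpy as np

rng = np.random.default_rng(3)

n_theories, n_tests = 4, 30
# Hypothetical likelihoods: probability that a subject described by each theory
# chooses "lottery A" on each binary-choice test (rows: theories, columns: tests).
p_choose_A = rng.uniform(0.05, 0.95, size=(n_theories, n_tests))

posterior = np.full(n_theories, 1.0 / n_theories)
true_theory = 2   # simulated ground truth, as in the paper's simulations

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

asked = set()
for _ in range(10):
    best_test, best_gain = None, -np.inf
    for test in range(n_tests):
        if test in asked:
            continue
        pA = np.dot(posterior, p_choose_A[:, test])        # predictive prob. of "A"
        post_A = posterior * p_choose_A[:, test] / pA
        post_B = posterior * (1 - p_choose_A[:, test]) / (1 - pA)
        gain = entropy(posterior) - (pA * entropy(post_A) + (1 - pA) * entropy(post_B))
        if gain > best_gain:
            best_test, best_gain = test, gain
    asked.add(best_test)
    # Simulate the subject's noisy response under the true theory, then update.
    response_A = rng.random() < p_choose_A[true_theory, best_test]
    likelihood = p_choose_A[:, best_test] if response_A else 1 - p_choose_A[:, best_test]
    posterior = posterior * likelihood
    posterior /= posterior.sum()

print("Posterior over theories:", np.round(posterior, 3))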
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries they chose. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out because it is infeasible in practice and because we do not find any signatures of it in our data.
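For concreteness, the sketch below evaluates a single two-outcome gamble under three of the theory classes compared above: expected value, CRRA expected utility, and a simplified prospect-theory value function without probability weighting. The parameter values are arbitrary illustrations, not estimates from the thesis.

import numpy as np

outcomes = np.array([50.0, -20.0])   # a gamble with a possible loss
probs = np.array([0.5, 0.5])

def crra_utility(x, rho=0.5, endowment=100.0):
    # CRRA is defined over final wealth, so add the initial endowment.
    w = endowment + x
    return w ** (1 - rho) / (1 - rho)

def prospect_value(x, alpha=0.88, lam=2.25):
    # Value function over gains and losses relative to a reference point of 0;
    # losses are scaled up by the loss-aversion coefficient lam.
    x = np.asarray(x, dtype=float)
    v = np.empty_like(x)
    gains = x >= 0
    v[gains] = x[gains] ** alpha
    v[~gains] = -lam * (-x[~gains]) ** alpha
    return v

print("Expected value:        ", np.dot(probs, outcomes))
print("Expected CRRA utility: ", np.dot(probs, crra_utility(outcomes)))
print("Prospect-theory value: ", np.dot(probs, prospect_value(outcomes)))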
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for the present bias models and hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
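The discount functions being compared can be written compactly; the Python sketch below uses one common parameterization of each (the exact functional forms and parameter names in the thesis may differ, and the parameter values are arbitrary).

import numpy as np

def exponential(t, delta=0.95):
    return delta ** t

def hyperbolic(t, k=0.1):
    return 1.0 / (1.0 + k * t)

def quasi_hyperbolic(t, beta=0.7, delta=0.95):
    # "Present bias": immediate payoffs are undiscounted; all delayed payoffs
    # receive a one-off penalty beta on top of exponential discounting.
    t = np.asarray(t, dtype=float)
    return np.where(t == 0, 1.0, beta * delta ** t)

def generalized_hyperbolic(t, a=1.0, b=0.5):
    return (1.0 + a * t) ** (-b / a)

horizons = np.array([0, 1, 5, 10], dtype=float)
for fn in (exponential, hyperbolic, quasi_hyperbolic, generalized_hyperbolic):
    print(fn.__name__, np.round(fn(horizons), 3))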
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than its price elasticity alone would explain. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
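A hedged sketch of the kind of reference-dependent discrete choice model described above: a multinomial logit in which prices above a reference price (a "loss") are penalized more heavily than equivalent discounts are rewarded. The coefficients, items, and prices are invented; the retailer's data and the thesis's exact specification are not reproduced here.

import numpy as np

def choice_probabilities(prices, reference_prices, beta_price=-0.8,
                         beta_gain=0.5, lam=2.0):
    # Multinomial logit over the listed items plus an outside option (utility 0).
    # Discounts relative to the reference price enter as gains; price increases
    # enter as losses scaled by the loss-aversion coefficient lam > 1.
    gap = reference_prices - prices
    gain = np.maximum(gap, 0.0)
    loss = np.maximum(-gap, 0.0)
    utility = beta_price * prices + beta_gain * gain - beta_gain * lam * loss
    expu = np.exp(np.append(utility, 0.0))      # last entry: outside option
    return expu / expu.sum()

prices = np.array([9.0, 10.0])        # discounted item and its close substitute
reference = np.array([10.0, 10.0])    # e.g. the pre-discount prices
print(choice_probabilities(prices, reference))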
In future work, BROAD could be applied widely to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, could be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
28 p.