838 results for "Precision and recall"


Relevance: 90.00%

Abstract:

Reliable robotic perception and planning are critical to performing autonomous actions in uncertain, unstructured environments. In field robotic systems, automation is achieved by interpreting exteroceptive sensor information to infer something about the world. This is then mapped to provide a consistent spatial context, so that actions can be planned around the predicted future interaction of the robot and the world. The whole system is as reliable as the weakest link in this chain. In this paper, the term mapping is used broadly to describe the transformation of range-based exteroceptive sensor data (such as LIDAR or stereo vision) to a fixed navigation frame, so that it can be used to form an internal representation of the environment. The coordinate transformation from the sensor frame to the navigation frame is analyzed to produce a spatial error model that captures the dominant geometric and temporal sources of mapping error. This allows the mapping accuracy to be calculated at run time. A generic extrinsic calibration method for exteroceptive range-based sensors is then presented to determine the sensor location and orientation. This allows systematic errors in individual sensors to be minimized, and when multiple sensors are used, it minimizes the systematic contradiction between them to enable reliable multisensor data fusion. The mathematical derivations at the core of this model are not particularly novel or complicated, but the rigorous analysis and application to field robotics seem to be largely absent from the literature to date. The techniques in this paper are simple to implement, and they offer a significant improvement to the accuracy, precision, and integrity of mapped information. Consequently, they should be employed whenever maps are formed from range-based exteroceptive sensor data. © 2009 Wiley Periodicals, Inc.
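
As a concrete illustration of the sensor-to-navigation transformation discussed above, the following minimal sketch (not the paper's implementation) maps a range measurement into a fixed navigation frame by chaining an assumed extrinsic calibration (sensor-to-body rotation and translation) with an assumed vehicle pose; all names and numerical values are hypothetical.

```python
import numpy as np

def rot_z(yaw):
    """Rotation about the z-axis (illustrative; a full model would use roll, pitch and yaw)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def sensor_to_nav(p_sensor, R_bs, t_bs, R_nb, t_nb):
    """Chain the extrinsic calibration (sensor->body) with the vehicle pose (body->nav)."""
    p_body = R_bs @ p_sensor + t_bs   # extrinsic calibration: sensor frame -> body frame
    p_nav = R_nb @ p_body + t_nb      # vehicle pose at the measurement timestamp
    return p_nav

# Hypothetical numbers: a LIDAR return 10 m ahead of the sensor, a sensor mounted
# 1.2 m forward and 0.5 m above the body origin, and a vehicle at (100, 200, 0)
# in the navigation frame with a 30 degree heading.
p_sensor = np.array([10.0, 0.0, 0.0])
R_bs, t_bs = rot_z(0.0), np.array([1.2, 0.0, 0.5])
R_nb, t_nb = rot_z(np.deg2rad(30.0)), np.array([100.0, 200.0, 0.0])
print(sensor_to_nav(p_sensor, R_bs, t_bs, R_nb, t_nb))
```

Errors in the calibration terms, or in the pose used at the measurement timestamp, propagate directly into the mapped point, which is exactly the class of geometric and temporal error the paper's spatial error model quantifies.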

Relevance: 90.00%

Abstract:

Analogy plays a central role in legal reasoning, yet how to analogize is poorly taught and poorly practiced. We all recognize when legal analogies are being made: when a law professor suggests a difficult hypothetical in class and a student tentatively guesses at the answer based on the cases she read the night before, when an attorney advises a client to settle because a previous case goes against him, or when a judge adopts one precedent over another on the basis that it better fits the present case. However, when it comes to explaining why certain analogies are compelling, persuasive, or better than the alternative, lawyers usually draw a blank. The purpose of this article is to provide a simple model that can be used to teach and to learn how analogy actually works, and what makes one analogy superior to a competing analogy. The model is drawn from a number of theories of analogy making in cognitive science. Cognitive science is the “long-term enterprise to understand the mind scientifically.” The field studies the mechanisms that are involved in cognitive processes like thinking, memory, learning, and recall; and one of its main foci has been on how people construct analogies. The lessons from cognitive science theories of analogy can be applied to legal analogies to give students and lawyers a better understanding of this fundamental process in legal reasoning.

Relevance: 90.00%

Abstract:

As part of a wider study to develop an ecosystem-health monitoring program for wadeable streams of south-eastern Queensland, Australia, comparisons were made regarding the accuracy, precision and relative efficiency of single-pass backpack electrofishing and multiple-pass electrofishing plus supplementary seine netting to quantify fish assemblage attributes at two spatial scales (within discrete mesohabitat units and within stream reaches consisting of multiple mesohabitat units). The results demonstrate that multiple-pass electrofishing plus seine netting provides more accurate and precise estimates of fish species richness, assemblage composition and species relative abundances than single-pass electrofishing alone. They also show that intensive sampling of three mesohabitat units (equivalent to a riffle-run-pool sequence) is a more efficient strategy for estimating reach-scale assemblage attributes than less intensive sampling over larger spatial scales. This intensive sampling protocol was sufficiently sensitive that relatively small differences in assemblage attributes (<20%) could be detected with high statistical power (1-β > 0.95), and relatively few stream reaches (<4) needed to be sampled to estimate assemblage attributes close to the true population means. The merits and potential drawbacks of the intensive sampling strategy are discussed, and it is deemed to be suitable for a range of monitoring and bioassessment objectives.
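
For readers who want to see how such power and sample-size figures arise, a standard power calculation can be sketched as follows; the effect size is derived from an assumed coefficient of variation and is purely illustrative, so the result is not expected to reproduce the study's numbers.

```python
# Illustrative power calculation only, not the study's analysis.
# Assume a 20% difference in an assemblage attribute and a between-reach coefficient of
# variation of ~15%, giving Cohen's d of roughly 0.20 / 0.15 (both figures invented).
from statsmodels.stats.power import TTestIndPower

d = 0.20 / 0.15
n_per_group = TTestIndPower().solve_power(effect_size=d, alpha=0.05, power=0.95,
                                          alternative='two-sided')
print(f"reaches per group for 95% power at d={d:.2f}: {n_per_group:.1f}")
```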

Relevance: 90.00%

Abstract:

Monitoring gases for environmental, industrial and agricultural applications is a demanding task that requires long periods of observation, large numbers of sensors, data management, high temporal and spatial resolution, long-term stability, recalibration procedures, computational resources, and energy availability. Wireless Sensor Networks (WSNs) and Unmanned Aerial Vehicles (UAVs) currently represent the best alternative for monitoring large, remote, and difficult-to-access areas, as these technologies can carry specialised gas sensing systems and provide geo-located, time-stamped samples. However, these technologies are not yet fully functional for scientific and commercial applications, as their development and availability are limited by a number of factors: the cost of the sensors required to cover large areas, their stability over long periods, their power consumption, and the weight of the system to be carried on small UAVs. Energy availability is a serious challenge when WSNs are deployed in remote areas without easy access to the grid, while small UAVs are limited by the energy in their fuel tank or batteries. Another important challenge is the management of the data produced by the sensor nodes, which requires a large amount of resources to be stored, analysed and displayed after long periods of operation. In response to these challenges, this research proposes the following solutions aimed at improving the availability and development of these technologies for gas sensing monitoring: first, the integration of WSNs and UAVs for environmental gas sensing, in order to monitor large volumes at ground and aerial levels with a minimum number of sensor nodes for effective 3D monitoring; second, the use of solar energy as the main power source to allow continuous monitoring; and lastly, the creation of a data management platform to store, analyse and share the information with operators and external users. The principal outcome of this research is the creation of a gas sensing system suitable for monitoring any kind of gas, which has been installed and tested on CH4 and CO2 in a wireless sensor network (WSN) and on a UAV. Using the same gas sensing system in a WSN and a UAV significantly reduces the complexity and cost of the application, as it allows: a) the standardisation of signal acquisition and data processing, thereby reducing the required computational resources; b) the standardisation of calibration and operational procedures, reducing systematic errors and complexity; c) the reduction of weight and energy consumption, leading to improved power management and weight balance in the case of UAVs; and d) the simplification of the sensor node architecture, which is easily replicated in all the nodes. I evaluated two different sensor modules by laboratory, bench, and field tests: a non-dispersive infrared (NDIR) module and a metal-oxide resistive nano-sensor (MOX nano-sensor) module. The tests revealed advantages and disadvantages of the two modules when used for static nodes at ground level and mobile nodes on board a UAV. Commercial NDIR modules for CO2 were successfully tested and evaluated in the WSN and on board the UAV. Their advantages are precision and stability, but their application is limited to a few gases. The advantages of the MOX nano-sensors are their small size, low weight, low power consumption and sensitivity to a broad range of gases; however, selectivity remains a concern that needs to be addressed with further studies.
An electronic board to interface sensors across a large range of resistivity was successfully designed, built and adapted to operate on ground nodes and on board the UAV. The WSN and UAV were powered with solar energy to facilitate outdoor deployment, data collection and continuous monitoring over large and remote volumes. The gas sensing, solar power, transmission and data management systems of the WSN and UAV were fully evaluated by laboratory, bench and field testing. The methodology created to design, develop, integrate and test these systems is extensively described and was experimentally validated. The sampling and transmission capabilities of the WSN and UAV were successfully tested in an emulated mission involving the detection and measurement of CO2 concentrations originating from a contaminant source in a field; the data collected during the mission were transmitted in real time to a central node for analysis and 3D mapping of the target gas. The major outcome of this research is the accomplishment of the first flight mission reported in the literature of a solar-powered UAV equipped with a CO2 sensing system operating in conjunction with a network of ground sensor nodes for effective 3D monitoring of the target gas. A data management platform was created using an external internet server; it manages, stores, and shares the collected data through two web pages, showing statistics and static graph images for internal and external users as requested. The system was bench tested with real data produced by the sensor nodes, and the architecture of the platform is described and illustrated in detail to provide guidance on how to replicate the system. In conclusion, the overall results of the project provide guidance on how to create a gas sensing system integrating WSNs and UAVs, how to power the system with solar energy, and how to manage the data produced by the sensor nodes. This system can be used in a wide range of outdoor applications, especially in agriculture, bushfire monitoring, mining studies, zoology, and botanical studies, opening the way to ubiquitous low-cost environmental monitoring, which may help to decrease our carbon footprint and to improve the health of the planet.
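
To make the data flow concrete, the following is a minimal, hypothetical sketch of the kind of geo-located, time-stamped gas sample a ground node or the UAV payload might produce and forward to the central node; the field names, units and serialisation are assumptions, not the thesis's actual implementation.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class GasSample:
    node_id: str      # ground node or UAV payload identifier (hypothetical)
    gas: str          # e.g. "CO2" or "CH4"
    ppm: float        # calibrated concentration
    lat: float        # GPS latitude, decimal degrees
    lon: float        # GPS longitude, decimal degrees
    alt_m: float      # altitude, providing the third dimension for 3D mapping
    timestamp: float  # UNIX epoch seconds

def make_packet(sample: GasSample) -> bytes:
    """Serialise a sample for transmission to the central node (format assumed)."""
    return json.dumps(asdict(sample)).encode()

# A hypothetical reading from the aerial node
packet = make_packet(GasSample("uav-1", "CO2", 432.5, -27.4705, 153.0260, 35.0, time.time()))
print(packet)
```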

Relevance: 90.00%

Abstract:

Electric distribution networks are now in transition from passive to active networks through the integration of energy storage devices. Optimal use of batteries and voltage control devices, along with other network upgrades, requires distribution expansion planning (DEP) that accounts for the inter-temporal dependencies between planning stages. This paper presents an efficient approach for solving multi-stage distribution expansion planning problems (MSDEPP) based on a forward-backward approach, considering energy storage devices such as batteries and voltage control devices such as voltage regulators and capacitors. The proposed algorithm is compared with three other techniques, namely full dynamic, forward fill-in, and backward pull-out, in terms of precision and computational efficiency. Simulation results for the IEEE 13-bus network show that the proposed pseudo-dynamic forward-backward approach offers good precision and optimization time.
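
The abstract does not detail the algorithm, but the general shape of a pseudo-dynamic forward-backward search over planning stages can be sketched on a toy problem as below; the objective, candidate upgrades and data are placeholders, not the paper's MSDEPP formulation.

```python
# Hypothetical toy illustration of a forward-backward pass over planning stages:
# pick one capacity upgrade per stage to cover growing demand at minimum cost.
# This is only a sketch of the idea, not the paper's MSDEPP formulation.

DEMAND = [4, 6, 9]                           # load to be covered after stages 0..2 (toy units)
OPTIONS = [[0, 2, 5], [0, 2, 5], [0, 2, 5]]  # candidate capacity additions per stage
UNIT_COST, PENALTY = 1.0, 10.0               # cost per unit added / per unit of unmet demand

def total_cost(plan):
    cap, cost = 0.0, 0.0
    for added, demand in zip(plan, DEMAND):
        cap += added
        cost += UNIT_COST * added + PENALTY * max(0.0, demand - cap)
    return cost

def forward_pass():
    plan = []
    for t in range(len(DEMAND)):  # commit stage by stage (myopic forward sweep)
        best = min(OPTIONS[t], key=lambda u: total_cost(plan + [u] + [0] * (len(DEMAND) - t - 1)))
        plan.append(best)
    return plan

def backward_pass(plan):
    for t in reversed(range(len(plan))):  # revisit earlier decisions with hindsight
        for u in OPTIONS[t]:
            candidate = plan[:t] + [u] + plan[t + 1:]
            if total_cost(candidate) < total_cost(plan):
                plan = candidate
    return plan

plan = backward_pass(forward_pass())
print(plan, total_cost(plan))
```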

Relevance: 90.00%

Abstract:

The literature on “entrepreneurial opportunities” has grown rapidly since the publication of Shane and Venkataraman (2000). By directing attention to the earliest stages of development of new economic activities and organizations, this literature marks a sound redirection of entrepreneurship research. However, our review shows that theoretical and empirical progress has been limited on important aspects of the role of “opportunities” and their interaction with actors, i.e., the “nexus”. We argue that this is rooted in inherent and inescapable problems with the “opportunity” construct itself, when applied in the context of a prospective, micro-level (i.e., individual[s], venture, or individual–venture dyad) view of entrepreneurial processes. We therefore suggest a fundamental re-conceptualization using the constructs External Enablers, New Venture Ideas, and Opportunity Confidence to capture the many important ideas commonly discussed under the “opportunity” label. This re-conceptualization makes important distinctions where prior conceptions have been blurred: between explananda and explanantia; between actor and the entity acted upon; between external conditions and subjective perceptions; and between the contents and the favorability of the entity acted upon. These distinctions facilitate theoretical precision and can guide empirical investigation towards more fruitful designs.

Relevance: 90.00%

Abstract:

Using DNA markers in plant breeding with marker-assisted selection (MAS) could greatly improve the precision and efficiency of selection, leading to the accelerated development of new crop varieties. The numerous examples of MAS in rice have prompted many breeding institutes to establish molecular breeding labs. The last decade has produced an enormous amount of genomics research in rice, including the identification of thousands of QTLs for agronomically important traits, the generation of large amounts of gene expression data, and cloning and characterization of new genes, including the detection of single nucleotide polymorphisms. The pinnacle of genomics research has been the completion and annotation of genome sequences for indica and japonica rice. This information, coupled with the development of new genotyping methodologies and platforms, and the development of bioinformatics databases and software tools, provides even more exciting opportunities for rice molecular breeding in the 21st century. However, the great challenge for molecular breeders is to apply genomics data in actual breeding programs. Here, we review the current status of MAS in rice, current genomics projects and promising new genotyping methodologies, and evaluate the probable impact of genomics research. We also identify critical research areas to "bridge the application gap" between QTL identification and applied breeding that need to be addressed to realize the full potential of MAS, and propose ideas and guidelines for establishing rice molecular breeding labs in the postgenome sequence era to integrate molecular breeding within the context of overall rice breeding and research programs.

Relevance: 90.00%

Abstract:

Hip height, body condition, subcutaneous fat, eye muscle area, percentage Bos taurus, fetal age and diet digestibility data were collected at 17 372 assessments on 2181 Brahman and tropical composite (average 28% Brahman) female cattle aged between 0.5 and 7.5 years at five sites across Queensland. The study validated the subtraction of previously published estimates of gravid uterine weight to correct liveweight to the non-pregnant status. Hip height and liveweight were linearly related (Brahman: P<0.001, R² = 58%; tropical composite: P<0.001, R² = 67%). Liveweight varied by 12-14% per body condition score (5-point scale) as cows differed from moderate condition (P<0.01). Parallel effects were also found for subcutaneous rump fat depth and eye muscle area, which were highly correlated with each other and with body condition score (r = 0.7-0.8). Liveweight differed from average by 1.65-1.66% per mm of rump fat depth and by 0.71-0.76% per cm² of eye muscle area (P<0.01). Estimated dry matter digestibility of the pasture consumed had no consistent effect in predicting liveweight and was therefore excluded from the final models. A method developed to estimate the full liveweight of post-weaning-age female beef cattle from the other measures predicted liveweight to within 10% and 23% of that recorded for 65% and 95% of cases, respectively. A 95% chance of the predicted group-average liveweight (using body condition score) being within 5, 4, 3, 2 and 1% of the actual group-average liveweight required 23, 36, 62, 137 and 521 females, respectively, provided the precision and accuracy of measurements matched that of the research. Non-pregnant Bos taurus female cattle were calculated to be 10-40% heavier than Brahmans at the same hip height and body condition, indicating a substantial conformational difference. The liveweight prediction method was applied to a validation population of 83 unrelated groups of cattle weighed in extensive commercial situations on 119 days over 18 months (20 917 assessments). Predicted liveweight in the validation population exceeded the average recorded liveweight of weigh groups by an average of 19 kg (~6%), demonstrating the difficulty of achieving accurate and precise animal measurements under extensive commercial grazing conditions.
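
As a worked illustration of how the reported effect sizes combine, the sketch below adjusts a hypothetical group-average liveweight using the percentages quoted above; the baseline weight and the example animal are invented, the effects are combined additively purely for illustration (the abstract notes they are strongly correlated), and this is not the published prediction equation, which also corrects for pregnancy and breed.

```python
# Illustration only: the coefficients come from the abstract, but the baseline weight and
# the example animal are invented, and combining the effects additively ignores their
# strong mutual correlation. This is not the published prediction equation.

BASE_WEIGHT_KG = 420.0          # hypothetical group-average liveweight
PCT_PER_CONDITION_SCORE = 13.0  # ~12-14% per body condition score away from moderate
PCT_PER_MM_RUMP_FAT = 1.65      # ~1.65-1.66% per mm of rump fat away from average
PCT_PER_CM2_EMA = 0.73          # ~0.71-0.76% per cm^2 of eye muscle area away from average

def predict_liveweight(cond_score_dev, rump_fat_dev_mm, ema_dev_cm2):
    """Scale the group-average weight by the deviations from average measurements."""
    pct_change = (PCT_PER_CONDITION_SCORE * cond_score_dev
                  + PCT_PER_MM_RUMP_FAT * rump_fat_dev_mm
                  + PCT_PER_CM2_EMA * ema_dev_cm2)
    return BASE_WEIGHT_KG * (1 + pct_change / 100.0)

# A cow half a condition score above moderate, with 2 mm extra rump fat and 5 cm^2 extra EMA:
print(f"{predict_liveweight(0.5, 2.0, 5.0):.0f} kg")
```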

Relevance: 90.00%

Abstract:

To quantify the impact that planting indigenous trees and shrubs in mixed communities (environmental plantings) has on net sequestration of carbon and other environmental or commercial benefits, precise and unbiased estimates of biomass are required. Because these plantings consist of several species, estimating their biomass through allometric relationships is a challenging task. We explored methods to accurately estimate biomass by harvesting 3139 trees and shrubs from 22 plantings, and by collating similar datasets from earlier studies, in non-arid (>300 mm rainfall year⁻¹) regions of southern and eastern Australia. Site- and species-specific allometric equations were developed, as were three types of generalised, multi-site allometric equations based on categories of species and growth habits: (i) species-specific, (ii) genus and growth-habit, and (iii) universal growth-habit irrespective of genus. Biomass was measured at the plot level at eight contrasting sites to test the accuracy with which tonnes dry matter of above-ground biomass per hectare were predicted using the different classes of allometric equations. A finer-scale analysis tested their performance at an individual-tree level across a wider range of sites. Although the percentage error in prediction could be high at a given site (up to 45%), it was relatively low (<11%) when generalised allometry predictions of biomass were used to make regional- or estate-level estimates across a range of sites. Precision, and thus accuracy, increased slightly with the level of specificity of the allometry. Inclusion of site-specific factors in generic equations increased the efficiency of prediction of above-ground biomass by as much as 8%. Site- and species-specific equations are the most accurate for site-based predictions. The generic allometric equations developed here, particularly the generic species-specific equations, can be confidently applied to provide regional- or estate-level estimates of above-ground biomass and carbon. © 2013 Elsevier B.V.
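
For readers unfamiliar with allometric equations, biomass is usually predicted from a stem measurement through a log-log (power-law) relationship; the sketch below shows that general form with invented coefficients, not the equations fitted in this study.

```python
import math

def aboveground_biomass_kg(stem_diameter_cm, a=-2.0, b=2.4, cf=1.05):
    """
    Generic allometric form: ln(biomass) = a + b * ln(diameter).
    a and b stand in for species- or growth-habit-specific coefficients (values invented);
    cf is a correction factor for the bias introduced by back-transforming from log space.
    """
    return cf * math.exp(a + b * math.log(stem_diameter_cm))

# Hypothetical plot: sum the tree-level predictions and scale to a per-hectare estimate.
diameters_cm = [4.0, 6.5, 9.0, 12.0]
plot_area_ha = 0.05
t_dm_per_ha = sum(aboveground_biomass_kg(d) for d in diameters_cm) / 1000.0 / plot_area_ha
print(f"{t_dm_per_ha:.2f} t DM/ha")
```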

Relevance: 90.00%

Abstract:

Introduction: QC, EQA and method evaluation are integral to the delivery of quality patient results. To ensure QUT graduates have a solid grounding in these key areas of practice, a theory-to-practice approach is used to progressively develop and consolidate these skills. Methods: Using a BCG assay for serum albumin, each student undertakes an eight-week project analysing two levels of QC alongside ‘patient’ samples. Results are assessed using both single rules and multirules. Concomitantly with the QC analyses, an EQA project is undertaken; students analyse two EQA samples, twice in the semester. Results are submitted using cloud software, and data for the full ‘peer group’ are returned to students in spreadsheets and incomplete Youden plots. Youden plots are completed with target values and calculated ALP values, and analysed for ‘lab’ and method performance. The method has a low-level positive bias, which leads to the need to investigate an alternative method. Building directly on the EQA of the first project, and using the scenario of a lab that services renal patients, students undertake a method validation comparing BCP and BCG assays in another eight-week project. Precision and patient-comparison studies allow students to assess whether the BCP method addresses the proportional bias of the BCG method and is overall a ‘better’ alternative for analysing serum albumin, accounting for pragmatic factors, such as cost, as well as performance characteristics. Results: Students develop an understanding of the purpose and importance of QC and EQA in delivering quality results, the need to optimise testing to deliver quality results and, importantly, a working knowledge of the analyses that go into ensuring this quality. In parallel with developing these key workplace competencies, students become confident, competent practitioners, able to pipette accurately and precisely and to organise themselves in a busy, time-pressured work environment.
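
The single-rule and multirule QC assessment mentioned above is typically based on Westgard-style rules; the sketch below applies two such rules to a run of QC results with made-up target and SD values, and is offered only as an illustration of the idea, not the unit's teaching materials.

```python
# Illustrative Westgard-style rules with made-up QC values; not the unit's actual procedure.
TARGET, SD = 35.0, 1.0  # hypothetical serum albumin QC target (g/L) and standard deviation

def z_scores(results):
    return [(x - TARGET) / SD for x in results]

def rule_1_3s(results):
    """Reject the run if any single result lies more than 3 SD from the target."""
    return any(abs(z) > 3 for z in z_scores(results))

def rule_2_2s(results):
    """Reject if two consecutive results exceed 2 SD on the same side of the target."""
    z = z_scores(results)
    return any((z[i] > 2 and z[i + 1] > 2) or (z[i] < -2 and z[i + 1] < -2)
               for i in range(len(z) - 1))

qc_run = [34.6, 35.3, 37.4, 37.2, 35.1]      # made-up level-1 QC results
print("1-3s violation:", rule_1_3s(qc_run))  # False: no single result beyond 3 SD
print("2-2s violation:", rule_2_2s(qc_run))  # True: 37.4 and 37.2 both exceed +2 SD
```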

Relevance: 90.00%

Abstract:

This thesis is grounded on four articles. Article I generally examines the factors affecting dental service utilization. Article II studies the factors associated with sector-specific utilization among young adults entitled to age-based subsidized dental care. Article III explores the determinants of dental ill-health as measured by the occurrence of caries, and the relationship between dental ill-health and dental care use. Article IV measures and explains income-related inequality in utilization. Data were from the 1996 Finnish Health Care Survey (I, II, IV) and the 1997 follow-up study included in the longitudinal study of the Northern Finland 1966 Birth Cohort (III). Utilization is considered as a multi-stage decision-making process and measured as the number of visits to the dentist. Modified count data models and concentration and horizontal equity indices were applied. The dentist's recall appeared very efficient at stimulating individuals to seek care. Dental pain, recall, and a low number of missing teeth positively affected utilization. Public subvention for dental care did not seem to statistically increase utilization. Among young adults, a perception of insufficient public service availability and recall were positively associated with the choice of a private dentist, whereas income and dentist density were positively associated with the number of visits to private dentists. Among cohort females, factors increasing caries were body mass index and intake of alcohol, sugar, and soft drinks, while those reducing caries were birth weight and adolescent school achievement. Among cohort males, caries was positively related to metropolitan residence and negatively related to healthy diet and education. Smoking increased caries, whereas regular tooth brushing, regular dental attendance and dental care use decreased caries. We found equity in young adults' utilization, but pro-rich inequity in the total number of visits to all dentists and in the probability of visiting a dentist for the whole sample. We observed inequity in the total number of visits to the dentist and in the probability of visiting a dentist, being pro-poor for public care but pro-rich for private care. The findings suggest that to enhance equal access to and use of dental care across population and income groups, attention should focus on supply factors and incentives that encourage people to contact dentists more often. Lowering co-payments and service fees and improving public availability would likely increase service use in both sectors. To attain favorable oral health, appropriate policies aimed at improving dental health education and reducing the detrimental effects of common risk factors on dental health should be strengthened. Providing equal access with respect to need for all people ought to take account of the segmentation of the service system, with its two parallel delivery systems and different supplier incentives to patients and dentists.
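
The income-related inequality analysis referred to above typically rests on the concentration index; the sketch below uses the common "convenient covariance" form of the index with invented data, not the study's dataset or exact estimator.

```python
import numpy as np

def concentration_index(utilisation, income):
    """CI = 2 * cov(use, income rank) / mean(use), using fractional ranks (poorest first)."""
    y = np.asarray(utilisation, dtype=float)
    order = np.argsort(income)
    rank = np.empty(len(y))
    rank[order] = (np.arange(len(y)) + 0.5) / len(y)
    return 2.0 * np.cov(y, rank, bias=True)[0, 1] / y.mean()

# Invented example: dental visits rising with income gives a positive (pro-rich) index.
income = [12, 18, 25, 33, 47, 60]   # thousands per year, hypothetical
visits = [0, 1, 1, 2, 2, 3]
print(round(concentration_index(visits, income), 3))
```

A positive index indicates use concentrated among the better-off (pro-rich) and a negative index the opposite, which is how the pro-rich and pro-poor patterns above are read.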

Relevance: 90.00%

Abstract:

The adequacy of anesthesia has been studied since the introduction of balanced general anesthesia. Commercial monitors based on electroencephalographic (EEG) signal analysis have been available for monitoring the hypnotic component of anesthesia since the beginning of the 1990s. Monitors measuring the depth of anesthesia assess the cortical function of the brain, and have gained acceptance during surgical anesthesia with most of the anesthetic agents used. However, due to frequent artifacts, they are considered unsuitable for monitoring consciousness in intensive care patients. The assessment of analgesia is one of the cornerstones of general anesthesia. Prolonged surgical stress may lead to increased morbidity and delayed postoperative recovery. However, no validated monitoring method is currently available for evaluating analgesia during general anesthesia. Awareness during anesthesia is caused by an inadequate level of hypnosis. This rare but severe complication of general anesthesia may lead to marked emotional stress and possibly posttraumatic stress disorder. In the present series of studies, the incidence of awareness and recall during outpatient anesthesia was evaluated and compared with that in inpatient anesthesia. A total of 1500 outpatients and 2343 inpatients underwent a structured interview. Clear intraoperative recollections were rare, the incidence being 0.07% in outpatients and 0.13% in inpatients. No significant differences emerged between outpatients and inpatients. However, significantly smaller doses of sevoflurane were administered to outpatients with awareness than to those without recollections (p<0.05). EEG artifacts in 16 brain-dead organ donors were evaluated during organ harvest surgery in a prospective, open, nonselective study. The source of the frontotemporal biosignals in brain-dead subjects was studied, and the resistance of the bispectral index (BIS) and Entropy to signal artifacts was compared. The hypothesis was that in brain-dead subjects, most of the biosignals recorded from the forehead would consist of artifacts. The original EEG was recorded, and State Entropy (SE), Response Entropy (RE), and BIS were calculated and monitored during solid organ harvest. SE differed from zero (inactive EEG) in 28%, RE in 29%, and BIS in 68% of the total recording time (p<0.0001 for all). The median values during the operation were SE 0.0, RE 0.0, and BIS 3.0. In four of the 16 organ donors, the EEG was not inactive, and unphysiologically distributed, nonreactive rhythmic theta activity was present in the original EEG signal. After the results from subjects with persistent residual EEG activity were excluded, SE, RE, and BIS differed from zero in 17%, 18%, and 62% of the recorded time, respectively (p<0.0001 for all). Due to various artifacts, the highest readings in all indices were recorded without neuromuscular blockade. The main sources of artifacts were electrocauterization, electromyography (EMG), 50-Hz artifact, handling of the donor, ballistocardiography, and electrocardiography. In a prospective, randomized study of 26 patients, the ability of the Surgical Stress Index (SSI) to differentiate between patients at two clinically different analgesic levels during shoulder surgery was evaluated. SSI values were lower in patients with an interscalene brachial plexus block than in patients without an additional plexus block. In all patients, anesthesia was maintained with desflurane, the concentration of which was targeted to maintain SE at 50.
Increased blood pressure or heart rate (HR), movement, and coughing were considered signs of intraoperative nociception and treated with alfentanil. Photoplethysmographic waveforms were collected from the arm contralateral to the operated side, and SSI was calculated offline. Two minutes after skin incision, SSI was not increased in the brachial plexus block group and was lower (38 ± 13) than in the control group (58 ± 13, p<0.005). Among the controls, one minute prior to alfentanil administration, the SSI value was higher than during periods of adequate antinociception, 59 ± 11 vs. 39 ± 12 (p<0.01). The total cumulative need for alfentanil was higher in the controls (2.7 ± 1.2 mg) than in the brachial plexus block group (1.6 ± 0.5 mg, p=0.008). Tetanic stimulation to the ulnar region of the hand increased SSI significantly only in patients whose brachial plexus block did not cover the site of stimulation. The prognostic value of EEG-derived indices was evaluated and compared with transcranial Doppler ultrasonography (TCD), serum neuron-specific enolase (NSE) and S-100B after cardiac arrest. Thirty patients resuscitated from out-of-hospital arrest and treated with induced mild hypothermia for 24 h were included. The original EEG signal was recorded, and the burst suppression ratio (BSR), RE, SE, and wavelet subband entropy (WSE) were calculated. Neurological outcome during the six-month period after arrest was assessed with the Glasgow-Pittsburgh Cerebral Performance Categories (CPC). Twenty patients had a CPC of 1-2, one patient had a CPC of 3, and nine patients died (CPC 5). BSR, RE, and SE differed between the good (CPC 1-2) and poor (CPC 3-5) outcome groups (p=0.011, p=0.011, and p=0.008, respectively) during the first 24 h after arrest. WSE was borderline higher in the good outcome group between 24 and 48 h after arrest (p=0.050). All patients with status epilepticus died, and their WSE values were lower (p=0.022). S-100B was lower in the good outcome group upon arrival at the intensive care unit (p=0.010). After hypothermia treatment, NSE and S-100B values were lower (p=0.002 for both) in the good outcome group. The pulsatile index was also lower in the good outcome group (p=0.004). In conclusion, the incidence of awareness in outpatient anesthesia did not differ from that in inpatient anesthesia. Outpatients are not at increased risk of intraoperative awareness relative to inpatients undergoing general anesthesia. SE, RE, and BIS showed non-zero values that would normally indicate cortical neuronal function, but in these subjects they were mostly due to artifacts recorded after the clinical diagnosis of brain death. Entropy was more resistant to artifacts than BIS. During general anesthesia and surgery, SSI values were lower in patients with an interscalene brachial plexus block covering the sites of nociceptive stimuli. In detecting nociceptive stimuli, SSI performed better than HR, blood pressure, or RE. BSR, RE, and SE differed between the good and poor neurological outcome groups during the first 24 h after cardiac arrest, and they may aid in differentiating patients with good neurological outcomes from those with poor outcomes after out-of-hospital cardiac arrest.
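
The State and Response Entropy indices above are proprietary, but the underlying idea, the entropy of the EEG power spectrum, can be illustrated generically; the sketch below computes a normalised spectral entropy for a signal segment and is not the commercial algorithm or the code used in these studies.

```python
import numpy as np

def spectral_entropy(x, fs, f_lo=0.8, f_hi=32.0):
    """Normalised Shannon entropy of the power spectrum within a frequency band.
    Values near 1 indicate a flat, noise-like spectrum; values near 0 a dominant rhythm."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 + 1e-12          # small offset avoids log(0)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    p = psd[band] / psd[band].sum()
    return float(-(p * np.log(p)).sum() / np.log(p.size))

fs = 128.0
t = np.arange(0, 4, 1 / fs)
broadband = np.random.default_rng(0).normal(size=t.size)  # noise-like signal, entropy near 1
rhythmic = 0.05 * np.sin(2 * np.pi * 2 * t)               # near-monorhythmic signal, low entropy
print(spectral_entropy(broadband, fs), spectral_entropy(rhythmic, fs))
```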

Relevance: 90.00%

Abstract:

We compare results of bottom trawl surveys off Washington, Oregon, and California in 1977, 1980, 1983, and 1986 to discern trends in population abundance, distribution, and biology. Catch per unit of effort, area-swept biomass estimates, and age and length compositions for 12 commercially important west coast groundfishes are presented to illustrate trends over the 10-year period. We discuss the precision, accuracy, and statistical significance of observed trends in abundance estimates. The influence of water temperature on the distribution of groundfishes is also briefly examined. Abundance estimates of canary rockfish, Sebastes pinniger, and yellowtail rockfish, S. flavidus, declined during the study period; greater declines were observed in Pacific ocean perch, S. alutus, lingcod, Ophiodon elongatus, and arrowtooth flounder, Atheresthes stomias. Biomass estimates of Pacific hake, Merluccius productus, and English, rex, and Dover soles (Pleuronectes vetulus, Errex zachirus, and Microstomus pacificus) increased, while bocaccio, S. paucispinis, and chilipepper, S. goodei, were stable. Sablefish, Anoplopoma fimbria, biomass estimates increased markedly from 1977 to 1980 and declined moderately thereafter. Precision was lowest for rockfishes, lingcod, and sablefish; it was highest for flatfishes because they were uniformly distributed. The accuracy of survey estimates could be gauged only for yellowtail and canary rockfish and sablefish. All fishery-based analyses produced much larger estimates of abundance than bottom trawl surveys, indicative of the true catchability of survey trawls. Population trends from all analyses compared well except in canary rockfish, the species that presents the greatest challenge to obtaining reasonable precision and one that casts doubts on the usefulness of bottom trawl surveys for estimating its abundance. (PDF file contains 78 pages.)
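
For reference, the area-swept biomass estimates mentioned above follow a standard calculation; the numbers below are invented, and the sketch omits the stratification and catchability corrections used in real surveys.

```python
# Swept-area biomass estimation with invented numbers; stratification and
# catchability corrections used in real assessments are omitted.
tows = [
    # (catch_kg, tow_distance_km, net_wing_spread_km)
    (120.0, 2.8, 0.012),
    (45.0, 3.1, 0.012),
    (80.0, 2.9, 0.012),
]
stratum_area_km2 = 1500.0

densities = [catch / (dist * spread) for catch, dist, spread in tows]  # kg per km^2 swept
mean_density = sum(densities) / len(densities)
biomass_tonnes = mean_density * stratum_area_km2 / 1000.0
print(f"mean density {mean_density:.0f} kg/km^2, biomass {biomass_tonnes:.0f} t")
```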

Relevance: 90.00%

Abstract:

This thesis discusses various methods for learning and optimization in adaptive systems. Overall, it emphasizes the relationship between optimization, learning, and adaptive systems; and it illustrates the influence of underlying hardware upon the construction of efficient algorithms for learning and optimization. Chapter 1 provides a summary and an overview.

Chapter 2 discusses a method for using feed-forward neural networks to filter the noise out of noise-corrupted signals. The networks use back-propagation learning, but they use it in a way that qualifies as unsupervised learning. The networks adapt based only on the raw input data; there are no external teachers providing information on correct operation during training. The chapter contains an analysis of the learning and develops a simple expression that, based only on the geometry of the network, predicts performance.

Chapter 3 explains a simple model of the piriform cortex, an area in the brain involved in the processing of olfactory information. The model was used to explore the possible effect of acetylcholine on learning and on odor classification. According to the model, the piriform cortex can classify odors better when acetylcholine is present during learning but not present during recall. This is interesting since it suggests that learning and recall might be separate neurochemical modes (corresponding to whether or not acetylcholine is present). When acetylcholine is turned off at all times, even during learning, the model exhibits behavior somewhat similar to Alzheimer's disease, a disease associated with the degeneration of cells that distribute acetylcholine.

Chapters 4, 5, and 6 discuss algorithms appropriate for adaptive systems implemented entirely in analog hardware. The algorithms inject noise into the systems and correlate the noise with the outputs of the systems. This allows them to estimate gradients and to implement noisy versions of gradient descent, without having to calculate gradients explicitly. The methods require only noise generators, adders, multipliers, integrators, and differentiators; and the number of devices needed scales linearly with the number of adjustable parameters in the adaptive systems. With the exception of one global signal, the algorithms require only local information exchange.
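
The noise-injection idea of Chapters 4, 5, and 6 can be sketched in a few lines of software: perturb the parameters with small random noise, observe the change in the error signal, and correlate the two to obtain a gradient estimate for noisy gradient descent. The code below is only an illustration of that principle with an arbitrary quadratic objective, not the analog-hardware formulation developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def loss(w):
    """Arbitrary quadratic objective standing in for the adaptive system's error signal."""
    target = np.array([1.0, -2.0, 0.5])
    return float(np.sum((w - target) ** 2))

def perturbation_gradient(w, sigma=0.01, n_samples=64):
    """Estimate the gradient by correlating injected noise with the change in the loss."""
    base = loss(w)
    grad = np.zeros_like(w)
    for _ in range(n_samples):
        noise = rng.normal(scale=sigma, size=w.shape)
        grad += (loss(w + noise) - base) * noise   # correlate noise with the output change
    return grad / (n_samples * sigma ** 2)

w = np.zeros(3)
for _ in range(200):                               # noisy gradient descent
    w -= 0.05 * perturbation_gradient(w)
print(np.round(w, 2))                              # approaches [1.0, -2.0, 0.5]
```

The appeal of this scheme for analog hardware is that it needs only noise sources, multipliers and integrators, and never an explicit gradient computation.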

Relevance: 90.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests: Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that surprisingly these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment, and sequentially presented choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments' models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
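
Since prospect theory emerges as the best-fitting class for most subjects, its value function is worth recalling; the parameter values below are the conventional Tversky-Kahneman estimates, included purely for illustration (probability weighting is omitted), and are not the values fitted in this thesis.

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Kahneman-Tversky value function: concave for gains, convex and steeper for losses.
    Parameter values are the conventional published estimates, not those fitted here;
    probability weighting is omitted for brevity."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

# A 50/50 gamble between +$100 and -$100 is unattractive to a loss-averse agent:
print(0.5 * prospect_value(100) + 0.5 * prospect_value(-100))  # negative
```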

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models: quasi-hyperbolic (α, β) discounting and fixed cost discounting, and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between 2 options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
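
The competing time-preference models in this experiment differ mainly in the discount function applied to the delayed payoff; a compact comparison with illustrative (not fitted) parameter values is sketched below, with the generalized-hyperbolic and fixed-cost variants omitted for brevity.

```python
def exponential(t, delta=0.9):                    # D(t) = delta**t
    return delta ** t

def hyperbolic(t, k=0.25):                        # D(t) = 1 / (1 + k*t)
    return 1.0 / (1.0 + k * t)

def quasi_hyperbolic(t, beta=0.7, delta=0.95):    # "present bias": D(0) = 1, D(t) = beta*delta**t
    return 1.0 if t == 0 else beta * delta ** t

# Value today of $100 received after t periods under each model (parameters illustrative):
for t in (0, 1, 5, 10):
    print(t, [round(100 * f(t)) for f in (exponential, hyperbolic, quasi_hyperbolic)])
```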

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioral theories in the "wild". We pay particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in a way distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion, and strategies for competitive pricing.

In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.