973 results for Error in substance


Relevance: 90.00%

Publisher:

Abstract:

Distributed Brillouin sensing of strain and temperature works by making spatially resolved measurements of the position of the measurand-dependent extremum of the resonance curve associated with the scattering process in the weakly nonlinear regime. Typically, measurements of backscattered Stokes intensity (the dependent variable) are made at a number of predetermined fixed frequencies covering the design measurand range of the apparatus and combined to yield an estimate of the position of the extremum. The measurand can then be found because its relationship to the position of the extremum is assumed known. We present analytical expressions relating the relative error in the extremum position to experimental errors in the dependent variable. This is done for two cases: (i) a simple non-parametric estimate of the mean based on moments and (ii) the case in which a least squares technique is used to fit a Lorentzian to the data. The question of statistical bias in the estimates is discussed and in the second case we go further and present for the first time a general method by which the probability density function (PDF) of errors in the fitted parameters can be obtained in closed form in terms of the PDFs of the errors in the noisy data.
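As a point of reference for case (ii), the sketch below fits a Lorentzian to noisy Stokes-intensity samples taken on a fixed frequency grid and reads off the peak position and its first-order uncertainty from the fit covariance. This is only an illustration of the fitting step; the frequency grid, noise level, and variable names are assumptions, and the closed-form error PDFs derived in the paper are not reproduced here.

import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, amplitude, f_peak, width):
    # Lorentzian resonance curve centred at f_peak with half-width 'width' (GHz)
    return amplitude / (1.0 + ((f - f_peak) / width) ** 2)

rng = np.random.default_rng(0)
freqs = np.linspace(10.60, 11.00, 41)        # probe frequencies in GHz (assumed grid)
true_params = (1.0, 10.80, 0.030)            # amplitude, peak position, half-width
stokes = lorentzian(freqs, *true_params) + 0.02 * rng.standard_normal(freqs.size)

# Least-squares fit; the parameter covariance gives a first-order error
# estimate for the fitted peak (extremum) position.
popt, pcov = curve_fit(lorentzian, freqs, stokes, p0=(0.8, 10.75, 0.05))
peak, peak_sigma = popt[1], np.sqrt(pcov[1, 1])
print(f"estimated peak: {peak:.4f} GHz +/- {peak_sigma * 1e3:.2f} MHz")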

Relevance: 90.00%

Publisher:

Abstract:

Background: It has been estimated that medication error harms 1–2% of patients admitted to general hospitals. There has been no previous systematic review of the incidence, cause or type of medication error in mental healthcare services. Methods: A systematic literature search was undertaken for studies that examined the incidence or cause of medication error in one or more stage(s) of the medication-management process in the setting of a community or hospital-based mental healthcare service. The results were examined in the context of each study's design and the denominator used. Results: All studies examined medication-management processes rather than outcomes. The reported rate of error was highest in studies that retrospectively examined drug charts, intermediate in those that relied on reporting by pharmacists to identify error, and lowest in those that relied on organisational incident-reporting systems. Only a few of the errors identified by the studies caused actual harm, mostly because they were detected and remedial action was taken before the patient received the drug. The focus of the research was on inpatients and on prescriptions dispensed by mental health pharmacists. Conclusion: Research on medication error in mental healthcare is limited. In particular, very little is known about the incidence of error in non-hospital settings or about the harm it causes. Evidence from other sources shows that a substantial number of adverse drug events are caused by psychotropic drugs. Some of these are preventable and are therefore probably due to medication error. On the basis of this, and of features of the organisation of mental healthcare that might predispose to medication error, priorities for future research are suggested.

Relevance: 90.00%

Publisher:

Abstract:

Purpose: To investigate the relationship between pupil diameter and refractive error and how refractive correction, target luminance, and accommodation modulate this relationship. Methods: Sixty emmetropic, myopic, and hyperopic subjects (age range, 18 to 35 years) viewed an illuminated target (luminance: 10, 100, 200, 400, 1000, 2000, and 4100 cd/m2) within a Badal optical system, at 0 diopters (D) and −3 D vergence, with and without refractive correction. Refractive error was corrected using daily disposable contact lenses. Pupil diameter and accommodation were recorded continuously using a commercially available photorefractor. Results: No significant difference in pupil diameter was found between the refractive groups at 0 D or −3 D target vergence, in the corrected or uncorrected conditions. As expected, pupil diameter decreased with increasing luminance. Target vergence had no significant influence on pupil diameter. In the corrected condition, at 0 D target vergence, the accommodation response was similar in all refractive groups. At −3 D target vergence, the emmetropic and myopic groups accommodated significantly more than the hyperopic group at all luminance levels. There was no correlation between accommodation response and pupil diameter or refractive error in any refractive group. In the uncorrected condition, the accommodation response was significantly greater in the hyperopic group than in the myopic group at all luminance levels, particularly for near viewing. In the hyperopic group, the accommodation response was significantly correlated with refractive error but not pupil diameter. In the myopic group, accommodation response level was not correlated with refractive error or pupil diameter. Conclusions: Refractive error has no influence on pupil diameter, irrespective of refractive correction or accommodative demand. This suggests that the pupil is controlled by the pupillary light reflex and is not driven by retinal blur.

Relevance: 90.00%

Publisher:

Abstract:

It has never been easy for manufacturing companies to understand their confidence level in terms of how accurately, and with what degree of flexibility, parts can be made. This brings uncertainty in finding the most suitable manufacturing method as well as in controlling their product and process verification systems. The aim of this research is to develop a system for capturing the company's knowledge and expertise and then reflect it in an MRP (Manufacturing Resource Planning) system. A key activity here is measuring manufacturing and machining capabilities to a reasonable confidence level. For this purpose an in-line control measurement system is introduced to the company. Using SPC (Statistical Process Control) not only helps to predict the trend in the manufacturing of parts but also minimises human error in measurement. A Gauge R&R (Repeatability and Reproducibility) study identifies problems in measurement systems. Measurement is like any other process in terms of variability, and reducing this variation via an automated machine probing system helps to avoid defects in future products.

Developments in the aerospace, nuclear, and oil and gas industries demand materials with high performance and high temperature resistance under corrosive and oxidising environments. Superalloys were developed in the latter half of the 20th century as high-strength materials for such purposes. For the same characteristics, superalloys are considered difficult-to-cut alloys when it comes to forming and machining. Furthermore, owing to the sensitivity of superalloy applications, in many cases they must be manufactured to tight tolerances. In addition, superalloys, specifically nickel-based ones, have unique features such as low thermal conductivity due to the high amount of nickel in their composition. This causes a high surface temperature on the workpiece at the machining stage, which leads to deformation in the final product.

As with every process, material variations have a significant impact on machining quality. The main variations originate from chemical composition and mechanical hardness; the non-uniform distribution of metal elements is a major source of variation in metallurgical structures. Different heat-treatment standards are designed for processing the material to the desired hardness levels based on the application. In order to take corrective actions, a study of the material aspects of superalloys has been conducted, in which samples from different batches of material were analysed. This involved material preparation for microscopy analysis and assessing the effect of chemical composition on hardness (before and after heat treatment). Some of the results are discussed and presented in this paper.
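As a minimal illustration of the SPC element mentioned above, the sketch below computes X-bar and R control limits from subgrouped in-line measurements and flags out-of-control subgroups. The data array and subgroup size are assumptions for illustration; A2, D3 and D4 are the standard control-chart constants for subgroups of five.

import numpy as np

measurements = np.array([  # assumed: 8 subgroups of 5 probed diameters (mm)
    [25.01, 24.99, 25.02, 25.00, 24.98],
    [25.03, 25.00, 24.97, 25.01, 25.02],
    [24.99, 25.00, 25.01, 24.98, 25.00],
    [25.02, 25.01, 24.99, 25.00, 25.03],
    [25.00, 24.98, 25.02, 25.01, 24.99],
    [25.01, 25.02, 25.00, 24.97, 25.00],
    [24.98, 25.00, 25.01, 25.02, 24.99],
    [25.00, 25.01, 24.99, 25.00, 25.02],
])
A2, D3, D4 = 0.577, 0.0, 2.114   # control-chart constants for subgroup size 5

xbar = measurements.mean(axis=1)      # subgroup means
ranges = measurements.ptp(axis=1)     # subgroup ranges
xbarbar, rbar = xbar.mean(), ranges.mean()

limits = {
    "Xbar UCL": xbarbar + A2 * rbar, "Xbar LCL": xbarbar - A2 * rbar,
    "R UCL": D4 * rbar,              "R LCL": D3 * rbar,
}
out_of_control = np.where((xbar > limits["Xbar UCL"]) | (xbar < limits["Xbar LCL"]))[0]
print(limits, "out-of-control subgroups:", out_of_control)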

Relevance: 90.00%

Publisher:

Abstract:

The aim of this paper is to explore engineering lecturers' experiences of generic skills assessment within an active learning context in Malaysia. Using a case-study methodology, lecturers' assessment approaches were investigated with regard to three generic skills: verbal communication, problem solving and teamwork. Because the assessment of such skills is important to learning, it is this assessment that is discussed. The findings show the lecturers' initial feedback to have been generally lacking in substance, since they have limited knowledge and experience of assessing generic skills. Typical barriers identified during the study included: generic skills not being well defined; inadequate alignment across the engineering curricula and teaching approaches; assessment practices that were too flexible, particularly those to do with implementation; and a failure to keep up to date with industrial requirements. The emerging findings of the interviews reinforce the argument that there is clearly much room for improvement in the present state of generic skills assessment.

Relevance: 90.00%

Publisher:

Abstract:

The current study was designed to build on and extend the existing knowledge base of factors that cause, maintain, and influence child molestation. Theorized links among the type of offender and the offender's levels of moral development and social competence in the perpetration of child molestation were investigated. The conceptual framework for the study is based on the cognitive developmental stages of moral development as proposed by Kohlberg, the unified theory, or Four-Preconditions Model, of child molestation as proposed by Finkelhor, and the Information-Processing Model of Social Skills as proposed by McFall. The study sample consisted of 127 adult male child molesters participating in outpatient group therapy. All subjects completed a Self-Report Questionnaire which included questions designed to obtain relevant demographic data, questions similar to those used by the researchers for the Massachusetts Treatment Center: Child Molester Typology 3's social competency dimension, the Defining Issues Test (DIT) short form, the Social Avoidance and Distress Scale (SADS), the Rathus Assertiveness Schedule (RAS), and the Questionnaire Measure of Empathic Tendency (Empathy Scale). Data were analyzed utilizing confirmatory factor analysis, t-tests, and chi-square statistics. Partial support was found for the hypothesis that moral development is a separate but correlated construct from social competence. As predicted, although the actual mean score differences were small, a statistically significant difference was found between the mean DIT P scores of the subject sample and those of the general male population, suggesting that child molesters, as a group, function at a lower level of moral development than the general male population; in addition, the situational offenders in the study sample demonstrated a statistically significantly higher level of moral development than the preferential offenders. The data did not support the hypothesis that situational offenders would demonstrate lower levels of social competence than preferential offenders. Relatively little significance is placed on this finding, however, because the measure for the social competency variable was likely subject to considerable measurement error, in that the items used as indicators were not clearly defined. The last hypothesis, which involved potential differences in social anxiety, assertion skills, and empathy between the situational and preferential offender types, was not supported by the data.

Relevance: 90.00%

Publisher:

Abstract:

Optical imaging is an emerging technology for non-invasive breast cancer diagnostics. In recent years, portable and patient-comfortable hand-held optical imagers have been developed for two-dimensional (2D) tumor detection. However, these imagers are not capable of three-dimensional (3D) tomography because they cannot register the positional information of the hand-held probe onto the imaged tissue. A hand-held optical imager with 3D tomography capabilities has been developed in our Optical Imaging Laboratory, as demonstrated in tissue phantom studies. The overall goal of my dissertation is the translation of our imager to the clinical setting for 3D tomographic imaging in human breast tissues. A systematic experimental approach was designed and executed as follows: (i) fast 2D imaging, (ii) coregistered imaging, and (iii) 3D tomographic imaging studies. (i) Fast 2D imaging was initially demonstrated in tissue phantoms (1% Liposyn solution) and in vitro (minced chicken breast and 1% Liposyn). A 0.45 cm3 fluorescent target at 1:0 contrast ratio was detectable up to 2.5 cm deep. Fast 2D imaging experiments performed in vivo with healthy female subjects also detected a 0.45 cm3 fluorescent target superficially placed ∼2.5 cm under the breast tissue. (ii) Coregistered imaging was automated and validated in phantoms with ∼0.19 cm error in the probe’s positional information. Coregistration also improved the target depth detection to 3.5 cm using a multi-location imaging approach. Coregistered imaging was further validated in vivo, although the error in the probe’s positional information increased to ∼0.9 cm (subject to soft tissue deformation and movement). (iii) Three-dimensional tomography studies were successfully demonstrated in vitro using 0.45 cm3 fluorescent targets. The feasibility of 3D tomography was demonstrated for the first time in breast tissues using the hand-held optical imager, wherein a superficially placed 0.45 cm3 fluorescent target was recovered along with artifacts. Diffuse optical imaging studies were performed in two breast cancer patients with invasive ductal carcinoma. The images showed greater absorption at the tumor sites (as observed from x-ray mammography, ultrasound, and/or MRI). In summary, my dissertation demonstrated the potential of a hand-held optical imager for 2D breast tumor detection and 3D breast tomography, holding promise for extensive clinical translation efforts.

Relevance: 90.00%

Publisher:

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even as the value of n increases dramatically in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
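For orientation, the latent structure (PARAFAC) factorization that the collapsed Tucker class generalizes can be written as follows; the notation is chosen for illustration and is not taken verbatim from the chapter.

% Latent-class (PARAFAC) factorization of a multivariate categorical pmf:
% each of the p variables is conditionally independent given a latent class h
% with weight \pi_h; k is the nonnegative rank of the probability tensor.
\[
  \Pr(y_1 = c_1, \dots, y_p = c_p)
    \;=\; \sum_{h=1}^{k} \pi_h \prod_{j=1}^{p} \psi^{(j)}_{h c_j},
  \qquad \sum_{h=1}^{k} \pi_h = 1, \quad \sum_{c} \psi^{(j)}_{h c} = 1 .
\]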

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
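As a rough illustration of Gaussian posterior approximation for a log-linear model, the sketch below uses a generic Laplace-type approximation (posterior mode plus inverse Hessian). This is a stand-in, not the KL-optimal approximation under Diaconis--Ylvisaker priors derived in the chapter; the 2x2 table, design matrix, and Gaussian prior are all assumptions.

import numpy as np
from scipy.optimize import minimize

counts = np.array([18, 7, 12, 3])                 # assumed 2x2 table, flattened
X = np.array([[1, 0, 0, 0],                       # assumed saturated log-linear design:
              [1, 1, 0, 0],                       # intercept, two main effects,
              [1, 0, 1, 0],                       # and an interaction
              [1, 1, 1, 1]], dtype=float)

def neg_log_post(beta, prior_var=10.0):
    # Poisson log-linear likelihood with a weak Gaussian prior (assumption)
    eta = X @ beta
    return -(counts @ eta - np.exp(eta).sum()) + beta @ beta / (2 * prior_var)

fit = minimize(neg_log_post, np.zeros(4), method="BFGS")
mean = fit.x                                      # Gaussian mean = posterior mode
cov = fit.hess_inv                                # Gaussian covariance ~ inverse Hessian
print("approx. posterior mean:", mean.round(3))
print("approx. posterior sd:  ", np.sqrt(np.diag(cov)).round(3))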

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
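The basic quantity this chapter builds on, waiting times between exceedances of a high threshold, is easy to compute; the sketch below does so for a toy autocorrelated series. The AR(1) model and the 95th-percentile threshold are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(1)
n = 10_000
x = np.empty(n)
x[0] = rng.standard_normal()
for t in range(1, n):                      # temporally dependent toy series
    x[t] = 0.7 * x[t - 1] + rng.standard_normal()

threshold = np.quantile(x, 0.95)           # high threshold (assumed level)
exceed_times = np.flatnonzero(x > threshold)
waiting_times = np.diff(exceed_times)      # gaps between successive exceedances

# Strong extremal dependence shows up as clustering: many short waits mixed
# with long quiet spells, rather than roughly geometric gaps.
print("mean wait:", waiting_times.mean(),
      " fraction of waits == 1:", np.mean(waiting_times == 1))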

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
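To make the "random subsets of data" kernel approximation concrete, the sketch below runs a naive Metropolis-Hastings chain in which each log-likelihood evaluation is replaced by a scaled random-subset estimate. The Gaussian-mean model, subset size, and step size are assumptions for illustration; this is one example of the kind of approximating kernel the chapter's framework analyses, not the chapter's own algorithm.

import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(loc=1.5, scale=1.0, size=100_000)   # assumed N(mu, 1) data
N, m = data.size, 1_000                               # full size vs. subset size

def subset_loglik(mu):
    # scaled-up log-likelihood estimate from a random subset of the data
    batch = rng.choice(data, size=m, replace=False)
    return (N / m) * np.sum(-0.5 * (batch - mu) ** 2)

mu, chain = 0.0, []
for _ in range(2_000):
    prop = mu + 0.02 * rng.standard_normal()          # random-walk proposal, flat prior
    # Approximate accept ratio: subset estimates replace exact log-likelihoods.
    if np.log(rng.uniform()) < subset_loglik(prop) - subset_loglik(mu):
        mu = prop
    chain.append(mu)
print("approximate posterior mean of mu:", np.mean(chain[500:]))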

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
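For reference, the truncated-normal data augmentation sampler for probit regression discussed here looks as follows in outline. The simulated design, flat prior, and iteration count are assumptions; the rare-event intercept puts the sampler in the slow-mixing regime the chapter analyses.

import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(3)
n, p = 5_000, 2
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.array([-2.5, 0.5])                     # rare-event intercept (assumed)
y = (X @ beta_true + rng.standard_normal(n) > 0).astype(float)

XtX_inv = np.linalg.inv(X.T @ X)
beta = np.zeros(p)
draws = []
for _ in range(500):
    # 1) z_i | beta, y_i: truncated normal, positive if y_i = 1, negative otherwise
    mean = X @ beta
    lower = np.where(y == 1, -mean, -np.inf)          # standardized truncation bounds
    upper = np.where(y == 1, np.inf, -mean)
    z = mean + truncnorm.rvs(lower, upper, size=n, random_state=rng)
    # 2) beta | z: Gaussian conditional under a flat prior
    mean_beta = XtX_inv @ (X.T @ z)
    beta = rng.multivariate_normal(mean_beta, XtX_inv)
    draws.append(beta.copy())
print("posterior mean of beta:", np.mean(draws[100:], axis=0).round(3))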

Relevance: 90.00%

Publisher:

Abstract:

The map representation of an environment should be selected based on its intended application. For example, a geometrically accurate map describing the Euclidean space of an environment is not necessarily the best choice if only a small subset of its features is required. One possible subset is the orientations of the flat surfaces in the environment, represented by a special parameterization of normal vectors called axes. Devoid of positional information, the entries of an axis map form a non-injective relationship with the flat surfaces in the environment, which results in physically distinct flat surfaces being represented by a single axis. This drastically reduces the complexity of the map, but retains important information about the environment that can be used in meaningful applications in both two and three dimensions. This thesis presents axis mapping, which is an algorithm that accurately and automatically estimates an axis map of an environment based on sensor measurements collected by a mobile platform. Furthermore, two major applications of axis maps are developed and implemented. First, the LiDAR compass is a heading estimation algorithm that compares measurements of axes with an axis map of the environment. Pairing the LiDAR compass with simple translation measurements forms the basis for an accurate two-dimensional localization algorithm. It is shown that this algorithm eliminates the growth of heading error in both indoor and outdoor environments, resulting in accurate localization over long distances. Second, in the context of geotechnical engineering, a three-dimensional axis map is called a stereonet, which is used as a tool to examine the strength and stability of a rock face. Axis mapping provides a novel approach to create accurate stereonets safely, rapidly, and inexpensively compared to established methods. The non-injective property of axis maps is leveraged to probabilistically describe the relationships between non-sequential measurements of the rock face. The automatic estimation of stereonets was tested in three separate outdoor environments. It is shown that axis mapping can accurately estimate stereonets while improving safety, requiring significantly less time and effort, and lowering costs compared to traditional and current state-of-the-art approaches.
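The sketch below illustrates the flavor of the LiDAR-compass comparison in two dimensions: measured surface axes (orientations modulo pi, normal sign discarded) are matched to a known axis map and the heading offset is taken as a circular mean on the doubled-angle circle. The map angles, measurements, and sign convention are assumptions, not the thesis's implementation.

import numpy as np

map_axes = np.deg2rad([0.0, 90.0])          # assumed map: two orthogonal wall axes
measured = np.deg2rad([3.2, 92.8, 2.5])     # axes observed in the platform frame

def wrap_axis(a):
    # wrap an axis angle into [0, pi); axes are ambiguous by 180 degrees
    return np.mod(a, np.pi)

# Residual of each measurement against its nearest map axis, in (-pi/2, pi/2].
residuals = []
for a in measured:
    diffs = wrap_axis(a - map_axes)
    diffs = np.where(diffs > np.pi / 2, diffs - np.pi, diffs)
    residuals.append(diffs[np.argmin(np.abs(diffs))])
residuals = np.array(residuals)

# Circular mean on doubled angles respects the pi-periodicity of axes.
heading_offset = 0.5 * np.arctan2(np.sin(2 * residuals).sum(),
                                  np.cos(2 * residuals).sum())
print("estimated heading offset (deg):", np.degrees(heading_offset).round(2))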

Relevance: 90.00%

Publisher:

Abstract:

Clear and distinct perception is the element on which Descartes's metaphysical certainty rests. Nevertheless, the skeptical arguments raised against the Cartesian method of doubt have shown the need to find a justification for the criterion of clear and distinct perception itself. Against attempts based on the indubitability of perception or on the guarantee arising from divine goodness, an alternative pragmatist justification will be defended.

Relevance: 90.00%

Publisher:

Abstract:

Ankle sprains are the most common injuries in sports, usually causing damage to the lateral ligaments. Recurrence usually results in permanent instability and thus in a loss of proprioception. This, together with residual symptoms, is what is known as chronic ankle instability (CAI) or, when functional, functional ankle instability (FAI). Attempts to address this problem involve improving musculoskeletal stability and proprioception through the application of bandages and the performance of exercises. The aim of this study has been to review articles (meta-analyses, systematic reviews and revisions) published in 2009-2015 in PubMed, Medline, ENFISPO and BUCea, using keywords such as “sprain instability”, “sprain proprioception”, “chronic ankle instability”. The evidence affirms that proprioception is decreased in patients who suffer from CAI. A rehabilitation exercise regimen is indicated as a treatment because it generates a subjective improvement reported by the patient, and the application of bandages works as a sprain-prevention method by limiting the range of motion, reducing joint instability and increasing confidence during exercise. As podiatrists we should recommend proprioception exercises to all athletes preventively, and to those with CAI or FAI as part of a rehabilitation programme, together with the application of bandages. However, further studies should be generated focusing on ways of improving proprioception and on the exercise patterns that provide the maximum benefit.

Relevance: 90.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-07

Relevance: 90.00%

Publisher:

Abstract:

Humans and robots have complementary strengths in performing assembly operations. Humans are very good at perception tasks in unstructured environments: they are able to recognize and locate a part from a box of miscellaneous parts, and they are also very good at complex manipulation in tight spaces. Their sensory characteristics, motor abilities, knowledge and skills give humans the ability to react to unexpected situations and resolve problems quickly. In contrast, robots are very good at pick-and-place operations and are highly repeatable in placement tasks. Robots can perform tasks at high speeds while maintaining precision, can operate for long periods of time, and are very good at applying high forces and torques. Typically, robots are used in mass production, while small-batch and custom production operations predominantly use manual labor. The high labor cost is making it difficult for small and medium manufacturers, which are mainly involved in small-batch and custom production, to remain cost competitive in high-wage markets. They need to find a way to reduce the labor cost of assembly operations. Purely robotic cells will not be able to provide them the necessary flexibility. Creating hybrid cells where humans and robots can collaborate in close physical proximity is a potential solution. The underlying idea behind such cells is to decompose assembly operations into tasks such that humans and robots can collaborate by performing the sub-tasks that are suitable for them. Realizing hybrid cells that enable effective human-robot collaboration is challenging. This dissertation addresses the following three computational issues involved in developing and utilizing hybrid assembly cells:

- We should be able to automatically generate plans to operate hybrid assembly cells to ensure efficient cell operation. This requires generating feasible assembly sequences and instructions for robots and human operators, respectively. Automated planning poses two challenges. First, generating operation plans for complex assemblies is challenging; the complexity can come from the combinatorial explosion caused by the size of the assembly or from the complex paths needed to perform the assembly. Second, generating feasible plans requires accounting for robot and human motion constraints. The first objective of the dissertation is to develop the underlying computational foundations for automatically generating plans for the operation of hybrid cells, addressing both assembly complexity and motion constraints.

- The collaboration between humans and robots in the assembly cell will only be practical if human safety can be ensured during the assembly tasks that require collaboration. The second objective of the dissertation is to evaluate different options for real-time monitoring of the state of the human operator with respect to the robot, and to develop strategies for taking appropriate measures when a planned robot move may compromise the operator's safety. In order to be competitive in the market, the developed solution will have to account for cost without significantly compromising quality.

- In the envisioned hybrid cell, we will be relying on human operators to bring parts into the cell. If the human operator makes an error in selecting the part or fails to place it correctly, the robot will be unable to correctly perform the task assigned to it. If the error goes undetected, it can lead to a defective product and to inefficiencies in the cell operation. The cause of human error can be either confusion due to poor-quality instructions or the operator not paying adequate attention to the instructions. In order to ensure smooth and error-free operation of the cell, we will need to monitor the state of the assembly operations in the cell. The third objective of the dissertation is to identify and track parts in the cell and automatically generate instructions for taking corrective actions if a human operator deviates from the selected plan, as sketched below. Potential corrective actions may involve re-planning if it is possible to continue assembly from the current state, or issuing a warning and generating instructions to undo the current task.
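A minimal sketch of the corrective-action decision logic described in the third objective follows. The state representation, function names, and the feasibility rule are hypothetical placeholders, not the dissertation's implementation.

from dataclasses import dataclass

@dataclass
class CellState:
    expected_part: str
    detected_part: str
    completed_steps: list

def replanning_feasible(state: CellState) -> bool:
    # Hypothetical check: can the remaining assembly continue from this state?
    # Placeholder rule for the sketch: tolerate the deviation only before step 3.
    return len(state.completed_steps) < 3

def corrective_action(state: CellState) -> str:
    if state.detected_part == state.expected_part:
        return "continue: proceed with the planned task"
    if replanning_feasible(state):
        return (f"re-plan: warn operator, resequence remaining tasks around "
                f"part '{state.detected_part}'")
    return (f"undo: warn operator, generate instructions to remove "
            f"part '{state.detected_part}' and restore the previous state")

print(corrective_action(CellState("bracket_A", "bracket_B", ["step1"])))
print(corrective_action(CellState("bracket_A", "bracket_B", ["step1", "step2", "step3"])))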

Relevance: 90.00%

Publisher:

Abstract:

Swells are found in all oceans and strongly influence the wave climate and air-sea processes. The poorly known swell dissipation is the largest source of error in wave forecasts and hindcasts. We use synthetic aperture radar data to identify swell sources and trajectories, allowing a statistically significant estimation of swell dissipation. We mined the entire Envisat mission 2003–2012 to find suitable storms with swells (13 < T < 18 s) that are observed several times along their propagation. This database of swell events provides a comprehensive view of swell extending previous efforts. The analysis reveals that swell dissipation weakly correlates with the wave steepness, wind speed, orbital wave velocity, and the relative direction of wind and waves. Although several negative dissipation rates are found, there are uncertainties in the synthetic aperture radar-derived swell heights and dissipation rates. An acceptable range of the swell dissipation rate is −0.1 to 6 × 10⁻⁷ m⁻¹ with a median of 1 × 10⁻⁷ m⁻¹.

Relevance: 90.00%

Publisher:

Abstract:

We consider the a posteriori error analysis and hp-adaptation strategies for hp-version interior penalty discontinuous Galerkin methods for second-order partial differential equations with nonnegative characteristic form on anisotropically refined computational meshes with anisotropically enriched elemental polynomial degrees. In particular, we exploit duality based hp-error estimates for linear target functionals of the solution and design and implement the corresponding adaptive algorithms to ensure reliable and efficient control of the error in the prescribed functional to within a given tolerance. This involves exploiting both local isotropic and anisotropic mesh refinement and isotropic and anisotropic polynomial degree enrichment. The superiority of the proposed algorithm in comparison with standard hp-isotropic mesh refinement algorithms and an h-anisotropic/p-isotropic adaptive procedure is illustrated by a series of numerical experiments.
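For orientation, the duality-based estimates referred to here follow the generic dual-weighted-residual pattern sketched below; the notation is illustrative and is not the paper's precise estimator.

% Generic dual-weighted-residual structure behind goal-oriented hp-adaptivity:
% the error in a linear target functional J is represented through elementwise
% residuals R_kappa of the discrete solution u_h weighted by the dual (adjoint)
% solution z, and bounded by local indicators eta_kappa that drive anisotropic
% h- and p-refinement.
\[
  J(u) - J(u_h) \;=\; \sum_{\kappa \in \mathcal{T}_h} R_\kappa\bigl(u_h,\; z - z_h\bigr),
  \qquad
  \lvert J(u) - J(u_h) \rvert \;\le\; \sum_{\kappa \in \mathcal{T}_h} \eta_\kappa,
  \quad \eta_\kappa := \bigl\lvert R_\kappa\bigl(u_h,\; z - z_h\bigr) \bigr\rvert .
\]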