28 results for practical epistemology analysis


Relevance: 30.00%

Abstract:

The paper analyzes the performance of the unconstrained filtered-x LMS (FxLMS) algorithm for active noise control (ANC), where the constraints that the controller must be causal and have a finite impulse response are removed. It is shown that the unconstrained FxLMS algorithm, if stable, always converges to the true optimum filter, even if the estimate of the secondary path is imperfect, and that its final mean square error is independent of the secondary path. Moreover, we show that the necessary and sufficient stability condition for the feedforward unconstrained FxLMS is that the maximum phase error of the secondary-path estimate be within 90°; for the feedback unconstrained FxLMS this is only a necessary condition. The significance of the analysis for a practical system is also discussed. Finally, we show how the obtained results can guide the design of a robust feedback ANC headset.
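The filtered-x structure the abstract analyzes can be illustrated with a minimal time-domain simulation. All path coefficients, the filter length, and the step size below are assumed for illustration and are not taken from the paper; the secondary-path estimate is deliberately imperfect but keeps its phase error well within 90°.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paths (assumed for illustration, not from the paper)
P = np.array([0.8, 0.5, 0.2])        # primary path
S = np.array([0.6, 0.3])             # true secondary path
S_hat = np.array([0.55, 0.35])       # imperfect estimate, small phase error

L = 8                                 # adaptive FIR filter length
w = np.zeros(L)
mu = 0.01                             # step size
N = 20000
x = rng.standard_normal(N)            # reference noise

d = np.convolve(x, P)[:N]             # disturbance at the error microphone
xf = np.convolve(x, S_hat)[:N]        # filtered-x: reference through S_hat

x_buf = np.zeros(L)
xf_buf = np.zeros(L)
y_hist = np.zeros(len(S))             # recent controller outputs, fed to S
e = np.zeros(N)

for n in range(N):
    x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
    xf_buf = np.roll(xf_buf, 1); xf_buf[0] = xf[n]
    y = w @ x_buf                     # anti-noise signal
    y_hist = np.roll(y_hist, 1); y_hist[0] = y
    e[n] = d[n] - S @ y_hist          # residual after the true secondary path
    w += mu * e[n] * xf_buf           # FxLMS weight update

print(np.mean(e[:500] ** 2), np.mean(e[-1000:] ** 2))
```

Despite the mismatched secondary-path estimate, the residual error converges towards zero, consistent with the tolerance to estimation error discussed in the abstract.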

Relevance: 30.00%

Abstract:

Context: During development, managers, analysts and designers often need to know whether enough requirements analysis work has been done and whether it is safe to proceed to the design stage. Objective: This paper describes a new, simple and practical method for assessing confidence in a set of requirements. Method: We identified four confidence factors and used a goal-oriented framework with a simple ordinal scale to develop a method for assessing confidence. We illustrate the method and show how it has been applied to a real systems development project. Results: We show how assessing confidence in the requirements could have revealed problems in this project earlier and so saved both time and money. Conclusion: Our meta-level assessment of requirements provides a practical and pragmatic method for managers, analysts and designers who need to know when sufficient requirements analysis has been performed.
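Only as an illustration of the kind of ordinal-scale assessment the abstract describes: the factor names and the weakest-link aggregation rule below are hypothetical and do not reproduce the paper's actual framework.

```python
from enum import IntEnum

class Confidence(IntEnum):
    NONE = 0
    LOW = 1
    MEDIUM = 2
    HIGH = 3

# Hypothetical factor names -- the paper identifies four confidence
# factors, but this sketch does not reproduce its framework.
factors = {
    "stakeholder_agreement": Confidence.HIGH,
    "requirements_stability": Confidence.MEDIUM,
    "domain_understanding": Confidence.MEDIUM,
    "validation_coverage": Confidence.LOW,
}

# Weakest-link aggregation: overall confidence cannot exceed the
# least-assessed factor on the ordinal scale.
overall = min(factors.values())
print(overall.name)  # LOW
```

An ordinal scale permits comparisons (min, max, median) but not averaging, which is why a weakest-link rule is a natural choice here.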

Relevance: 30.00%

Abstract:

Classical risk assessment approaches for animal diseases are driven by the probability of release, exposure and the consequences of a hazard affecting a livestock population. Once a pathogen enters domestic livestock, the potential risks of exposure and infection, both to animals and to people, extend through a chain of economic activities related to producing, buying and selling animals and animal products. Therefore, to understand the economic drivers of animal diseases in different ecosystems and to devise effective and efficient measures to manage disease risks in a country or region, the entire value chain and the related markets for animals and products need to be analysed, so as to arrive at practical and cost-effective risk management options agreed by the actors in those value chains. Value chain analysis enriches disease risk assessment by providing a framework for interdisciplinary collaboration, which seems to be in increasing demand for problems concerning infectious livestock diseases. The best way to achieve this is to ensure that veterinary epidemiologists and social scientists work together throughout the process at all levels.

Relevance: 30.00%

Abstract:

Deception-detection is the crux of Turing's experiment to examine machine thinking, conveyed through a capacity to respond with sustained and satisfactory answers to unrestricted questions put by a human interrogator. However, in the 60 years since the publication of Computing Machinery and Intelligence, little agreement exists on a canonical format for Turing's textual game of imitation, deception and machine intelligence. This research recovers, from the tangle of philosophical claims, counter-claims and rebuttals, Turing's own distinct five-minute question-answer imitation game, which he envisioned practicalised in two different ways: (a) a two-participant, interrogator-witness viva voce; (b) a three-participant comparison of a machine with a human, both questioned simultaneously by a human interrogator. Using the 18th Loebner Prize for Artificial Intelligence contest, and Colby et al.'s 1972 transcript-analysis paradigm, this research practicalised Turing's imitation game with over 400 human participants and 13 machines across three original experiments. Results show that, at the current state of technology, a deception rate of 8.33% was achieved by machines in 60 human-machine simultaneous comparison tests. Results also show that more than 1 in 3 reviewers succumbed to hidden-interlocutor misidentification after reading transcripts from experiment 2. Deception-detection is essential to uncover the increasing number of malfeasant programmes, such as CyberLover, developed to steal identities and financially defraud users in chatrooms across the Internet. Practicalising Turing's two tests can assist in understanding natural dialogue and in mitigating the risk from cybercrime.
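The quoted deception rate is consistent with 5 successful deceptions out of the 60 simultaneous-comparison tests, as a quick arithmetic check shows (the count of 5 is inferred from the quoted figures, not stated in the abstract):

```python
# 8.33% of 60 simultaneous-comparison tests corresponds to 5 successful
# deceptions by machines: 5/60 = 0.0833..., i.e. 8.33% to two decimals.
tests = 60
deceptions = 5
rate = 100 * deceptions / tests
print(round(rate, 2))  # 8.33
```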

Relevance: 30.00%

Abstract:

Value chain studies, including production system and market chain studies, are essential to value chain analysis, which, when coupled with disease risk analysis, is a powerful tool for identifying key constraints and opportunities for risk-based disease control in a livestock production and marketing system. Several production system and market chain studies have been conducted to support disease control interventions in South East Asia. This practical aid summarizes experiences and lessons learned from the implementation of such value chain studies in South East Asia. Based on these experiences, it prioritizes the data required for the respective purposes of a value chain study and recommends data collection and data analysis tools. This practical aid is intended as an adjunct to the FAO value chain approach and animal disease risk management guidelines document. Further practical advice is provided for the more effective use of value chain studies in South and South East Asia as part of animal health decision support.

Relevance: 30.00%

Abstract:

This paper aims to clarify the potential confusion about the application of attribution analysis to real estate portfolios. Its three primary objectives are:
· To review, and as far as possible reconcile, the varying approaches to attribution analysis evident in the literature.
· To give a clear statement of the purposes of attribution analysis, and its meaning for real-world property managers.
· To show, using real portfolio data from IPD's UK performance measurement service, the practical implications of applying different attribution methods.
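One widely used attribution decomposition splits active return into allocation and selection effects (a Brinson-style scheme). The paper compares several competing variants; this sketch shows only one of them, with hypothetical sector weights and returns.

```python
# Illustrative Brinson-style attribution for a two-sector property
# portfolio. Weights and returns are hypothetical; the paper compares
# several attribution formulae, and this is just one variant.
sectors = ["retail", "office"]
w_p = {"retail": 0.6, "office": 0.4}    # portfolio weights
w_b = {"retail": 0.5, "office": 0.5}    # benchmark weights
r_p = {"retail": 0.07, "office": 0.03}  # portfolio sector returns
r_b = {"retail": 0.06, "office": 0.04}  # benchmark sector returns

rb_total = sum(w_b[s] * r_b[s] for s in sectors)  # benchmark total return
rp_total = sum(w_p[s] * r_p[s] for s in sectors)  # portfolio total return

# Allocation: reward for overweighting sectors that beat the benchmark total.
allocation = sum((w_p[s] - w_b[s]) * (r_b[s] - rb_total) for s in sectors)
# Selection (portfolio-weighted variant, which folds in interaction):
selection = sum(w_p[s] * (r_p[s] - r_b[s]) for s in sectors)

# The two effects sum exactly to the active return in this variant.
print(round(allocation + selection, 6), round(rp_total - rb_total, 6))
```

Which variant is used matters in practice: with benchmark-weighted selection, a residual interaction term appears, and the paper's point is precisely that different methods yield different stories for the same portfolio.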

Relevance: 30.00%

Abstract:

Purpose – This study aims to provide a review of brownfield policy and the emerging sustainable development agenda in the UK, and to examine the development industry's (both commercial and residential) role and attitudes towards brownfield regeneration and contaminated land.
Design/methodology/approach – The paper analyses results from a two-stage survey of commercial and residential developers carried out in mid-2004, underpinned by structured interviews with 11 developers.
Findings – The results suggest that housebuilding on brownfield is no longer the preserve of specialists and is now widespread throughout the industry in the UK. The redevelopment of contaminated sites for residential use could be threatened by the impact of the EU Landfill Directive. The findings also suggest that developers are not averse to developing on contaminated sites, although post-remediation stigma remains an issue. The market for warranties and insurance continues to evolve.
Research limitations/implications – The survey is based on a sample representing nearly 30 per cent of UK volume housebuilding. Although the smaller developer groups were relatively under-represented in the responses, non-response bias was not found to be a significant issue. More research is needed to assess the way in which developers approach brownfield regeneration at a local level.
Practical implications – The research suggests that clearer Government guidance in the UK is needed on how to integrate concepts of sustainability into brownfield development, and that EU policy, introduced for laudable aims, is creating tensions within the development industry. There may be a shift towards greenfield development in the future as the implications of the Barker Review are felt.
Originality/value – This is a national survey of developers' attitudes towards brownfield development in the UK, following the Barker Review, and highlights key issues in UK and EU policy layers.
Keywords: Brownfield sites, Contamination. Paper type: Research paper.

Relevance: 30.00%

Abstract:

A series of imitation games, involving both three-participant (simultaneous comparison of two hidden entities) and two-participant (direct interrogation of a hidden entity) formats, was conducted at Bletchley Park on the 100th anniversary of Alan Turing's birth: 23 June 2012. From the ongoing analysis of over 150 games involving judges (expert and non-expert, male and female, adult and child), machines and hidden humans (foils for the machines), we present six particular conversations between human judges and a hidden entity that produced unexpected results. From this sample we focus on a feature of Turing's machine intelligence test that the mathematician and code breaker did not consider in his examination of machine thinking: the subjective nature of attributing intelligence to another mind.

Relevance: 30.00%

Abstract:

Background: The auditory brainstem response (ABR) is of fundamental importance to the investigation of auditory system behavior, though its interpretation has a subjective nature because of the manual process employed in its study and the clinical experience required for its analysis. When analyzing the ABR, clinicians are often interested in the identification of ABR signal components referred to as Jewett waves. In particular, the detection and study of the time at which these waves occur (i.e., the wave latency) is a practical tool for the diagnosis of disorders affecting the auditory system. In this context, the aim of this research is to compare ABR manual/visual analyses provided by different examiners. Methods: The ABR data were collected from 10 normal-hearing subjects (5 men and 5 women, aged 20 to 52 years). A total of 160 data samples were analyzed, and a pairwise comparison between four distinct examiners was executed. We carried out a statistical study aiming to identify significant differences between the assessments provided by the examiners, using linear regression in conjunction with bootstrap as a method for evaluating the relation between the responses given by the examiners. Results: The analysis suggests agreement among examiners but reveals differences between assessments of the variability of the waves. We quantified the magnitude of the obtained wave-latency differences: 18% of the investigated waves presented substantial (large or moderate) differences, and of these 3.79% were considered not acceptable for clinical practice. Conclusions: Our results characterize the variability of the manual analysis of ABR data and the necessity of establishing unified standards and protocols for the analysis of these data. These results may also contribute to the validation and development of automatic systems employed in the early diagnosis of hearing loss.
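The linear-regression-with-bootstrap comparison described above can be sketched as follows. The latency data are synthetic and the percentile bootstrap is a basic variant; none of the numbers reflect the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic wave-latency readings (ms) from two examiners on the same
# 40 ABR records -- illustrative data only.
true_latency = rng.uniform(5.0, 7.0, size=40)
examiner_a = true_latency + rng.normal(0, 0.05, size=40)
examiner_b = true_latency + rng.normal(0, 0.05, size=40)

def slope(x, y):
    """Least-squares slope of y regressed on x."""
    x_c, y_c = x - x.mean(), y - y.mean()
    return float(x_c @ y_c / (x_c @ x_c))

# Percentile bootstrap of the regression slope between the two examiners:
# resample record indices with replacement and refit each time.
n = len(examiner_a)
boot = np.empty(2000)
for i in range(2000):
    idx = rng.integers(0, n, size=n)
    boot[i] = slope(examiner_a[idx], examiner_b[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(lo, 3), round(hi, 3))  # 95% CI for the slope; near 1 when examiners agree
```

A slope interval concentrated around 1 indicates that the two examiners' latency readings scale together; systematic disagreement would pull the interval away from 1.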

Relevance: 30.00%

Abstract:

Global NDVI data are routinely derived from the AVHRR, SPOT-VGT, and MODIS/Terra earth observation records for a range of applications from terrestrial vegetation monitoring to climate change modeling. This has led to a substantial interest in the harmonization of multisensor records. Most evaluations of the internal consistency and continuity of global multisensor NDVI products have focused on time-series harmonization in the spectral domain, often neglecting the spatial domain. We fill this void by applying variogram modeling (a) to evaluate the differences in spatial variability between 8-km AVHRR, 1-km SPOT-VGT, and 1-km, 500-m, and 250-m MODIS NDVI products over eight EOS (Earth Observing System) validation sites, and (b) to characterize the decay of spatial variability as a function of pixel size (i.e. data regularization) for spatially aggregated Landsat ETM+ NDVI products and a real multisensor dataset. First, we demonstrate that the conjunctive analysis of two variogram properties – the sill and the mean length scale metric – provides a robust assessment of the differences in spatial variability between multiscale NDVI products that are due to spatial (nominal pixel size, point spread function, and view angle) and non-spatial (sensor calibration, cloud clearing, atmospheric corrections, and length of multi-day compositing period) factors. Next, we show that as the nominal pixel size increases, the decay of spatial information content follows a logarithmic relationship, with a stronger fit for the spatially aggregated NDVI products (R2 = 0.9321) than for the native-resolution AVHRR, SPOT-VGT, and MODIS NDVI products (R2 = 0.5064). This relationship serves as a reference for evaluating the differences in spatial variability and length scales in multiscale datasets at native or aggregated spatial resolutions.
The outcomes of this study suggest that multisensor NDVI records cannot be integrated into a long-term data record without proper consideration of all factors affecting their spatial consistency. Hence, we propose an approach for selecting the spatial resolution, at which differences in spatial variability between NDVI products from multiple sensors are minimized. This approach provides practical guidance for the harmonization of long-term multisensor datasets.
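The two variogram properties used in the study, the sill and a length-scale metric, can be illustrated on a synthetic one-dimensional transect. This simplified sketch uses an empirical semivariogram with crude estimators (variance for the sill, a 95%-of-sill threshold for the length scale), not the paper's fitted variogram models, and works in 1-D rather than on 2-D satellite scenes.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic 1-D "NDVI" transect with spatial structure: a moving
# average of white noise, so nearby samples are correlated.
noise = rng.standard_normal(600)
ndvi = np.convolve(noise, np.ones(15) / 15, mode="valid")

def semivariogram(z, max_lag):
    """Empirical semivariance gamma(h) = 0.5 * E[(z(x+h) - z(x))^2]."""
    return np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2)
                     for h in range(1, max_lag + 1)])

gamma = semivariogram(ndvi, 40)

# Sill: the plateau of the variogram, approximated here by the variance.
sill = float(np.var(ndvi))

# Length scale: first lag where gamma reaches ~95% of the sill
# (a crude stand-in for a fitted-model range parameter).
length_scale = int(np.argmax(gamma >= 0.95 * sill) + 1)

print(round(sill, 4), length_scale)
```

Coarsening the pixel size (aggregating the transect) lowers the sill and stretches the length scale, which is the regularization effect the study quantifies across sensors.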

Relevance: 30.00%

Abstract:

Objective: To introduce a new approach to problem-based learning (PBL) used in the context of a medicinal chemistry practical class for pharmacy students. Design: The described chemistry practical is based on independent studies by small groups of undergraduate students (4-5), who design their own practical work taking relevant professional standards into account. Students are carefully guided by feedback and acquire a set of skills important to their future profession as healthcare professionals. This model has been tailored to the application of PBL in a chemistry practical class setting for a large student cohort (150 students). Assessment: The achievement of learning outcomes is based on the submission of relevant documentation, including a certificate of analysis, in addition to peer assessment. Some of the learning outcomes are also assessed in the final written examination at the end of the academic year. Conclusion: The described design of a novel PBL chemistry laboratory course for pharmacy students has proved successful. Self-reflective learning and engagement with feedback were encouraged, and students enjoyed the challenging learning experience. Skills that are essential for the students' future careers as healthcare professionals are promoted.

Relevance: 30.00%

Abstract:

This paper aims to explain how semiotics and constructivism can collaborate in an educational epistemology by developing a joint approach to prescientific conceptions. Empirical data and findings of constructivist research are interpreted in the light of Peirce's semiotics. Peirce's semiotics is an anti-psychologistic logic (CP 2.252; CP 4.551; W 8:15; Pietarinen in Signs of logic, Springer, Dordrecht, 2006; Stjernfelt in Diagrammatology. An investigation on the borderlines of phenomenology, ontology and semiotics, Springer, Dordrecht, 2007) and a relational logic. Constructivism was traditionally developed within psychology and sociology and, therefore, some incompatibilities can be expected between these two schools. While acknowledging the differences, we explain that constructivism and semiotics share the realist assumption that knowledge can only be developed upon knowledge and that, therefore, an epistemological collaboration is possible. The semiotic analysis performed confirms the constructivist results and provides further insight into the teacher-student relation. Like the constructivist approach, Peirce's doctrine of agapism implies that the personal dimension of teaching must not be ignored. Thus, we argue for the importance of genuine sympathy in teaching attitudes. More broadly, the article also contributes to the development of postmodern humanities. At the end of the modern age, the humanities are passing through a critical period of transformation. There is a growing interest in semiotics and semiotic philosophy in many areas of the humanities. One such case, on which we draw, is the development of a theoretical semiotic approach to education, namely edusemiotics (Stables and Semetsky, Pedagogy and edusemiotics: theoretical challenge/practical opportunities, Sense Publishers, Rotterdam, 2015).

Relevance: 30.00%

Abstract:

Charities need to understand why volunteers choose one brand rather than another in order to attract more volunteers to their organisation. There has been considerable academic interest in understanding why people volunteer in general. However, this research explores the more specific question of why a volunteer chooses one charity brand rather than another. It builds on previous conceptualisations of volunteering as a consumption decision. Seen through the lens of the individual volunteer, it considers the under-researched area of the decision-making process. The research adopts an interpretivist epistemology and a subjectivist ontology. Qualitative data were collected through in-depth interviews and analysed using both Means-End Chain (MEC) and Framework Analysis methodologies. The primary contribution of the research is to theory: understanding the role of brand in the volunteer decision-making process. It identifies two roles for brand. The first is as a specific reason for choice, an 'attribute' of the decision. Through MEC, volunteering for a well-known brand connects directly through to a sense of self, covering both self-respect and social recognition by others. All four components of the symbolic consumption construct are found in the data: volunteers choose a well-known brand to say something about themselves. The brand brings credibility and reassurance, reduces the risk, and enables the volunteer to meet their need to make a difference and achieve a sense of accomplishment. The second, closely related role for brand is within the process of making the volunteering decision. Volunteers built up knowledge about the charity brands from a variety of brand touchpoints, over time. At the point of decision-making, that brand knowledge and engagement becomes relevant, enabling some to make an automatic choice despite the significant level of commitment being made. The research identifies four types of decision-making behaviour.
The research also makes secondary contributions to MEC methodology and to the non-profit context. It concludes with practical implications for management practice and a rich agenda for future research.