975 results for semantic processing
Abstract:
Niklas Luhmann's theory of social systems has been widely influential in German-speaking countries over the past few decades. However, despite its significance, particularly for organization studies, it is only very recently that Luhmann's work has attracted attention on the international stage as well. This Special Issue responds to that development. In this introductory paper, we provide a systematic overview of Luhmann's theory. Reading his work as a theory about distinction-generating and distinction-processing systems, we especially highlight the following aspects: (i) Organizations are processes that come into being by permanently constructing and reconstructing themselves by means of distinctions, which mark what is part of their realm and what is not. (ii) Such an organizational process belongs to a social sphere sui generis possessing its own logic, which cannot be traced back to human actors or subjects. (iii) Organizations are a specific kind of social process characterized by a specific kind of distinction: decision, which makes up what is specifically organizational about organizations as social phenomena. We conclude by introducing the papers in this Special Issue. Copyright © 2006 SAGE.
Abstract:
In Chapter 10, Adam and Dougherty describe the application of medical image processing to the assessment and treatment of spinal deformity, with a focus on the surgical treatment of idiopathic scoliosis. The natural history of spinal deformity and current approaches to surgical and non-surgical treatment are briefly described, followed by an overview of current clinically used imaging modalities. The key metrics currently used to assess the severity and progression of spinal deformities from medical images are presented, followed by a discussion of the errors and uncertainties involved in manual measurements. This provides the context for an analysis of automated and semi-automated image processing approaches to measure spinal curve shape and severity in two and three dimensions.
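The chapter's specific measurement algorithms are not reproduced here, but as a rough illustration of the kind of curve-severity metric it discusses, the sketch below computes a Cobb-style angle (the standard two-dimensional measure of scoliosis severity) between two vertebral endplate lines; the landmark coordinates and helper names are invented for demonstration.

```python
# Illustrative sketch only: a Cobb-style angle between the endplates of the upper and
# lower end vertebrae, the standard 2D measure of scoliosis severity. Landmark
# coordinates below are invented for demonstration, not taken from the chapter.
import math

def cobb_angle(endplate_a, endplate_b):
    """Return the angle in degrees between two endplate lines, each ((x1, y1), (x2, y2))."""
    def line_angle(p, q):
        return math.atan2(q[1] - p[1], q[0] - p[0])
    diff = abs(math.degrees(line_angle(*endplate_a) - line_angle(*endplate_b))) % 180.0
    return min(diff, 180.0 - diff)

# Hypothetical endplate landmarks (image pixel coordinates) for the two most-tilted vertebrae.
upper_endplate = ((102.0, 310.0), (158.0, 298.0))
lower_endplate = ((110.0, 512.0), (166.0, 534.0))
print(f"Cobb angle is approximately {cobb_angle(upper_endplate, lower_endplate):.1f} degrees")
```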
Abstract:
For more than a decade, research in the field of context-aware computing has aimed to find ways to exploit situational information that can be detected by mobile computing and sensor technologies. The goal is to provide people with new and improved applications, enhanced functionality and a better user experience (Dey, 2001). Early applications focused on representing or computing on physical parameters, such as showing your location and the location of people or things around you. Such applications might show where the next bus is, which of your friends are in the vicinity, and so on. With the advent of social networking software, microblogging sites such as Facebook and Twitter, recommender systems and so on, context-aware computing is moving towards mining the social web in order to provide better representations and understanding of context, including social context. In this paper we begin by recapping different theoretical framings of context. We then discuss the problem of context-aware computing from a design perspective.
Abstract:
The present study used ERPs to compare the processing of fear-relevant (FR) animals (snakes and spiders) with that of non-fear-relevant (NFR) animals similar in appearance (worms and beetles). EEG was recorded from 18 undergraduate participants (10 female) as they completed two animal-viewing tasks that required simple categorization decisions. Participants were divided on a post hoc basis into low and high snake/spider fear groups. Overall, FR animals were rated higher on fear and elicited a larger LPC. However, individual differences qualified these effects. Participants in the low fear group showed clear differentiation between FR and NFR animals in subjective ratings of fear and in LPC modulation. In contrast, participants in the high fear group did not show such differentiation between FR and NFR animals. These findings suggest that the salience of feared FR animals may generalize, at both a behavioural and an electro-cortical level, to other animals of similar appearance but of a non-harmful nature.
Abstract:
Contamination of packaged foods by micro-organisms entering through air leaks can cause serious public health issues and cost companies large sums of money through product recalls, compensation claims and loss of market share. The main source of contamination is leaks in packaging, which allow air, moisture and micro-organisms to enter the package. In the food processing and packaging industry worldwide, there is increasing demand for cost-effective, state-of-the-art inspection technologies capable of reliably detecting leaky seals and delivering products at six-sigma levels; the cost of leaky packages to Australian food industries is estimated at close to AUD $35 million per year. This project develops non-destructive testing technology using digital imaging and sensing, combined with a differential vacuum technique, to assess the seal integrity of food packages on a high-speed production line.

Flexible plastic packages are widely used and are the least expensive means of retaining product quality. They can be sealed to maximise the shelf life of both dry and moist products. The seals of food packages need to be airtight so that the contents are not contaminated by micro-organisms entering through air leaks; airtight seals also extend shelf life, and manufacturers attempt to prevent products with leaky seals from reaching consumers. Many current non-destructive testing (NDT) methods for checking the seals of flexible packages are best suited to random sampling and laboratory use. The three most common are vacuum/pressure decay, the bubble test and helium leak detection. Although these methods can detect very fine leaks, their long processing times make them unsuitable for a production line. Two non-destructive in-line packaging inspection machines are currently available and are discussed in the literature review.

The detailed design and development of the High-Speed Sensing and Detection System (HSDS) is the core requirement of this project and of the future prototype and production units. Laboratory testing was completed successfully, and a methodical design procedure was needed to arrive at a workable concept. The mechanical tests confirmed the vacuum hypothesis and seal integrity with consistent results, and the electrical testing likewise provided solid results, giving the researcher confidence to move the project forward. Laboratory design testing allowed theoretical assumptions to be confirmed before the detailed design phase. Discussion of alternative concepts in both the mechanical and electrical disciplines enabled an informed decision, and each major mechanical and electrical component is detailed through the research and design process. The design procedure works methodically through the major functions from both a mechanical and an electrical perspective. It also canvasses alternative ideas for the major components which, although not always practical in this application, show that the engineering and functional options were exhausted. Further concepts were then designed and developed for the entire HSDS unit based on earlier practice and theory. It is envisaged that both the prototype and production versions of the HSDS would use standard, locally manufactured and distributed industry components. Future research and testing of the prototype could result in a successful trial unit being incorporated into a working food processing production environment. Recommendations and future work are discussed, along with options in other food processing and packaging disciplines and in non-food processing industries.
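As a rough illustration of the vacuum-decay principle underlying the differential vacuum technique described above (a sketch under assumed thresholds and sampling values, not the HSDS control logic), the snippet below flags a package as leaky when chamber pressure rises faster than an allowed rate during the test hold.

```python
# Illustrative sketch only: a vacuum-decay style seal check. A leaking seal lets air
# escape from the package into the evacuated chamber, so chamber pressure rises.
# The threshold, sample rate and pressure trace below are assumed values.

def is_leaky(pressure_kpa, sample_interval_s=0.05, max_rise_kpa_per_s=0.2):
    """Return True if chamber pressure rises faster than the allowed rate during the hold."""
    rise = pressure_kpa[-1] - pressure_kpa[0]
    duration = sample_interval_s * (len(pressure_kpa) - 1)
    return (rise / duration) > max_rise_kpa_per_s

# Hypothetical chamber pressure trace (kPa absolute) sampled over a 0.5 s hold period.
trace = [20.0, 20.1, 20.3, 20.6, 20.9, 21.2, 21.6, 22.0, 22.4, 22.9, 23.3]
print("reject: leaky seal" if is_leaky(trace) else "pass: seal intact")
```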
Abstract:
Background: When observers are asked to identify two targets in rapid sequence, they often suffer profound performance deficits for the second target, even when the spatial location of the targets is known. This attentional blink (AB) is usually attributed to the time required to process a previous target, implying that a link should exist between individual differences in information-processing speed and the AB. Methodology/Principal Findings: The present work investigated this question by examining the relationship between a rapid automatized naming task typically used to assess information-processing speed and the magnitude of the AB. The results indicated that faster processing actually resulted in a greater AB, but only when targets were presented amongst high-similarity distractors. When target-distractor similarity was minimal, processing speed was unrelated to the AB. Conclusions/Significance: Our findings indicate that information-processing speed is unrelated to target-processing efficiency per se, but rather to individual differences in observers' ability to suppress distractors. This is consistent with evidence that individuals who are able to avoid distraction are more efficient at deploying temporal attention, but argues against a direct link between general processing speed and efficient information selection.
Abstract:
The emergence of semantic technologies that deal with the underlying meaning of things, rather than a purely syntactic representation, has led to new developments in various fields, including business process modeling. Inspired by artificial intelligence research, technologies for semantic Web services have been proposed and extended to process modeling. However, the applicability of semantic Web services to semantic business processes is limited, because business processes encompass broader business requirements than Web services do. In particular, processes are concerned with the composition of tasks, that is, the order in which activities are carried out, regardless of their implementation details; the resources assigned to carry out tasks, such as machinery, people, and goods; data exchange; and security and compliance concerns.
Abstract:
Using Gray and McNaughton's (2000) revised Reinforcement Sensitivity Theory (r-RST), we examined the influence of personality on the processing of words presented in gain-framed and loss-framed anti-speeding messages, and how the processing biases associated with personality influenced message acceptance. The r-RST posits that the nervous system regulates personality and that behaviour depends on the activation of the Behavioural Activation System (BAS), activated by reward cues, and the Fight-Flight-Freeze System (FFFS), activated by punishment cues. According to r-RST, individuals differ in the sensitivities of their BAS and FFFS (i.e., weak to strong), which in turn leads to stable patterns of behaviour in the presence of rewards and punishments, respectively. It was hypothesised that individual differences in personality (i.e., strength of the BAS and the FFFS) would influence the degree of both message processing (as measured by reaction time to previously viewed message words) and message acceptance (measured in three ways: perceived message effectiveness, behavioural intentions, and attitudes). Specifically, it was anticipated that individuals with a stronger BAS would process the words presented in the gain-framed messages faster than those with a weaker BAS, and that individuals with a stronger FFFS would process the words presented in the loss-framed messages faster than those with a weaker FFFS. Further, it was expected that greater processing (faster reaction times) would be associated with greater acceptance of that message. Driver licence-holding students (N = 108) were recruited to view one of four anti-speeding messages (i.e., social gain-frame, social loss-frame, physical gain-frame, and physical loss-frame). A computerised lexical decision task assessed participants' subsequent reaction times to message words as an indicator of the extent of processing of the previously viewed message. Self-report measures assessed personality and the three message acceptance measures. As predicted, the degree of initial processing of the content of the social gain-framed message mediated the relationship between the reward sensitivity trait and message effectiveness. Initial processing of the physical loss-framed message partially mediated the relationship between the punishment sensitivity trait and both message effectiveness and behavioural intention ratings. These results show that reward sensitivity and punishment sensitivity traits influence cognitive processing of gain-framed and loss-framed message content, respectively, and subsequently message effectiveness and behavioural intention ratings. Accordingly, a range of road safety messages (i.e., gain-framed and loss-framed messages) could be designed to align with the processing biases associated with personality, targeting individuals who are sensitive to rewards and those who are sensitive to punishments.
Abstract:
Electronic services are a leitmotif in 'hot' topics like Software as a Service, Service Oriented Architecture (SOA), service-oriented computing, cloud computing, application markets and smart devices. We propose to consider these within what has been termed the Service Ecosystem (SES). The SES encompasses all levels of electronic services and their interaction, with human consumption and initiation on its periphery, in much the same way that the 'Web' describes a plethora of technologies that ultimately connect information and expose it to humans. Presently, the SES is heterogeneous, fragmented and confined to semi-closed systems. A key issue hampering the emergence of an integrated SES is Service Discovery (SD). An SES will be dynamic, with areas of structured and unstructured information within which service providers and 'lay' human consumers interact; until now the two have been disjointed, e.g., SOA-enabled organisations, industries and domains are choreographed by domain experts or 'hard-wired' to smart-device application markets and web applications. In an SES, services are accessible, comparable and exchangeable by human consumers, closing the gap to the providers. This requires a new form of SD with which humans can discover services transparently and effectively, without special knowledge or training. We propose two modes of discovery: directed search, which follows an agenda, and explorative search, which speculatively expands knowledge of an area of interest by means of categories. Inspired by conceptual space theory from cognitive science, we propose to implement these modes of discovery using concepts to map a lay consumer's service need to terminologically sophisticated descriptions of services. To this end, we reframe SD as an information retrieval task over the information attached to services, such as descriptions, reviews, documentation and web sites - the Service Information Shadow. The Semantic Space model transforms the shadow's unstructured semantic information into a geometric, concept-like representation. We introduce an improved and extended Semantic Space that includes categorization, calling it the Semantic Service Discovery model. We evaluate our model on a highly relevant, service-related corpus simulating a Service Information Shadow, including manually constructed complex service agendas as well as manual groupings of services. We compare our model against state-of-the-art information retrieval systems and clustering algorithms. Through an extensive series of empirical evaluations, we establish optimal parameter settings for the semantic space model. The evaluations demonstrate the model's effectiveness for SD in terms of retrieval precision over state-of-the-art information retrieval models (directed search) and the meaningful, automatic categorization of service-related information, which shows potential to form the basis of a useful, cognitively motivated map of the SES for exploratory search.
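To make the idea of a geometric, concept-like representation more concrete, here is a minimal sketch of an LSA-style semantic space over service descriptions, queried by cosine similarity for directed search. It stands in for, rather than reproduces, the thesis's Semantic Space and Semantic Service Discovery models; the service texts, names and parameter choices are assumptions for illustration.

```python
# Minimal sketch: an LSA-style semantic space over a toy "Service Information Shadow",
# queried by cosine similarity (directed search). Not the thesis's actual model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical service descriptions (the information "shadow" attached to services).
services = {
    "bus-timetable": "real-time bus arrival times and route planning for your city",
    "photo-backup": "automatic cloud backup and sharing of photos from smart devices",
    "tax-helper": "guided preparation and electronic lodgement of personal tax returns",
}
names, docs = zip(*services.items())

vectorizer = TfidfVectorizer(stop_words="english")   # term space with TF-IDF weights
term_matrix = vectorizer.fit_transform(docs)
svd = TruncatedSVD(n_components=2, random_state=0)   # reduce to a small concept space
concept_matrix = svd.fit_transform(term_matrix)

def directed_search(query):
    """Rank services by cosine similarity between the query and each service in concept space."""
    q = svd.transform(vectorizer.transform([query]))
    scores = cosine_similarity(q, concept_matrix)[0]
    return sorted(zip(names, scores), key=lambda pair: pair[1], reverse=True)

for name, score in directed_search("when does my bus arrive"):
    print(f"{name}: {score:.3f}")
```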
Abstract:
Background: This paper presents a novel approach to searching electronic medical records that is based on concept matching rather than keyword matching. Aim: The concept-based approach is intended to overcome specific challenges we identified in searching medical records. Method: Queries and documents were transformed from their term-based originals into medical concepts as defined by the SNOMED-CT ontology. Results: Evaluation on a real-world collection of medical records showed our concept-based approach outperformed a keyword baseline by 25% in Mean Average Precision. Conclusion: The concept-based approach provides a framework for further development of inference-based search systems for dealing with medical data.
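For reference, the 25% improvement quoted above is in Mean Average Precision (MAP); the sketch below shows how MAP is computed from ranked result lists and binary relevance judgements. The queries and judgements are invented for illustration and are not from the paper's collection.

```python
# Illustrative sketch: Mean Average Precision (MAP), the effectiveness measure quoted above.
# The ranked runs and relevance judgements below are invented for demonstration.

def average_precision(ranking, relevant):
    """Mean of the precision values at each rank where a relevant document is retrieved."""
    hits, precisions = 0, []
    for rank, doc_id in enumerate(ranking, start=1):
        if doc_id in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(runs, qrels):
    """MAP: the mean of per-query average precision over all judged queries."""
    return sum(average_precision(runs[q], qrels[q]) for q in qrels) / len(qrels)

runs = {"q1": ["d3", "d1", "d7", "d2"], "q2": ["d5", "d4", "d9"]}   # ranked results per query
qrels = {"q1": {"d1", "d2"}, "q2": {"d4"}}                          # relevant documents per query
print(f"MAP = {mean_average_precision(runs, qrels):.3f}")
```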
Abstract:
Studies of orthographic skills transfer between languages focus mostly on working memory (WM) ability in alphabetic first language (L1) speakers when learning another, often alphabetically congruent, language. We report two studies that, instead, explored the transferability of L1 orthographic processing skills in WM in logographic-L1 and alphabetic-L1 speakers. English-French bilingual and English monolingual (alphabetic-L1) speakers, and Chinese-English (logographic-L1) speakers, learned a set of artificial logographs and associated meanings (Study 1). The logographs were used in WM tasks with and without concurrent articulatory or visuo-spatial suppression. The logographic-L1 bilinguals were markedly less affected by articulatory suppression than alphabetic-L1 monolinguals (who did not differ from their bilingual peers). Bilinguals overall were less affected by spatial interference, reflecting superior phonological processing skills or, conceivably, greater executive control. A comparison of span sizes for meaningful and meaningless logographs (Study 2) replicated these findings. However, the logographic-L1 bilinguals’ spans in L1 were measurably greater than those of their alphabetic-L1 (bilingual and monolingual) peers; a finding unaccounted for by faster articulation rates or differences in general intelligence. The overall pattern of results suggests an advantage (possibly perceptual) for logographic-L1 speakers, over and above the bilingual advantage also seen elsewhere in third language (L3) acquisition.