899 results for Image recognition and processing
Abstract:
[ 1] There has been a paucity of information on trends in daily climate and climate extremes, especially from developing countries. We report the results of the analysis of daily temperature (maximum and minimum) and precipitation data from 14 south and west African countries over the period 1961-2000. Data were subjected to quality control and processed into indices of climate extremes for release to the global community. Temperature extremes show patterns consistent with warming over most of the regions analyzed, with a large proportion of stations showing statistically significant trends for all temperature indices. Over 1961 to 2000, the regionally averaged occurrence of extreme cold (fifth percentile) days and nights has decreased by 3.7 and 6.0 days/decade, respectively. Over the same period, the occurrence of extreme hot (95th percentile) days and nights has increased by 8.2 and 8.6 days/decade, respectively. The average duration of warm (cold) spells has increased (decreased) by 2.4 (0.5) days/decade. Overall, it appears that the hot tails of the distributions of daily maximum temperature have changed more than the cold tails; for minimum temperatures, hot tails show greater changes in the NW of the region, while cold tails have changed more in the SE and east. The diurnal temperature range (DTR) does not exhibit a consistent trend across the region, with many neighboring stations showing opposite trends. However, the DTR shows consistent increases in a zone across Namibia, Botswana, Zambia, and Mozambique, coinciding with more rapid increases in maximum temperature than minimum temperature extremes. Most precipitation indices do not exhibit consistent or statistically significant trends across the region. Regionally averaged total precipitation has decreased, but the trend is not statistically significant. At the same time, there has been a statistically significant increase in regionally averaged daily rainfall intensity and dry spell duration. While the majority of stations also show increasing trends for these two indices, only a few of these are statistically significant. There are increasing trends in regionally averaged rainfall on extreme precipitation days and in maximum annual 5-day and 1-day rainfall, but only trends for the latter are statistically significant.
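The percentile-based indices described above (e.g., annual counts of days exceeding the 95th percentile of daily maximum temperature, with trends expressed in days/decade) can be illustrated with a brief sketch. The function names and the synthetic data below are ours, not the authors' processing code; a real analysis would use the quality-controlled 1961-2000 station records described in the abstract.

```python
import numpy as np

def hot_day_index(daily_tmax, years, base_years):
    """Annual count of days whose maximum temperature exceeds the 95th
    percentile, with the percentile threshold taken over a base period."""
    threshold = np.percentile(daily_tmax[np.isin(years, base_years)], 95)
    year_list = np.unique(years)
    counts = np.array([(daily_tmax[years == y] > threshold).sum() for y in year_list])
    return year_list, counts

def trend_per_decade(year_list, counts):
    """Least-squares slope of the annual index, scaled to days/decade."""
    slope, _intercept = np.polyfit(year_list, counts, 1)
    return 10.0 * slope

# Illustrative run on synthetic daily maxima with a small imposed warming trend.
rng = np.random.default_rng(0)
years = np.repeat(np.arange(1961, 2001), 365)
tmax = 25 + 5 * rng.standard_normal(years.size) + 0.02 * (years - 1961)
yr, counts = hot_day_index(tmax, years, base_years=np.arange(1961, 1991))
print(f"hot-day occurrence trend: {trend_per_decade(yr, counts):+.1f} days/decade")
```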
Abstract:
Three experiments examine whether simple pair-wise comparison judgments, involving the “recognition heuristic” (Goldstein & Gigerenzer, 2002), are sensitive to implicit cues to the nature of the comparison required. Experiments 1 & 2 show that participants frequently choose the recognized option of a pair if asked to make “larger” judgments but are significantly less likely to choose the unrecognized option when asked to make “smaller” judgments. Experiment 3 demonstrates that, overall, participants consider recognition to be a more reliable guide to judgments of a magnitude criterion than lack of recognition and that this intuition drives the framing effect. These results support the idea that, when making pair-wise comparison judgments, inferring that the recognized item is large is simpler than inferring that the unrecognized item is small.
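As a point of reference, the recognition heuristic itself reduces to a one-line decision rule. The sketch below is ours and simply formalizes that rule under the two question framings; it does not reproduce the authors' experimental manipulation or results.

```python
def recognition_heuristic_choice(item_a, item_b, recognized, frame="larger"):
    """Pair-wise comparison using recognition as the only cue.

    recognized: set of items the decision maker recognizes.
    frame: "larger"  -> pick the recognized item as the larger one;
           "smaller" -> pick the unrecognized item as the smaller one.
    Returns the chosen item, or None when recognition does not discriminate.
    """
    a_known, b_known = item_a in recognized, item_b in recognized
    if a_known == b_known:          # both or neither recognized: heuristic is silent
        return None
    known, unknown = (item_a, item_b) if a_known else (item_b, item_a)
    return known if frame == "larger" else unknown

# Example: judging which of two cities is larger / smaller.
cities_known = {"Berlin", "Madrid"}
print(recognition_heuristic_choice("Berlin", "Gaevle", cities_known, "larger"))   # Berlin
print(recognition_heuristic_choice("Berlin", "Gaevle", cities_known, "smaller"))  # Gaevle
```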
Abstract:
Inferences consistent with “recognition-based” decision-making may be drawn for various reasons other than recognition alone. We demonstrate that, for 2-alternative forced-choice decision tasks, less-is-more effects (reduced performance with additional learning) are not restricted to recognition-based inference but can also be seen in circumstances where inference is knowledge-based but item knowledge is limited. One reason why such effects may not be observed more widely is the dependence of the effect on specific values for the validity of recognition and knowledge cues. We show that both recognition and knowledge validity may vary as a function of the number of items recognized. The implications of these findings for the special nature of recognition information, and for the investigation of recognition-based inference, are discussed.
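The dependence on cue validities referred to here is usually made concrete with the expected-accuracy formula of Goldstein and Gigerenzer (2002), in which recognition validity alpha and knowledge validity beta are treated as constants; the abstract's point is that they may instead vary with the number of items recognized. A minimal sketch with illustrative parameter values:

```python
def expected_accuracy(n, N, alpha, beta):
    """Expected proportion of correct pair-wise choices when n of N items are
    recognized, with recognition validity alpha, knowledge validity beta, and
    guessing when neither item is recognized (Goldstein & Gigerenzer, 2002).
    Here alpha and beta are held constant, unlike in the abstract's analysis."""
    pairs = N * (N - 1)
    p_one = 2 * n * (N - n) / pairs          # exactly one item recognized
    p_both = n * (n - 1) / pairs             # both items recognized
    p_none = (N - n) * (N - n - 1) / pairs   # neither item recognized
    return p_one * alpha + p_both * beta + p_none * 0.5

# Less-is-more effect: with alpha > beta, accuracy peaks before all items are learned.
N = 100
accuracy = [expected_accuracy(n, N, alpha=0.8, beta=0.6) for n in range(N + 1)]
best_n = max(range(N + 1), key=lambda n: accuracy[n])
print(best_n, round(accuracy[best_n], 3), accuracy[N])   # best_n < N; accuracy[N] == beta
```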
Abstract:
The development of high-throughput techniques ('chip' technology) for measurement of gene expression and gene polymorphisms (genomics), and techniques for measuring global protein expression (proteomics) and metabolite profile (metabolomics), are revolutionising life science research, including research in human nutrition. In particular, the ability to undertake large-scale genotyping and to identify gene polymorphisms that determine risk of chronic disease (candidate genes) could enable definition of an individual's risk at an early age. However, the search for candidate genes has proven to be more complex, and their identification more elusive, than previously thought. This is largely due to the fact that much of the variability in risk results from interactions between the genome and environmental exposures. Whilst the former is now very well defined via the Human Genome Project, the latter (e.g. diet, toxins, physical activity) are poorly characterised, resulting in an inability to account for their confounding effects in most large-scale candidate gene studies. The polygenic nature of most chronic diseases adds further complexity, requiring very large studies to disentangle the relatively weak impacts of large numbers of potential 'risk' genes. The efficacy of diet as a preventative strategy could also be considerably increased by better information concerning gene polymorphisms that determine variability in responsiveness to specific diet and nutrient changes. Much of the limited available data are based on retrospective genotyping using stored samples from previously conducted intervention trials. Prospective studies are now needed to provide data that can be used as the basis for provision of individualised dietary advice and development of food products that optimise disease prevention. Application of the new technologies in nutrition research offers considerable potential for development of new knowledge and could greatly advance the role of diet as a preventative disease strategy in the 21st century. Given the potential economic and social benefits offered, funding for research in this area needs greater recognition, and a stronger strategic focus, than is presently the case. Application of genomics in human health offers considerable ethical and societal as well as scientific challenges. Such issues are more likely to be resolved by the economic determinants of health care provision than by scientific developments or altruistic concerns for human health.
Abstract:
Attitudes to floristics have changed considerably during the past few decades as a result of increasing and often more focused consumer demands, heightened awareness of the threats to biodiversity, information flow and overload, and the application of electronic and web-based techniques to information handling and processing. This paper will examine these concerns in relation to our floristic knowledge and needs in the region of SW Asia. Particular reference will be made to the experience gained from the Euro+Med PlantBase project for the preparation of an electronic plant-information system for Europe and the Mediterranean, with a single core list of accepted plant names and synonyms, based on consensus taxonomy agreed by a specialist network. The many challenges (scientific, technical and organisational) that it has presented will be discussed, as well as the problems of handling non-taxonomic information from fields such as conservation, karyology, biosystematics and mapping. The question of regional cooperation and the sharing of efforts and resources will also be raised and attention drawn to the recent planning workshop held in Rabat (May 2002) for establishing a technical cooperation network for taxonomic capacity building in North Africa as a possible model for the SW Asia region.
Abstract:
The emergence of the mechanical bond during the past 25 years is giving chemistry a fillip in more ways than one. While its arrival on the scene is already impacting materials science and molecular nanotechnology, it is providing a new lease of life to chemical synthesis, where mechanical bond formation occurs as a consequence of the all-important templation orchestrated by molecular recognition and self-assembly. The way in which covalent bond formation activates noncovalent bonding interactions, switching on molecular recognition that leads to self-assembly, and the template-directed synthesis of mechanically interlocked molecules (of which the so-called catenanes and rotaxanes may be regarded as the prototypes) has introduced a level of integration into chemical synthesis that has not previously been attained jointly at the supramolecular and molecular levels. The challenge now is to carry this level of integration during molecular synthesis beyond relatively small molecules into the realms of precisely functionalized extended molecular structures and superstructures that perform functions in a collective manner as the key sources of instruction, activation, and performance in multi-component integrated circuits and devices. These forays into organic chemistry by a scientific nomad are traced through thick and thin from the Athens of the North to the Windy City by Lake Michigan with interludes on the edge of the Canadian Shield beside Lake Ontario, in the Socialist Republic of South Yorkshire, on the Plains of Cheshire beside the Wirral, in the Midlands in the Heartland of Albion, and in the City of Angels beside the Peaceful Sea. (C) 2008 Elsevier Ltd. All rights reserved.
Abstract:
Glucosinolates (GLSs) are found in Brassica vegetables. Examples of these sources include cabbage, Brussels sprouts, broccoli, cauliflower and various root vegetables (e.g. radish and turnip). A number of epidemiological studies have identified an inverse association between consumption of these vegetables and the risk of colon and rectal cancer. Animal studies have shown changes in enzyme activities and DNA damage resulting from consumption of Brassica vegetables or isothiocyanates, the breakdown products (BDP) of GLSs in the body. Mechanistic studies have begun to identify the ways in which these compounds may exert their protective action, but the relevance of these studies to protective effects in the human alimentary tract is as yet unproven. In vitro studies with a number of specific isothiocyanates have suggested mechanisms that might be the basis of their chemoprotective effects. The concentration and composition of GLSs vary greatly not only between different plants but also within a plant (e.g. between the seeds, roots and leaves), and also change during plant development. Furthermore, the effects of various factors in the supply chain of Brassica vegetables, including breeding, cultivation, storage and processing, on the intake and bioavailability of GLSs are extensively discussed in this paper.
Abstract:
A combined mathematical model for predicting heat penetration and microbial inactivation in a solid body heated by conduction was tested experimentally by inoculating agar cylinders with Salmonella typhimurium or Enterococcus faecium and heating in a water bath. Regions of growth where bacteria had survived after heating were measured by image analysis and compared with model predictions. Visualisation of the regions of growth was improved by incorporating chromogenic metabolic indicators into the agar. Preliminary tests established that the model performed satisfactorily with both test organisms and with cylinders of different diameter. The model was then used in simulation studies in which the parameters D, z, inoculum size, cylinder diameter and heating temperature were systematically varied. These simulations showed that the biological variables D, z and inoculum size had a relatively small effect on the time needed to eliminate bacteria at the cylinder axis, whereas the physical variables heating temperature and cylinder diameter had a much greater effect. (c) 2005 Elsevier B.V. All rights reserved.
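For context, the D and z parameters varied in these simulations come from the classical first-order thermal-death model, in which the decimal reduction time D depends on temperature through the z-value. The sketch below accumulates log reductions along an assumed axis-temperature history; the authors' combined model also solves the conduction heating itself, which is not reproduced here, and all parameter values are illustrative.

```python
import numpy as np

def log_reductions(times, temps, D_ref, z, T_ref):
    """Cumulative decimal (log10) reductions along a temperature history,
    using the classical model D(T) = D_ref * 10**((T_ref - T) / z).
    times in minutes, temps in deg C, D_ref in minutes at T_ref."""
    rate = 1.0 / (D_ref * 10.0 ** ((T_ref - temps) / z))         # log10 reductions per minute
    increments = np.diff(times) * 0.5 * (rate[1:] + rate[:-1])   # trapezoidal rule
    return np.concatenate(([0.0], np.cumsum(increments)))

def time_to_eliminate(times, temps, D_ref, z, T_ref, log_N0):
    """First time at which the reductions exceed the initial log10 inoculum."""
    cum = log_reductions(times, temps, D_ref, z, T_ref)
    hit = np.nonzero(cum >= log_N0)[0]
    return times[hit[0]] if hit.size else None

# Assumed axis-temperature history for a conduction-heated cylinder in a 70 C
# water bath (exponential approach; the time constant is purely illustrative).
t = np.linspace(0.0, 60.0, 601)                       # minutes
T_axis = 20.0 + 50.0 * (1.0 - np.exp(-t / 12.0))      # deg C
print(time_to_eliminate(t, T_axis, D_ref=1.0, z=5.0, T_ref=60.0, log_N0=6.0))
```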
Abstract:
The artificial grammar (AG) learning literature (see, e.g., Mathews et al., 1989; Reber, 1967) has relied heavily on a single measure of implicitly acquired knowledge. Recent work comparing this measure (string classification) with a more indirect measure in which participants make liking ratings of novel stimuli (e.g., Manza & Bornstein, 1995; Newell & Bright, 2001) has shown that string classification (which we argue can be thought of as an explicit, rather than an implicit, measure of memory) gives rise to more explicit knowledge of the grammatical structure in learning strings and is more resilient to changes in surface features and processing between encoding and retrieval. We report data from two experiments that extend these findings. In Experiment 1, we showed that a divided attention manipulation (at retrieval) interfered with explicit retrieval of AG knowledge but did not interfere with implicit retrieval. In Experiment 2, we showed that forcing participants to respond within a very tight deadline resulted in the same asymmetric interference pattern between the tasks. In both experiments, we also showed that the type of information being retrieved influenced whether interference was observed. The results are discussed in terms of the relatively automatic nature of implicit retrieval and also with respect to the differences between analytic and nonanalytic processing (Whittlesea & Price, 2001).
Abstract:
To-be-enacted material is more accessible in tests of recognition and lexical decision than material not intended for action (T. Goschke & J. Kuhl, 1993; R. L. Marsh, J. L. Hicks, & M. L. Bink, 1998). This finding has been attributed to the superior status of intention-related information. The current article explores an alternative (action-superiority) account that draws parallels between the intended enactment effect (IEE) and the subject-performed task effect. Using 2 paradigms, the authors observed faster recognition latencies for both enacted and to-be-enacted material. It is crucial to note that there was no evidence of an IEE for items that had already been executed during encoding. The IEE was also eliminated when motor processing was prevented after verbal encoding. These findings suggest an overlap between overt and intended enactment and indicate that motor information may be activated for verbal material in preparation for subsequent execution.
Abstract:
How can a bridge be built between autonomic computing approaches and parallel computing systems? How can autonomic computing approaches be extended towards building reliable systems? How can existing technologies be merged to provide a solution for self-managing systems? The work reported in this paper aims to answer these questions by proposing Swarm-Array Computing, a novel technique inspired by swarm robotics and built on the foundations of autonomic and parallel computing paradigms. Two approaches based on intelligent cores and intelligent agents are proposed to achieve autonomy in parallel computing systems. The feasibility of the proposed approaches is validated on a multi-agent simulator.
Abstract:
Semiotics is the study of signs. Application of semiotics in information systems design is based on the notion that information systems are organizations within which agents deploy signs in the form of actions according to a set of norms. An analysis of the relationships among the agents, their actions and the norms would give a better specification of the system. Distributed multimedia systems (DMMS) could be viewed as systems consisting of many dynamic, self-controlled normative agents engaging in complex interaction and the processing of multimedia information. This paper reports work on applying the semiotic approach to the design and modeling of DMMS, with emphasis on using semantic analysis under the semiotic framework. A semantic model of DMMS describing various components and their ontological dependencies is presented, which then serves as a design model and is implemented in a semantic database. Benefits of using the semantic database are discussed with reference to various design scenarios.
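A toy rendering of the entities named in this semiotic view (agents, the actions they perform, the norms that govern them, and ontological dependencies between components) might look like the following; all class and field names are illustrative and are not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Norm:
    condition: str                 # "whenever <condition> holds ..."
    responsible_agent: str         # "... this agent ..."
    action: str                    # "... is expected to perform this action"
    deontic_operator: str = "obliged"   # obliged / permitted / prohibited

@dataclass
class Agent:
    name: str
    actions: list = field(default_factory=list)      # signs deployed as actions
    depends_on: list = field(default_factory=list)   # ontological dependencies

# Illustrative fragment of a distributed multimedia system model.
stream_server = Agent("stream_server", actions=["encode", "transmit"])
viewer = Agent("viewer", actions=["request_stream", "render"], depends_on=[stream_server])
norms = [Norm("viewer requests a stream", "stream_server", "transmit")]
print(norms[0], viewer.depends_on[0].name)
```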
Abstract:
This paper describes a real-time multi-camera surveillance system that can be applied to a range of application domains. This integrated system is designed to observe crowded scenes and has mechanisms to improve tracking of objects that are in close proximity. The four component modules described in this paper are (i) motion detection using a layered background model, (ii) object tracking based on local appearance, (iii) hierarchical object recognition, and (iv) fused multisensor object tracking using multiple features and geometric constraints. This integrated approach to complex scene tracking is validated against a number of representative real-world scenarios to show that robust, real-time analysis can be performed. Copyright (C) 2007 Hindawi Publishing Corporation. All rights reserved.
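The four modules listed in this abstract form a per-camera pipeline followed by a cross-camera fusion step. The skeleton below shows one plausible way to wire them together; the class and method names are placeholders, not the system's actual API.

```python
class SurveillancePipeline:
    """Skeleton of the four-stage multi-camera tracking pipeline described above."""

    def __init__(self, background_model, tracker, recognizer, fusion):
        self.background_model = background_model  # (i) layered background model
        self.tracker = tracker                    # (ii) local-appearance tracking
        self.recognizer = recognizer              # (iii) hierarchical object recognition
        self.fusion = fusion                      # (iv) multi-sensor fusion

    def process_frame(self, camera_id, frame):
        """Single-camera path: detect moving regions, update tracks, label them."""
        regions = self.background_model.detect_motion(camera_id, frame)
        tracks = self.tracker.update(camera_id, frame, regions)
        return [(track, self.recognizer.classify(frame, track)) for track in tracks]

    def fuse(self, per_camera_tracks):
        """Combine single-camera tracks using shared appearance features and the
        geometric constraints between calibrated camera views."""
        return self.fusion.associate(per_camera_tracks)
```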
Abstract:
Purpose: The purpose of this paper is to address pattern formation, a classic problem identified by researchers in the area of swarm robotic systems; the work is also motivated by the need for mathematical foundations in swarm systems. Design/methodology/approach: The work is separated into inspirations, applications, definitions, challenges and classifications of pattern formation in swarm systems based on recent literature. Further, the work proposes a mathematical model for swarm pattern formation and transformation. Findings: A swarm pattern formation model based on mathematical foundations and macroscopic primitives is proposed. A formal definition of swarm pattern transformation and four special cases of transformation are introduced. Two general methods for transforming patterns are investigated and a comparison of the two methods is presented. The validity of the proposed models and the feasibility of the methods investigated are confirmed in the Traer Physics and Processing environment. Originality/value: This paper helps in understanding the limitations of existing research in pattern formation and the lack of mathematical foundations for swarm systems. The mathematical model and transformation methods introduce two key concepts, namely macroscopic primitives and a mathematical model. The exercise of implementing the proposed models on a physics simulator is novel.
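Transforming one swarm pattern into another ultimately means assigning each agent a target position in the new pattern and moving it there. The sketch below uses a simple greedy nearest-target assignment purely as an illustration; it is not claimed to be either of the two transformation methods compared in the paper.

```python
import math

def transform_pattern(current, target):
    """Greedily assign each agent (at a position in `current`) to the nearest
    still-unclaimed position in `target`. A deliberately simple scheme used
    only to illustrate pattern transformation."""
    free = list(target)
    assignment = {}
    for i, pos in enumerate(current):
        j = min(range(len(free)), key=lambda k: math.dist(pos, free[k]))
        assignment[i] = free.pop(j)
    return assignment

# Example: transform a line pattern into a circle pattern of the same size.
n = 6
line = [(float(i), 0.0) for i in range(n)]
circle = [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n)) for k in range(n)]
print(transform_pattern(line, circle))
```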
Abstract:
A processing system comprises: input means arranged to receive at least one input group of bits representing at least one respective input number; output means arranged to output at least one output group of bits representing at least one respective output number; and processing means arranged to perform an operation on the at least one input group of bits to produce the at least one output group of bits such that the at least one output number is related to the at least one input number by a mathematical operation; and wherein each of the numbers can be any of a set of numbers which includes a series of numbers, positive infinity, negative infinity and nullity.
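The value set mentioned at the end of this claim (real numbers together with positive infinity, negative infinity and nullity) follows the so-called transreal conventions, under which, for example, 1/0 = +infinity, -1/0 = -infinity and 0/0 = nullity. The toy model below illustrates those arithmetic rules only; the bit-level encoding and hardware "means" recited in the claim are not reproduced, and all names are ours.

```python
# Toy model of an arithmetic whose values include the reals together with
# positive infinity, negative infinity and nullity. Division rules follow the
# usual transreal conventions; nullity absorbs every operation it touches.

POS_INF = float("inf")
NEG_INF = float("-inf")
NULLITY = "nullity"          # stands in for the extra element in the number set

def t_div(a, b):
    """Division over reals extended with +inf, -inf and nullity."""
    if a == NULLITY or b == NULLITY:
        return NULLITY                      # nullity propagates
    if b == 0:
        if a == 0:
            return NULLITY                  # 0/0 = nullity
        return POS_INF if a > 0 else NEG_INF
    if a in (POS_INF, NEG_INF) and b in (POS_INF, NEG_INF):
        return NULLITY                      # inf/inf has no determinate sign
    return a / b                            # ordinary case; finite/inf -> 0

print(t_div(1, 0), t_div(-1, 0), t_div(0, 0), t_div(6, 3))
```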