Abstract:
A composite line source emission (CLSE) model was developed to quantify exposure levels and describe the spatial variability of vehicle emissions in traffic-interrupted microenvironments. The model accounted for the complexity of vehicle movements in the queue, together with the different emission rates relevant to each driving condition (cruising, decelerating, idling and accelerating), and it used multiple representative segments to capture the emission distribution of real vehicle flow accurately. The model could therefore quickly quantify the time spent in each segment within the considered zone, as well as the composition and position of the requisite segments, based on vehicle fleet information. This not only helped to quantify the enhanced emissions at critical locations, but also helped to define the emission source distribution of the disrupted steady flow for further dispersion modelling. The model was then applied to estimate particle number emissions at a bi-directional bus station used by diesel and compressed natural gas fuelled buses. The acceleration distance was found to be of critical importance when estimating particle number emissions: the highest emissions occurred in sections where most of the buses were accelerating, while no significant increases were observed at locations where they idled. Emissions at the front end of the platform were also shown to be 43 times greater than at the rear of the platform. While the CLSE model is intended to be applied in traffic management and transport analysis systems for the evaluation of exposure and the simulation of vehicle emissions in traffic-interrupted microenvironments, the bus station model can also be used to supply initial source definitions for future dispersion models.
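As a rough illustration of the composite line source idea (not the study's actual parameterisation), the sketch below splits the road through a station into driving-mode segments and assigns each segment an emission equal to a mode-specific rate times the dwell time, summed over the fleet. All segment definitions and emission rates are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    name: str
    mode: str       # "cruise", "decelerate", "idle" or "accelerate"
    dwell_s: float  # assumed average time one bus spends in this segment

# Hypothetical particle number emission rates per driving mode (#/s per bus).
EMISSION_RATE = {
    "cruise": 1.0e12,
    "decelerate": 0.6e12,
    "idle": 0.4e12,
    "accelerate": 5.0e12,  # acceleration dominates, as the study found
}

def segment_emissions(segments, n_buses):
    """Particle number emitted in each segment by n_buses passing through."""
    return {
        seg.name: EMISSION_RATE[seg.mode] * seg.dwell_s * n_buses
        for seg in segments
    }

station = [
    Segment("approach", "decelerate", dwell_s=5.0),
    Segment("platform", "idle", dwell_s=20.0),
    Segment("departure", "accelerate", dwell_s=10.0),
]
print(segment_emissions(station, n_buses=100))
```

Even with a long idle dwell, the hypothetical numbers show the accelerating segment dominating total emissions, which is the qualitative pattern the abstract reports.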
Abstract:
Airborne fine particles were collected at a suburban site in Queensland, Australia between 1995 and 2003. The samples were analysed for 21 elements, and Positive Matrix Factorisation (PMF), Preference Ranking Organisation METHod for Enrichment Evaluation (PROMETHEE) and Graphical Analysis for Interactive Assistance (GAIA) were applied to the data. PROMETHEE ranked the pollutant levels across the sampling years, while PMF provided insights into the sources of the pollutants: their chemical composition, most likely locations and relative contributions to the levels of particulate pollution at the site. PROMETHEE and GAIA found that the removal of lead from fuel in the area had a significant impact on the pollution patterns, while PMF identified six pollution sources: Railways (5.5%), Biomass Burning (43.3%), Soil (9.2%), Sea Salt (15.6%), Aged Sea Salt (24.4%) and Motor Vehicles (2.0%). The results thus provide information that can assist in formulating mitigation measures for air pollution.
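PMF factorises the samples-by-species data matrix into non-negative source contributions and source profiles. The sketch below uses plain non-negative matrix factorisation from scikit-learn as an unweighted stand-in for PMF (true PMF also weights residuals by measurement uncertainty); the data is random and purely illustrative.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = rng.random((300, 21))  # 300 samples, 21 measured elements (synthetic)

model = NMF(n_components=6, init="nndsvda", max_iter=500, random_state=0)
G = model.fit_transform(X)  # sample-by-factor source contributions
F = model.components_       # factor-by-element chemical profiles

# Relative contribution of each factor to total mass, analogous in spirit to
# the percentages (Railways 5.5%, Biomass Burning 43.3%, ...) reported above.
mass_per_factor = (G * F.sum(axis=1)).sum(axis=0)
print(mass_per_factor / mass_per_factor.sum())
```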
Abstract:
"This column is distinguished from previous Impact columns in that it concerns the development tightrope between research and commercial take-up and the role of the LGPL in an open source workflow toolkit produced in a University environment. Many ubiquitous systems have followed this route, (Apache, BSD Unix, ...), and the lessons this Service Oriented Architecture produces cast yet more light on how software diffuses out to impact us all." Michiel van Genuchten and Les Hatton Workflow management systems support the design, execution and analysis of business processes. A workflow management system needs to guarantee that work is conducted at the right time, by the right person or software application, through the execution of a workflow process model. Traditionally, there has been a lack of broad support for a workflow modeling standard. Standardization efforts proposed by the Workflow Management Coalition in the late nineties suffered from limited support for routing constructs. In fact, as later demonstrated by the Workflow Patterns Initiative (www.workflowpatterns.com), a much wider range of constructs is required when modeling realistic workflows in practice. YAWL (Yet Another Workflow Language) is a workflow language that was developed to show that comprehensive support for the workflow patterns is achievable. Soon after its inception in 2002, a prototype system was built to demonstrate that it was possible to have a system support such a complex language. From that initial prototype, YAWL has grown into a fully-fledged, open source workflow management system and support environment
Abstract:
Information and communication technologies (ICTs) are essential components of the knowledge economy, and they play an immense complementary role in innovation, education, knowledge creation, and relations with government, civil society, and business within city regions. The ability to create, distribute, and exploit knowledge has become a major source of competitive advantage and wealth creation, and a central concern of new regional policies. The growing impact of ICTs on the economy and society, the rapid application of recent scientific advances in new products and processes, the shift to more knowledge-intensive industry and services, and rising skill requirements have become crucial concepts for urban and regional competitiveness. Therefore, harnessing ICTs for knowledge-based urban development (KBUD) has a significant impact on urban and regional growth (Yigitcanlar, 2005). In this sense, the e-region is a novel concept that utilizes ICTs for regional development. Since the Helsinki European Council announced Turkey as a candidate for European Union (EU) membership in 1999, the candidacy has accelerated regional policy enhancements and the adoption of European regional policy standards. These include the generation of a new regional spatial division, the NUTS-II statistical regions; new legislation on the establishment of regional development agencies (RDAs); and new orientations in the fields of higher education, science, and technology within the framework of the EU's Lisbon Strategy and the Bologna Process. The European standards posed an ambitious new agenda for the development and application of contemporary regional policy in Turkey (Bilen, 2005). Accordingly, new regional policies in Turkey endeavor to include information society objectives through the efficient use of new technologies such as ICTs. Such development seeks to build on the tangible assets of the region (Friedmann, 2006) as well as best practices derived from initiatives grounded at the urban and local levels. These assets provide the foundation of an e-region that harnesses regional development in an information society context. Through successful implementations of spatial information systems and e-governance, the Marmara region's local governments are setting the benchmark for Turkey and moving toward an e-region. This article therefore aims to shed light on the organizational and regional realities of recent ICT applications and their supply instruments, based on evidence from selected local government organizations in the Marmara region. It also exemplifies the challenges and opportunities of the region in moving toward an e-region and provides a concise review of different ICT applications and strategies in a broader urban and regional context. The article is organized in three parts. The following section scrutinizes the e-region framework and the role of ICTs in regional development. Then, Marmara's opportunities and challenges in moving toward an e-region are discussed in the context of ICT applications and their supply instruments, based on public-sector projects, policies, and initiatives. The last section discusses conclusions and prospective research.
Abstract:
Information overload has become a serious issue for web users, and personalisation can provide effective solutions to overcome it. Recommender systems are one popular personalisation tool for helping users deal with this issue. As the basis of personalisation, the accuracy and efficiency of web user profiling greatly affects the performance of recommender systems and other personalisation systems. In Web 2.0, emerging user information provides possible new solutions for profiling users. Folksonomy, or tag information, is a typical kind of Web 2.0 information. Folksonomy reflects users' topic interests and opinions, and it has become another important source of information for profiling users and making recommendations. However, since tags are arbitrary words given by users, folksonomy contains a lot of noise, such as tag synonyms, semantic ambiguities and personal tags, which makes it difficult to profile users accurately or to make quality recommendations. This thesis investigates the distinctive features and multiple relationships of folksonomy and explores novel approaches to solving the tag quality problem and profiling users accurately. Harvesting the wisdom of both crowds and experts, three new user profiling approaches are proposed: a folksonomy-based approach, a taxonomy-based approach, and a hybrid approach based on both folksonomy and taxonomy. The proposed user profiling approaches are applied to recommender systems to improve their performance. Based on the generated user profiles, user- and item-based collaborative filtering approaches, combined with content filtering methods, are proposed for making recommendations. The proposed user profiling and recommendation approaches have been evaluated through extensive experiments. The effectiveness experiments were conducted on two real-world datasets collected from the Amazon.com and CiteULike websites, and the results demonstrate that the proposed approaches outperform related state-of-the-art approaches. In addition, this thesis proposes a parallel, scalable user profiling implementation based on advanced cloud computing techniques such as Hadoop, MapReduce and Cascading. The scalability experiments were conducted on a large-scale dataset collected from the Del.icio.us website. This thesis contributes to the effective use of the wisdom of crowds and experts to help users overcome information overload through more accurate, effective and efficient user profiling and recommendation approaches. It also contributes to the better use of taxonomy information given by experts and folksonomy information contributed by users in Web 2.0.
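To make the folksonomy-based profiling idea concrete, the sketch below builds a tag-frequency profile from hypothetical (user, item, tag) triples and scores unseen items by cosine similarity to that profile. It illustrates the general approach only, not the thesis's specific algorithms; all names and data are invented.

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse tag-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_profile(annotations, user):
    """Tag-frequency profile of one user from (user, item, tag) triples."""
    return Counter(tag for u, _item, tag in annotations if u == user)

def recommend(annotations, user, top_n=3):
    profile = build_profile(annotations, user)
    # Aggregate each item's tags into an item vector.
    items = {}
    for _u, item, tag in annotations:
        items.setdefault(item, Counter())[tag] += 1
    seen = {i for u, i, _t in annotations if u == user}
    scored = [(cosine(profile, vec), i)
              for i, vec in items.items() if i not in seen]
    return sorted(scored, reverse=True)[:top_n]

triples = [
    ("alice", "book1", "python"), ("alice", "book1", "data"),
    ("bob", "book2", "python"), ("bob", "book3", "cooking"),
]
print(recommend(triples, "alice"))  # ranks book2 above book3
```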
Abstract:
We present a novel method and instrument for in vivo imaging and measurement of human corneal dynamics during an air puff. The instrument is based on high-speed swept source optical coherence tomography (ssOCT) combined with a custom-adapted air puff chamber from a non-contact tonometer, which uses an air stream to deform the cornea in a non-invasive manner. During the short period in which the deformation takes place, the ssOCT acquires multiple A-scans in time (an M-scan) at the center of the air puff, allowing observation of the dynamics of the anterior and posterior corneal surfaces as well as the anterior lens surface. These dynamics are driven by the biomechanical properties of the human eye as well as its intraocular pressure. Thus, analysis of the M-scan may provide useful information about the biomechanical behavior of the anterior segment during the applanation caused by the air puff. An initial set of controlled clinical experiments is presented to assess the performance of the instrument and its potential applicability to furthering the understanding of eye biomechanics and intraocular pressure measurements. Limitations and possibilities of the new apparatus are discussed.
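As a toy illustration of the kind of M-scan analysis described (not the authors' processing pipeline), the sketch below detects the anterior corneal surface in each A-scan by simple intensity thresholding and reports the deformation amplitude from the resulting depth-versus-time trace. The M-scan here is synthetic and all parameters are assumptions.

```python
import numpy as np

def anterior_surface(mscan: np.ndarray, threshold: float) -> np.ndarray:
    """mscan: (n_time, n_depth) intensities; returns, per A-scan, the index
    of the first depth sample whose intensity crosses the threshold."""
    above = mscan >= threshold
    idx = above.argmax(axis=1).astype(float)  # first True along depth
    idx[~above.any(axis=1)] = np.nan          # A-scans with no signal
    return idx

# Synthetic M-scan: a bright surface pushed deeper and recovering in time,
# mimicking the air-puff indentation at the corneal apex.
n_t, n_z = 200, 512
t = np.arange(n_t)
surface = 100 + 40 * np.exp(-((t - 100) / 30.0) ** 2)
mscan = np.random.rand(n_t, n_z) * 0.2        # background speckle
mscan[t, surface.astype(int)] = 1.0           # surface reflection

trace = anterior_surface(mscan, threshold=0.8)
print("deformation amplitude (pixels):", np.nanmax(trace) - np.nanmin(trace))
```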
Abstract:
Business practices vary from one company to another, and they often need to change as business environments change. To satisfy different business practices, enterprise systems need to be customized; to keep up with ongoing changes in business practice, they need to be adapted. Because of rigidity and complexity, the customization and adaptation of enterprise systems often takes excessive time, with potential failures and budget overruns. Moreover, enterprise systems often hold business back because they cannot be adapted rapidly enough to support changing business practices. An extensive literature has addressed this issue by identifying success or failure factors, implementation approaches, and project management strategies; those efforts aim to learn lessons from post-implementation experience to help future projects. This research looks at the issue from a different angle: it delivers a systematic method for developing flexible enterprise systems that can be easily tailored to different business practices or rapidly adapted when business practices change. First, this research examines the role of system models in the context of enterprise system development, and the relationship of system models to software programs in the contexts of computer-aided software engineering (CASE), model driven architecture (MDA) and workflow management systems (WfMS). Then, by applying the analogical reasoning method, this research introduces the concept of model driven enterprise systems. The novelty of model driven enterprise systems is that system models are extracted from software programs and kept independent of them. In this paradigm, system models act as instructors that guide and control the behavior of software programs, and software programs function by interpreting the instructions in system models. This mechanism opens the opportunity to tailor such a system by changing its system models. For this to work, system models must be represented in a language that can be easily understood by human beings and effectively interpreted by computers; this research therefore investigates various semantic representations to support model driven enterprise systems. The significance of this research is 1) the transplantation of the structure that gives modern machines and WfMS their flexibility to enterprise systems; and 2) the advancement of MDA by extending the role of system models from guiding system development to controlling system behavior. This research contributes to the area of enterprise systems from three perspectives: 1) a new paradigm of enterprise systems, in which an enterprise system consists of two essential, loosely coupled elements that can exist independently: system models and software programs; 2) semantic representations, which can effectively represent business entities, entity relationships, business logic and information-processing logic in a semantic manner, and which are the key enabling techniques of model driven enterprise systems; and 3) a brand new role for system models: traditionally, their role is to guide developers in writing system source code, whereas this research promotes them to controlling the behavior of enterprise systems.
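A minimal sketch of the model driven mechanism described above: the system model lives outside the program as interpretable data, and a generic engine executes by reading it, so behaviour changes by editing the model rather than the code. The model format, step names and actions below are invented for illustration.

```python
# Hypothetical system model: a declarative description of an approval
# process, held as data that a human can read and a program can interpret.
APPROVAL_MODEL = {
    "start": "submit",
    "steps": {
        "submit": {"action": "record_request", "next": "review"},
        "review": {"action": "manager_review", "next": "archive"},
        "archive": {"action": "store_result", "next": None},
    },
}

def record_request(ctx): ctx["log"].append("request recorded")
def manager_review(ctx): ctx["log"].append("reviewed by manager")
def store_result(ctx): ctx["log"].append("archived")

ACTIONS = {f.__name__: f for f in (record_request, manager_review, store_result)}

def run(model, ctx):
    """Generic engine: behaviour is dictated entirely by the model."""
    step = model["start"]
    while step is not None:
        spec = model["steps"][step]
        ACTIONS[spec["action"]](ctx)
        step = spec["next"]

ctx = {"log": []}
run(APPROVAL_MODEL, ctx)
print(ctx["log"])
```

Tailoring the system to a new business practice then means editing APPROVAL_MODEL (adding, removing or reordering steps) while the engine code stays untouched.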
Abstract:
Open-source software systems have become a viable alternative to proprietary systems. We collected data on the usage of an open-source workflow management system developed by a university research group, and examined this data with a focus on how three different user cohorts – students, academics and industry professionals – develop behavioral intentions to use the system. Building upon a framework of motivational components, we examined group differences in how extrinsic and intrinsic motivations shape continued usage intentions. Our study provides a detailed understanding of the use of open-source workflow management systems in different user communities. Moreover, it discusses implications for the provision of workflow management systems, the user-specific management of open-source systems and the development of services in the wider user community.
Abstract:
In keeping with the proliferation of free software development initiatives and the increased interest in the business process management domain, many open source workflow and business process management systems have appeared in recent years and are now under active development. This upsurge gives rise to two important questions: what are the capabilities of these systems, and how do they compare to each other and to their closed source counterparts? In other words, what is the state of the art in the area? To gain insight into these questions, we conducted an in-depth analysis of three of the major open source workflow management systems (jBPM, OpenWFE, and Enhydra Shark), the results of which are reported here. This analysis is based on the workflow patterns framework and continues the series of evaluations performed using the same framework on closed source systems, business process modelling languages, and web service composition standards. The results for the three open source systems are compared with each other and with the results for three representative closed source systems: Staffware, WebSphere MQ, and Oracle BPEL PM. The overall conclusion is that open source systems are targeted more toward developers than business analysts. They generally provide less support for the patterns than closed source systems do, particularly with respect to the resource perspective, i.e. the various ways in which work is distributed amongst business users and managed through to completion.
Abstract:
Background: When observers are asked to identify two targets in rapid sequence, they often suffer profound performance deficits for the second target, even when the spatial location of the targets is known. This attentional blink (AB) is usually attributed to the time required to process a previous target, implying that a link should exist between individual differences in information processing speed and the AB.

Methodology/Principal Findings: The present work investigated this question by examining the relationship between a rapid automatized naming task typically used to assess information-processing speed and the magnitude of the AB. The results indicated that faster processing actually resulted in a greater AB, but only when targets were presented amongst high-similarity distractors. When target-distractor similarity was minimal, processing speed was unrelated to the AB.

Conclusions/Significance: Our findings indicate that information-processing speed is unrelated to target processing efficiency per se, but rather to individual differences in observers' ability to suppress distractors. This is consistent with evidence that individuals who are able to avoid distraction are more efficient at deploying temporal attention, but argues against a direct link between general processing speed and efficient information selection.
Abstract:
House dust is a heterogeneous matrix which contains a number of biological materials and particulate matter gathered from several sources. It accumulates semi-volatile and non-volatile contaminants, which are trapped and preserved, so house dust can be viewed as an archive of both indoor and outdoor air pollution. There is evidence that, on average, people tend to spend most of their time indoors, which increases their exposure to house dust. The aims of this investigation were to: (i) assess the levels of polycyclic aromatic hydrocarbons (PAHs), elements and pesticides in the indoor environment of the Brisbane area; (ii) identify and characterise the possible sources of elemental constituents (inorganic elements), PAHs and pesticides by means of Positive Matrix Factorisation (PMF); and (iii) establish correlations between the levels of indoor air pollutants (PAHs, elements and pesticides) and the external and internal characteristics or attributes of the buildings and indoor activities by means of multivariate data analysis techniques. The dust samples were collected during 2005-2007 from homes located in different suburbs of Brisbane, Ipswich and Toowoomba, in South East Queensland, Australia. A vacuum cleaner fitted with a paper bag was used to collect the house dust, and a survey questionnaire completed by the residents recorded information about the indoor and outdoor characteristics of their residences. House dust samples were analysed for three different classes of pollutant: pesticides, elements and PAHs. The analyses were carried out on samples of particle size less than 250 µm. The chemical analyses for both pesticides and PAHs were performed using gas chromatography-mass spectrometry (GC-MS), while elemental analysis was carried out using inductively coupled plasma-mass spectrometry (ICP-MS). The data were subjected to multivariate data analysis techniques, in particular the multi-criteria decision-making procedure Preference Ranking Organisation Method for Enrichment Evaluations (PROMETHEE) coupled with Geometrical Analysis for Interactive Aid (GAIA), in order to rank the samples and examine the data display. This study showed that, compared with the results of previous work carried out in Australia and overseas, the concentrations of pollutants in house dust in Brisbane and the surrounding areas were relatively high. The results also showed significant correlations between some of the physical parameters (type of building material, floor level, distance from industrial areas and major roads, and smoking) and the concentrations of pollutants. The type of building material and the age of a house were found to be two of the primary factors affecting the concentrations of pesticides and elements in house dust: the concentrations of these two types of pollutant appeared to be higher in old (timber) houses than in brick ones, whereas the concentrations of PAHs were higher in brick houses than in timber ones. Other factors, such as floor level and distance from the main street and industrial areas, also affected the concentrations of pollutants in the house dust samples. To apportion the sources of pollutants and understand their mechanisms, a Positive Matrix Factorisation (PMF) receptor model was applied. The results showed significant correlations between the concentrations of contaminants in house dust and the physical characteristics of the houses, such as age and type, distance from main roads and industrial areas, and smoking. Sources of pollutants were identified. For PAHs, the sources were cooking activities, vehicle emissions, smoking, oil fumes, natural gas combustion and traces of diesel exhaust emissions. For pesticides, the sources were the application of pesticides to control termites in buildings and fences, to treat indoor furniture, and in gardens to control pests attacking horticultural and ornamental plants. For elements, the sources were soil, cooking, smoking, paints, pesticides, combustion of motor fuels, residual fuel oil, motor vehicle emissions, wear of brake linings, and industrial activities.
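For readers unfamiliar with PROMETHEE, the sketch below implements a minimal PROMETHEE II ranking with the simple "usual" preference function: pairwise differences on each criterion are converted to preferences, weight-averaged, and the net outranking flow orders the alternatives. The sample data and weights are invented, not values from this study.

```python
import numpy as np

def promethee_ii(X, weights, maximise):
    """X: (n_samples, n_criteria) evaluation table; returns net flow per
    sample (higher net flow = better ranked)."""
    n, m = X.shape
    pi = np.zeros((n, n))  # aggregated preference of sample a over sample b
    for k in range(m):
        diff = X[:, k][:, None] - X[:, k][None, :]
        if not maximise[k]:
            diff = -diff
        pi += weights[k] * (diff > 0)  # "usual" preference function
    phi_plus = pi.sum(axis=1) / (n - 1)   # how strongly a outranks others
    phi_minus = pi.sum(axis=0) / (n - 1)  # how strongly a is outranked
    return phi_plus - phi_minus

# Three hypothetical dust samples; criteria are concentrations of three
# pollutants, where lower is better.
X = np.array([[1.2, 30.0, 0.5],
              [0.8, 45.0, 0.7],
              [1.5, 20.0, 0.4]])
weights = np.array([0.5, 0.3, 0.2])
print(promethee_ii(X, weights, maximise=[False, False, False]))
```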
Abstract:
Appearance-based loop closure techniques, which leverage the high information content of visual images and can be used independently of pose, are now widely used in robotic applications. The current state of the art in the field is Fast Appearance-Based Mapping (FAB-MAP), which has been demonstrated in several seminal robotic mapping experiments. In this paper, we describe OpenFABMAP, a fully open source implementation of the original FAB-MAP algorithm. Beyond the benefits of full user access to the source code, OpenFABMAP provides a number of configurable options, including rapid codebook training and interest point feature tuning. We demonstrate the performance of OpenFABMAP on a number of published datasets and show the advantages of quick algorithm customisation. We present results from OpenFABMAP's application in a highly varied range of robotics research scenarios.
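The sketch below illustrates the general appearance-based matching pipeline that FAB-MAP builds on: local feature descriptors are quantised against a visual-word codebook, and a new place is matched against previously visited places by comparing word histograms. Note that FAB-MAP itself uses a probabilistic generative model over visual words; the cosine similarity here is only a stand-in, and none of this is OpenFABMAP's actual API. All data is random and illustrative.

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Quantise local feature descriptors against a visual-word codebook
    and return a normalised bag-of-words histogram."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)  # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

def best_loop_closure(query_hist, map_hists):
    """Return (index, similarity) of the most similar previously seen place."""
    sims = [float(query_hist @ h /
                  (np.linalg.norm(query_hist) * np.linalg.norm(h)))
            for h in map_hists]
    return int(np.argmax(sims)), max(sims)

rng = np.random.default_rng(1)
codebook = rng.random((50, 32))  # 50 visual words, 32-D descriptors
map_hists = [bow_histogram(rng.random((200, 32)), codebook) for _ in range(5)]
query = bow_histogram(rng.random((200, 32)), codebook)
print(best_loop_closure(query, map_hists))
```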
Abstract:
Process-aware information systems, ranging from generic workflow systems to dedicated enterprise information systems, use work-lists to offer so-called work items to users. In real scenarios, users can be confronted with a very large number of work items stemming from multiple cases of different processes. In this jungle of work items, users may find it hard to choose the right item to work on next. The system cannot autonomously decide which is the right work item, since the decision also depends on conditions outside the system. For instance, what is "best" for an organisation should be balanced against what is "best" for its employees. Current work-list handlers show work items as a simple sorted list and therefore provide little decision support for choosing the right work item. Since the work-list handler is the dominant interface between the system and its users, it is worthwhile to provide an intuitive graphical interface that uses contextual information about work items and users to suggest how work items might be prioritised. This paper uses the so-called map metaphor to visualise work items and resources (e.g., users) in a sophisticated manner. Moreover, based on distance notions, the work-list handler can suggest the next work item by considering different perspectives. For example, urgent work items of a type that suits the user may be highlighted. The underlying map and distance notions may be of a geographical nature (e.g., a map of a city or office building), but may also be based on process designs, organisational structures, social networks, due dates, calendars, etc. The framework proposed in this paper is generic and can be applied to any process-aware information system. Moreover, to show its practical feasibility, the paper discusses a full-fledged implementation developed in the context of the open-source workflow environment YAWL, together with two real examples stemming from two very different scenarios. The results of an initial usability evaluation of the implementation are also presented, providing a first indication of the validity of the approach.
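As a concrete illustration of distance-based suggestion (with invented distance notions and weights, not the paper's framework), the sketch below scores work items by a weighted combination of urgency, role match and physical nearness, and sorts the work-list accordingly.

```python
from dataclasses import dataclass

@dataclass
class WorkItem:
    name: str
    urgency: float       # 0..1, distance to deadline mapped to urgency
    role_match: float    # 0..1, how well the item suits the user's role
    location_km: float   # physical distance to where the work must be done

def suggestion_score(item: WorkItem, w_urg=0.5, w_role=0.3, w_loc=0.2):
    """Higher is better; location is inverted so nearer items score higher."""
    nearness = 1.0 / (1.0 + item.location_km)
    return w_urg * item.urgency + w_role * item.role_match + w_loc * nearness

worklist = [
    WorkItem("approve invoice", urgency=0.9, role_match=0.8, location_km=0.0),
    WorkItem("site inspection", urgency=0.7, role_match=0.9, location_km=12.0),
    WorkItem("file report", urgency=0.2, role_match=0.5, location_km=0.0),
]
for item in sorted(worklist, key=suggestion_score, reverse=True):
    print(f"{suggestion_score(item):.2f}  {item.name}")
```

Swapping the weights or the distance notions (e.g., organisational distance instead of kilometres) changes the suggestion without changing the mechanism, which is the genericity the paper aims for.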
Abstract:
Acoustic emission (AE) is the phenomenon in which stress waves are generated by the rapid release of energy within a material, caused by sources such as crack initiation or growth. The AE technique involves recording these stress waves by means of sensors and subsequently analysing the recorded signals to gather information about the nature of the source. Although the AE technique is one of the popular non-destructive evaluation (NDE) techniques for structural health monitoring of mechanical, aerospace and civil structures, several challenges remain in its successful application. The presence of spurious noise signals can mask genuine damage-related AE signals; hence, a major challenge is finding ways to discriminate between signals from different sources. Analysis of the parameters of recorded AE signals, comparison of the amplitudes of AE wave modes, and investigation of the uniqueness of recorded AE signals have been suggested as possible criteria for source differentiation. This paper reviews the approaches currently in common use for source discrimination, focusing in particular on structural health monitoring of civil engineering structural components such as beams, and further investigates the application of some of these methods by analysing AE data from laboratory tests.
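To illustrate parameter-based source discrimination, the sketch below extracts classic AE waveform parameters (peak amplitude, rise time, duration and threshold-crossing counts) from a single synthetic hit; such parameter sets can then be compared across signals to separate, say, crack-related events from spurious noise. The threshold and the test burst are illustrative assumptions.

```python
import numpy as np

def ae_parameters(signal, fs, threshold):
    """Return peak amplitude, rise time, duration and rising threshold
    crossings for one AE hit sampled at fs Hz."""
    above = np.abs(signal) >= threshold
    if not above.any():
        return None
    first = int(np.argmax(above))                      # first crossing
    last = len(above) - 1 - int(np.argmax(above[::-1]))  # last crossing
    peak = int(np.argmax(np.abs(signal)))
    counts = int(np.sum(above[1:] & ~above[:-1]))      # rising crossings
    return {
        "amplitude": float(np.abs(signal[peak])),
        "rise_time_s": (peak - first) / fs,
        "duration_s": (last - first) / fs,
        "counts": counts,
    }

fs = 1_000_000  # 1 MHz sampling
t = np.arange(0, 0.001, 1 / fs)
burst = np.exp(-t * 8000) * np.sin(2 * np.pi * 150_000 * t)  # decaying burst
print(ae_parameters(burst, fs, threshold=0.1))
```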