883 results for Agent-based systems
Abstract:
This dissertation develops an image processing framework with unique feature extraction and similarity measurements for human face recognition in the thermal mid-wave infrared portion of the electromagnetic spectrum. The goal of this research is to design specialized algorithms that extract facial vasculature information, create a thermal facial signature and identify the individual. The objective is to use such findings in support of a biometrics system for human identification with a high degree of accuracy and a high degree of reliability. This reliability stems from the minimal to nonexistent risk of alteration of the intrinsic physiological characteristics seen through thermal infrared imaging. The proposed thermal facial signature recognition is fully integrated and consolidates the main and critical steps of feature extraction, registration, matching through similarity measures, and validation through testing of our algorithm on a database, referred to as C-X1, provided by the Computer Vision Research Laboratory at the University of Notre Dame. Feature extraction was accomplished by first registering the infrared images to a reference image using the functional MRI of the Brain’s (FMRIB’s) Linear Image Registration Tool (FLIRT), modified to suit thermal infrared images. This was followed by segmentation of the facial region using an advanced localized contouring algorithm applied to anisotropically diffused thermal images. Thermal feature extraction from facial images was attained by performing morphological operations such as opening and top-hat segmentation to yield thermal signatures for each subject. Four thermal images taken over a period of six months were used to generate thermal signatures and a thermal template for each subject; the thermal template contains only the most prevalent and consistent features.
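The opening and top-hat operations mentioned above can be illustrated with a minimal sketch. This is not the dissertation's implementation: the function names, the pure-Python list-of-lists image, and the 3x3 flat structuring element are illustrative assumptions; a real pipeline would operate on full thermal images.

```python
def _filter(img, reduce_fn, k=1):
    # Apply reduce_fn (min for erosion, max for dilation) over a
    # (2k+1) x (2k+1) neighbourhood of each pixel (flat structuring element,
    # truncated at the image border).
    h, w = len(img), len(img[0])
    return [[reduce_fn(img[yy][xx]
                       for yy in range(max(0, y - k), min(h, y + k + 1))
                       for xx in range(max(0, x - k), min(w, x + k + 1)))
             for x in range(w)]
            for y in range(h)]

def opening(img):
    # Morphological opening: erosion followed by dilation; removes
    # bright structures smaller than the structuring element.
    return _filter(_filter(img, min), max)

def top_hat(img):
    # White top-hat: image minus its opening, which isolates small
    # bright features such as thin vessel-like structures.
    return [[p - o for p, o in zip(prow, orow)]
            for prow, orow in zip(img, opening(img))]
```

For example, on a flat background containing one bright pixel, the opening flattens the pixel away and the top-hat recovers it as the only nonzero response.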
Finally, a similarity measure technique was used to match signatures to templates, and Principal Component Analysis (PCA) was used to validate the results of the matching process. Thirteen subjects were used for testing the developed technique on an in-house thermal imaging system. Matching using a Euclidean-based similarity measure showed 88% accuracy for skeletonized signatures and templates; we obtained 90% accuracy for anisotropically diffused signatures and templates. We also employed the Manhattan-based similarity measure and obtained an accuracy of 90.39% for skeletonized and diffused templates and signatures. It was found that an average 18.9% improvement in the similarity measure was obtained when using diffused templates. The Euclidean- and Manhattan-based similarity measures were also applied to skeletonized signatures and templates of 25 subjects in the C-X1 database. The highly accurate results obtained in the matching process, along with the generalized design process, clearly demonstrate the ability of the thermal infrared system to be used on other thermal imaging based systems and related databases. A novel user-initialization registration of thermal facial images has been successfully implemented. Furthermore, the novel approach of developing a thermal signature template from four images taken at various times ensured that unforeseen changes in the vasculature did not affect the biometric matching process, as it relied on consistent thermal features.
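The Euclidean- and Manhattan-based similarity measures can be sketched as follows. This is a hedged illustration, not the dissertation's code: signatures and templates are assumed here to be flattened into equal-length numeric vectors, and the function names are our own.

```python
import math

def euclidean(sig, tmpl):
    # L2 distance between a flattened signature and template.
    return math.sqrt(sum((s - t) ** 2 for s, t in zip(sig, tmpl)))

def manhattan(sig, tmpl):
    # L1 (city-block) distance between the same vectors.
    return sum(abs(s - t) for s, t in zip(sig, tmpl))

def match(sig, templates, metric=euclidean):
    # Return the subject ID whose stored template is closest to the
    # probe signature under the chosen metric (smaller = more similar).
    return min(templates, key=lambda sid: metric(sig, templates[sid]))
```

Usage: with `templates = {"A": [1, 0, 1, 1], "B": [0, 1, 0, 0]}` and probe `sig = [1, 0, 1, 0]`, both metrics rank subject "A" closest.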
Abstract:
The purpose of this research is to develop an optimal kernel which would be used in a real-time engineering and communications system. Since the application is a real-time system, relevant real-time issues are studied in conjunction with kernel related issues. The emphasis of the research is the development of a kernel which would not only adhere to the criteria of a real-time environment, namely determinism and performance, but also provide the flexibility and portability associated with non-real-time environments. The essence of the research is to study how the features found in non-real-time systems could be applied to the real-time system in order to generate an optimal kernel which would provide flexibility and architecture independence while maintaining the performance needed by most engineering applications. Traditionally, development of real-time kernels has been done using assembly language. By utilizing the powerful constructs of the C language, a real-time kernel was developed which addressed the goals of flexibility and portability while still meeting the real-time criteria. The implementation of the kernel was carried out on 68010/20/30/40 microprocessor-based systems.
Abstract:
The principal effluent in the oil industry is produced water, which commonly accompanies the produced oil. Its volume is substantial, and improper discharge can harm the environment and society; careful management is therefore indispensable. The traditional treatment of produced water usually includes two techniques, flocculation and flotation. In flocculation processes, the traditional flocculant agents are poorly characterized in technical data sheets and remain expensive. The flotation process is the step in which the suspended particles are separated from the effluent. Dissolved air flotation (DAF) is a technique that has been consolidating itself economically and environmentally, presenting great reliability when compared with other processes. DAF is widely used in various fields of water and wastewater treatment around the globe. In this regard, this study aimed to evaluate the potential of an alternative natural flocculant agent based on Moringa oleifera to reduce the amount of oil and grease (TOG) in produced water from the oil industry by the flocculation/DAF method. The natural flocculant agent was evaluated for its efficacy, as well as its efficiency compared with two commercial flocculant agents normally used by the petroleum industry. The experiments were conducted following an experimental design, and the overall efficiencies for all flocculants were treated through statistical calculation using STATISTICA software version 10.0. Contour surfaces were obtained from the experimental design and interpreted in terms of the response variable, TOG (total oil and grease) removal efficiency. The design also yielded mathematical models for calculating the response variable under the studied conditions.
Commercial flocculants showed similar behavior, with an average overall efficiency of 90% for oil removal; however, economic analysis is the decisive factor in choosing one of these flocculant agents for the process. The natural alternative flocculant agent based on Moringa oleifera showed lower separation efficiency than the commercial ones (70% on average); on the other hand, this flocculant causes less environmental impact and is less expensive.
Abstract:
Software product line engineering brings advantages over traditional software development regarding the mass customization of system components. However, there are scenarios in which maintaining separate clones of a software system seems an easier and more flexible approach to managing the variabilities of a software product line. This dissertation qualitatively evaluates an approach that aims to support the reconciliation of functionalities between cloned systems. The analyzed approach is based on mining data about the issues and source code of evolved cloned web systems. The next step is to process the merge conflicts collected by the approach, but not indicated by traditional version control systems, to identify potential integration problems in the cloned software systems. The results of the study show the feasibility of the approach for performing a systematic characterization and analysis of merge conflicts for large-scale web-based systems.
Abstract:
Many tracking algorithms have difficulty dealing with occlusions and background clutter, and consequently do not converge to an appropriate solution. Tracking based on the mean shift algorithm has shown robust performance in many circumstances but still fails, e.g. when encountering dramatic intensity or colour changes in a pre-defined neighbourhood. In this paper, we present a robust tracking algorithm that integrates the advantages of mean shift tracking with those of tracking local invariant features. These features are integrated into the mean shift formulation so that tracking is performed based on both mean shift and feature probability distributions, coupled with an expectation maximisation scheme. Experimental results show robust tracking performance on a series of complicated real image sequences. © 2010 IEEE.
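As a hedged illustration of the core mean shift idea (not the paper's tracker, which couples colour and invariant-feature distributions with an expectation maximisation scheme), a one-dimensional mode-seeking step with a flat kernel can be sketched as:

```python
def mean_shift(points, start, bandwidth, iters=50, tol=1e-6):
    # Repeatedly move the estimate to the mean of the sample points
    # falling within `bandwidth` of it (flat kernel), converging on
    # a local mode of the underlying sample density.
    x = start
    for _ in range(iters):
        window = [p for p in points if abs(p - x) <= bandwidth]
        if not window:
            break  # no support under the kernel; stop
        new_x = sum(window) / len(window)
        if abs(new_x - x) < tol:
            return new_x
        x = new_x
    return x
```

For example, on samples clustered near 1.0 and near 5.1, the estimate converges to whichever cluster mean lies nearest the starting point, which is exactly why clutter or dramatic appearance changes inside the kernel window can pull a mean shift tracker off target.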
Abstract:
This review discusses synthesis of enantiopure sulfoxides through the asymmetric oxidation of prochiral sulfides. The use of metal complexes to promote asymmetric sulfoxidation is described in detail, with a particular emphasis on the synthesis of biologically active sulfoxides. The use of non-metal-based systems, such as oxaziridines, chiral hydroperoxides and peracids, as well as enzyme-catalyzed sulfoxidations is also examined.
Abstract:
The ability to capture human motion allows researchers to evaluate an individual’s gait. Gait can be measured in different ways, from camera-based systems to Magnetic and Inertial Measurement Units (MIMUs). The former use cameras to track the positions of photo-reflective markers, while the latter use accelerometers, gyroscopes, and magnetometers to measure segment orientation. Both systems can be used to measure joint kinematics, but the results vary because of differences in their anatomical calibrations. The objective of this thesis was to study potential solutions for reducing joint angle discrepancies between MIMU and camera-based systems. The first study corrected the anatomical frame differences between MIMU and camera-based systems via the joint angles of both systems, comparing full lower body correction with correction of a single joint. Single joint correction showed slightly better alignment of the two systems, but did not take into account that body segments are generally affected by more than one joint. The second study explored the possibility of anatomical landmarking using a single camera and a pointer apparatus. Results showed that anatomical landmark positions could be determined using a single camera, as the anatomical landmarks found in this study and those from a camera-based system showed similar results. This thesis provides a novel way to obtain anatomical landmarks with a single point-and-shoot camera, as well as a method for aligning anatomical frames between MIMUs and camera-based systems using joint angles.
Abstract:
The presentation made at the conference addressed the issue of linkages between performance information and innovation within the Canadian federal government. This is a three-part paper prepared as background to that presentation.
• Part I provides an overview of three main sources of performance information (results-based systems, program evaluation, and centrally driven review exercises) and reviews the Canadian experience with them.
• Part II identifies and discusses a number of innovation issues that are common to the literature reviewed for this paper.
• Part III examines actual and potential linkages between innovation and performance information.
This last section suggests that innovation in the Canadian federal government tends to cluster into two groups: smaller initiatives driven by staff or middle management; and much larger projects involving major programs, whole departments or whole-of-government. Readily available data on smaller innovation projects is sparse but suggests that performance information does not play a major role in stimulating these initiatives. In contrast, two of the examples of large-scale innovation show that performance information plays a critical role at all stages. The paper concludes by supporting the contention of others writing on this topic: that more research is needed on innovation, particularly on its link to performance information. In that context, other conclusions drawn in this paper are tentative but suggest that the quality of performance information is as important for innovation as it is for performance management. However, innovation is likely to require its own particular performance information that may not be generated on a routine basis for purposes of performance management, particularly in the early stages of innovation. And, while the availability of performance information can be an important success factor in innovation, it does not stand alone.
The commonality of a number of other factors identified in the literature surveyed for this paper strongly suggests that equal if not greater priority needs to be given to attenuating factors that inhibit innovation and to nurturing incentives.
Abstract:
The agent-based social simulation component of the TELL ME project (WP4) developed prototype software to assist communications planners to understand the complex relationships between communication, personal protective behaviour and epidemic spread. Using the simulation, planners can enter different potential communications plans, and see their simulated effect on attitudes, behaviour and the consequent effect on an influenza epidemic.
The model and the software to run the model are both freely available (see section 2.2.1 for instructions on how to obtain the relevant files). This report provides the documentation for the prototype software. The major component is the user guide (Section 2). This provides instructions on how to set up the software, some training scenarios to become familiar with the model operation and use, and details about the model controls and output.
The model contains many parameters. Default values and their sources are described in Section 3. These are unlikely to be suitable for all countries and may also need to be changed as new research is conducted. Instructions on how to customise these values are also included (see section 3.5).
The final technical reference contains two parts. The first is a guide for advanced users who wish to run multiple simulations and analyse the results (section 4.1). The second orients programmers who wish to adapt or extend the simulation model (section 4.2). This material is not suitable for general users.
Abstract:
Literature describing the notion and practice of business models has grown considerably over the last few years. Innovative business models appear in every sector of the economy, challenging traditional ways of creating and capturing value. However, research describing the theoretical foundations of the field is scarce and many questions still remain. This article examines business models promoting various aspects of sustainable development and tests the explanatory power of two theoretical approaches, namely the resource-based view of the firm and transaction cost theory, regarding their emergence and successful market performance. Through the examples of industrial ecology and the sharing economy, the author shows that a sharp reduction of transaction costs (e.g. in the form of internet-based systems) coupled with resources widely available but not utilised before may result in fast-growing new markets. This research also provides evidence that these two theoretical approaches can complement each other in explaining corporate behaviour.
Abstract:
The use of tabletop technology continues to grow in the restaurant industry, and this study identifies the strengths and weaknesses of the technology, how it influences customers, and how it can improve the bottom line for managers and business owners. Results from two studies involving a full-service casual dining chain show that dining time was significantly reduced among patrons who used the tabletop hardware to order or pay for their meals, as was the time required for servers to meet the needs of customers. Also, those who used the devices to order a meal tended to spend more than those who did not. Patrons across the industry have embraced guest-facing technology, such as online reservation systems, mobile apps, payment apps, and tablet-based systems, and may in fact look for such technology when deciding where to dine. Guests’ reactions have been overwhelmingly positive, with 70 to 80 percent of consumers citing the benefits of guest-facing technology and applications. The introduction of tabletop technology in the full-service segment has been slower than in quick-service restaurants (QSRs), and guests cite online reservation systems, online ordering, and tableside payment as preferred technologies. Restaurant operators have also cited benefits of guest-facing technology, for example, the use of electronic ordering, which led to increased sales as such systems can induce the purchase of more expensive menu items and side dishes while allowing managers to store order and payment information for future transactions. Researchers have also noted the cost of the technology and potential problems with integration into other systems as two main factors blocking adoption.
Abstract:
User behaviour is a significant determinant of a product’s environmental impact; while engineering advances permit increased efficiency of product operation, the user’s decisions and habits ultimately have a major effect on the energy or other resources used by the product. There is thus a need to change users’ behaviour. A range of design techniques developed in diverse contexts suggest opportunities for engineers, designers and other stakeholders working in the field of sustainable innovation to affect users’ behaviour at the point of interaction with the product or system, in effect ‘making the user more efficient’. Approaches to changing users’ behaviour from a number of fields are reviewed and discussed, including: strategic design of affordances and behaviour-shaping constraints to control or affect energy- or other resource-using interactions; the use of different kinds of feedback and persuasive technology techniques to encourage or guide users to reduce their environmental impact; and context-based systems which use feedback to adjust their behaviour to run at optimum efficiency and reduce the opportunity for user-affected inefficiency. Example implementations in the sustainable engineering and ecodesign field are suggested and discussed.
Abstract:
Economic losses resulting from disease development can be reduced by accurate and early detection of plant pathogens. Early detection can provide the grower with useful information on optimal crop rotation patterns, varietal selections, appropriate control measures, harvest date and post-harvest handling. Classical methods for the isolation of pathogens are commonly used only after disease symptoms appear. This frequently results in a delay in the application of control measures at potentially important periods in crop production. This paper describes the application of both antibody- and DNA-based systems to monitor the infection risk of air- and soil-borne fungal pathogens, and the use of this information with mathematical models describing the risk of disease associated with environmental parameters.
Abstract:
Individuals and corporate users are persistently considering cloud adoption due to its significant benefits compared to traditional computing environments. The data and applications in the cloud are stored in an environment that is separated, managed and maintained externally to the organisation. Therefore, it is essential for cloud providers to demonstrate and implement adequate security practices to protect the data and processes put under their stewardship. Security transparency in the cloud is likely to become the core theme that underpins the systematic disclosure of security designs and practices that enhance customer confidence in using cloud service and deployment models. In this paper, we present a framework that enables a detailed analysis of security transparency for cloud-based systems. In particular, we consider security transparency at three different levels of abstraction, i.e., the conceptual, organisational and technical levels, and identify the relevant concepts within these levels. This allows us to provide an elaboration of the essential concepts at the core of transparency and analyse the means for implementing them from a technical perspective. Finally, an example from a real-world migration context is given to provide a solid discussion on the applicability of the proposed framework.