899 results for Computational Intelligence in Medicine
Abstract:
The prevailing hypercompetitive environment has made it essential for organizations to gather competitive intelligence through environmental scanning. The knowledge gained leads to organizational learning, which in turn stimulates patent productivity. This paper highlights five practices that aid in developing patenting intelligence and empirically verifies to what extent this organizational learning leads to knowledge gains and to the financial gains realized from the consequent higher patent productivity. The model is validated against the perceptions of professionals with patenting experience from two of the most aggressively patenting sectors in today's economy, viz., the IT and pharmaceutical sectors (n=119). The key finding of our study suggests that although organizational learning from environmental scanning exists, the application of this knowledge to increasing patent productivity lacks due appreciation. This missing link between strategic analysis and strategy implementation has serious implications for managers, which are briefly discussed in this paper.
Abstract:
This paper discusses an approach to river mapping and flood evaluation based on multi-temporal time-series analysis of satellite images, using pixel spectral information for image clustering and region-based segmentation to extract water-covered regions. MODIS satellite images are analyzed at two stages: before the flood and during the flood. The multi-temporal MODIS images are processed in two steps. In the first step, clustering algorithms such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) are used to distinguish water regions from non-water regions based on spectral information; these algorithms are chosen because they are efficient at solving multi-modal optimization problems. The classified images are then segmented using spatial features of the water region to extract the river. From the results obtained, we evaluate the performance of the methods and conclude that combining region-based image segmentation with clustering algorithms provides an accurate and reliable approach to extracting water-covered regions.
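The abstract gives no implementation details, but the first step it describes, clustering pixels into water and non-water classes by spectral value, can be illustrated with a minimal PSO sketch. Everything below (the function name, the 1-D reflectance representation, the swarm parameters) is an assumption for illustration, not the paper's actual method:

```python
import numpy as np

def pso_two_cluster(values, n_particles=20, n_iters=60, seed=0):
    """Illustrative sketch: place two cluster centres over 1-D spectral
    values (e.g. water vs. non-water reflectance) with Particle Swarm
    Optimization, minimising the within-cluster sum of squared distances."""
    rng = np.random.default_rng(seed)
    lo, hi = values.min(), values.max()

    def cost(centres):
        # each value is charged the squared distance to its nearest centre
        d = np.abs(values[:, None] - centres[None, :])
        return float((d.min(axis=1) ** 2).sum())

    pos = rng.uniform(lo, hi, size=(n_particles, 2))  # candidate centre pairs
    vel = np.zeros_like(pos)
    pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
    gbest = pbest[pbest_cost.argmin()].copy()

    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, 1))
        # standard PSO velocity update: inertia + cognitive + social terms
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        c = np.array([cost(p) for p in pos])
        better = c < pbest_cost
        pbest[better], pbest_cost[better] = pos[better], c[better]
        gbest = pbest[pbest_cost.argmin()].copy()

    centres = np.sort(gbest)
    labels = np.abs(values[:, None] - centres[None, :]).argmin(axis=1)
    return centres, labels
```

Label 0 corresponds to the lower-reflectance cluster, which in MODIS imagery would typically be water; the region-based segmentation step the abstract describes would then operate on this label map.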
Abstract:
With the development of large-scale wireless networks, traditional network topology management systems have shown shortcomings and limitations. In this paper, an adaptive algorithm is proposed to maintain the topology of a hybrid wireless superstore network by considering the transactions and the individual network load. The adaptations include choosing the best network connection for each response and switching network connections when network conditions change. At the same time, with respect to the design of topology management systems, and aiming at intelligence and real-time operation, the study develops the overall topology management scheme through a step-by-step argument. An architecture for the adaptive topology management of hybrid wireless networking resources is made available to the user's mobile device. Simulation results show that the new scheme outperforms the original topology management scheme and is simpler than the original rate-borrowing scheme.
Abstract:
Functions are important in designing. However, several issues hinder progress in the understanding and usage of functions: the lack of a clear and overarching definition of function, the lack of overall justifications for the inevitability of the multiple views of function, and the scarcity of systematic attempts to relate these views to one another. To help resolve these, the objectives of this research are to propose a common definition of function that underlies the multiple views in the literature, and to identify and validate the views of function that are logically justified to be present in designing. Function is defined as a change intended by designers between two scenarios: before and after the introduction of the design. A framework is proposed that comprises the above definition of function and an empirically validated model of designing: the extended Generate, Evaluate, Modify, and Select model of State-change, together with an Action, Part, Phenomenon, Input, oRgan, and Effect model of causality (known together as GEMS of SAPPhIRE), comprising the views of activity, outcome, requirement-solution-information, and system-environment. The framework is used to identify the logically possible views of function in the context of designing and is validated by comparing these with the views of function in the literature. Describing the different views of function using the proposed framework should enable comparisons and help determine relationships among the various views, leading to a better understanding and usage of functions in designing.
Abstract:
In recent times, crowdsourcing over social networks has emerged as an active tool for complex task execution. In this paper, we address the problem faced by a planner to incentivize agents in the network to execute a task and also help in recruiting other agents for this purpose. We study this mechanism design problem under two natural resource optimization settings: (1) cost critical tasks, where the planner's goal is to minimize the total cost, and (2) time critical tasks, where the goal is to minimize the total time elapsed before the task is executed. We define a set of fairness properties that should ideally be satisfied by a crowdsourcing mechanism. We prove that no mechanism can satisfy all these properties simultaneously. We relax some of these properties and define their approximate counterparts. Under appropriate approximate fairness criteria, we obtain a non-trivial family of payment mechanisms. Moreover, we provide precise characterizations of cost critical and time critical mechanisms.
Abstract:
We consider the problem of Probably Approximately Correct (PAC) learning of a binary classifier from noisy labeled examples acquired from multiple annotators (each characterized by a respective classification noise rate). First, we consider the complete information scenario, where the learner knows the noise rates of all the annotators. For this scenario, we derive a sample complexity bound for the Minimum Disagreement Algorithm (MDA) on the number of labeled examples to be obtained from each annotator. Next, we consider the incomplete information scenario, where each annotator is strategic and holds the respective noise rate as private information. For this scenario, we design a cost optimal procurement auction mechanism along the lines of Myerson's optimal auction design framework in a non-trivial manner. This mechanism satisfies the incentive compatibility property, thereby facilitating the learner to elicit the true noise rates of all the annotators.
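As a hedged illustration of the complete-information setting, the sketch below implements a generic minimum-disagreement rule: pool the noisy labels from all annotators and pick the hypothesis that disagrees with the fewest of them. The threshold hypothesis class, noise rates, and sample sizes are toy choices, not the paper's setup, and the sketch says nothing about the sample complexity bound or the auction mechanism:

```python
import numpy as np

def minimum_disagreement(hypotheses, examples, labels):
    """Return the index of the hypothesis that disagrees with the fewest
    of the pooled noisy labels (the core of an MDA-style learner)."""
    disagreements = [
        sum(int(h(x)) != int(y) for x, y in zip(examples, labels))
        for h in hypotheses
    ]
    return int(np.argmin(disagreements))

def noisy_annotator(xs, true_label, noise_rate, rng):
    """Label examples with the true concept, flipping each label
    independently with probability `noise_rate`."""
    flips = rng.random(len(xs)) < noise_rate
    return [true_label(x) ^ int(f) for x, f in zip(xs, flips)]
```

With two simulated annotators (noise rates 0.1 and 0.2) jointly labelling 400 points of a threshold concept at 0.5, the minimum-disagreement threshold lands close to 0.5 despite the label noise.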
Abstract:
Internal analogies are created when the knowledge of the source domain is obtained only from the cognition of designers. In this paper, an understanding of the use of internal analogies in conceptual design is developed by studying: the types of internal analogies; the roles of internal analogies; the influence of design problems on the creation of internal analogies; the role of designers' experience in the use of internal analogies; the levels of abstraction at which internal analogies are searched for in the target domain, identified in the source domain, and realized in the target domain; and the effect of internal analogies from the natural and artificial domains on the solution space created using these analogies. To facilitate this understanding, empirical studies of design sessions from earlier research are used, each involving a designer solving a design problem by identifying requirements and developing conceptual solutions, without using any support. The following are the important findings: designers use analogies from both the natural and artificial domains; analogies are used for generating requirements and solutions; the nature of the design problem influences the use of analogies; the role of designers' experience in the use of analogies could not be clearly ascertained; analogical transfer is observed at only a few levels of abstraction while many levels remain unexplored; and analogies from the natural domain seem to have a more positive influence than those from the artificial domain on the number of ideas and the variety of the idea space.
Abstract:
We propose a completely automatic approach for recognizing low-resolution face images captured in uncontrolled environments. The approach uses multidimensional scaling to learn a common transformation matrix for the entire face, which simultaneously transforms the facial features of the low-resolution and the high-resolution training images such that the distance between them approximates the distance that would have been obtained had both images been captured under the same controlled imaging conditions. A stereo matching cost is used to obtain the similarity of two images in the transformed space. Though this gives very good recognition performance, the time taken to compute the stereo matching cost is significant. To overcome this limitation, we propose a reference-based approach in which each face image is represented by its stereo matching cost with respect to a few reference images. Experimental evaluation on challenging real-world databases and comparison with state-of-the-art super-resolution, classifier-based, and cross-modal synthesis techniques show the effectiveness of the proposed algorithm.
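The multidimensional-scaling step can be caricatured as learning a single matrix W that maps both low- and high-resolution feature vectors so that their pairwise distances match distances measured under controlled conditions. The sketch below minimizes such an MDS-style stress objective by plain gradient descent; the function name, dimensions, and optimizer are illustrative assumptions, and the paper's actual formulation may differ:

```python
import numpy as np

def learn_common_transform(lr, hr, target_d, out_dim=2,
                           iters=300, step=2e-4, seed=0):
    """Fit one matrix W so that ||W lr[i] - W hr[j]|| approximates the
    target distance target_d[i, j] (an MDS-style 'stress' objective,
    minimised here by plain gradient descent for illustration)."""
    rng = np.random.default_rng(seed)
    n, f = lr.shape
    W = 0.1 * rng.standard_normal((out_dim, f))

    def stress(W):
        s = 0.0
        for i in range(n):
            for j in range(n):
                e = np.linalg.norm(W @ lr[i] - W @ hr[j])
                s += (e - target_d[i, j]) ** 2
        return s

    s0 = stress(W)
    for _ in range(iters):
        G = np.zeros_like(W)
        for i in range(n):
            for j in range(n):
                diff = lr[i] - hr[j]
                v = W @ diff
                e = np.linalg.norm(v)
                if e < 1e-9:
                    continue  # gradient undefined at zero distance
                # d/dW (e - d)^2 = 2 * (e - d) * outer(v / e, diff)
                G += 2.0 * (e - target_d[i, j]) * np.outer(v / e, diff)
        W -= step * G
    return W, (s0, stress(W))
```

In this toy setting the learned W drives the stress well below its random-initialization value; in the paper the transformed space is then searched with the stereo matching cost rather than plain Euclidean distance.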
Abstract:
A novel framework is provided for very fast model-based reinforcement learning in continuous state and action spaces. It requires probabilistic models that explicitly characterize their levels of confidence. Within the framework, flexible, non-parametric models are used to describe the world based on previously collected experience. Learning is demonstrated on the cart-pole problem in a setting where very limited prior knowledge about the task has been provided. Learning progressed rapidly, and a good policy was found after only a small number of iterations.
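The abstract describes the generic model-based loop: collect experience, fit a model that reports its confidence, and improve the policy inside the model. As a toy, hedged illustration of that loop (a 1-D linear system instead of the cart-pole, a linear-Gaussian model instead of the paper's non-parametric one, and made-up names throughout):

```python
import numpy as np

def model_based_rl(true_gain=0.8, episodes=5, horizon=20, seed=0):
    """Toy model-based RL loop: alternately (1) collect experience with the
    current policy, (2) fit a probabilistic dynamics model, (3) improve the
    policy using simulated rollouts in the model only.
    Real dynamics (unknown to the learner): s' = s + true_gain * a + noise.
    Policy class: a = -k * s; we search over the gain k."""
    rng = np.random.default_rng(seed)
    data_s, data_a, data_s2 = [], [], []
    k = 0.1  # initial, timid policy

    for _ in range(episodes):
        # 1) interact with the real system
        s = 1.0
        for _ in range(horizon):
            a = -k * s
            s2 = s + true_gain * a + 0.01 * rng.standard_normal()
            data_s.append(s); data_a.append(a); data_s2.append(s2)
            s = s2

        # 2) fit a linear-Gaussian model s' ~ s + g * a; the residual
        #    variance is a crude stand-in for the model's confidence
        S, A, S2 = map(np.array, (data_s, data_a, data_s2))
        g = float(A @ (S2 - S)) / float(A @ A)
        sigma2 = float(np.mean((S2 - S - g * A) ** 2))

        # 3) policy improvement purely inside the learned model: pick the
        #    gain whose simulated rollout accumulates the least state cost
        def model_cost(k):
            s, c = 1.0, 0.0
            for _ in range(horizon):
                s = s + g * (-k * s)  # model's mean prediction
                c += s ** 2
            return c
        k = min(np.linspace(0.05, 2.0, 40), key=model_cost)

    return k, g, sigma2
```

Because the learner plans in the fitted model rather than on the real system, each round of real interaction is cheap, which is the data-efficiency argument the abstract makes for the cart-pole experiments.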
Abstract:
This work seeks to address those questions and to evaluate other international experiences and experiments designed to achieve the same ends. The book is based on a study of two particular cases in which parliamentary bodies designed and implemented participatory digital processes, namely the e-Democracy Program developed by the Brazilian House of Representatives and the Virtual Senator Program developed by the Chilean Senate. The text unfolds as a systematic analysis of institutional aspects, embracing political and organizational elements as well as the social aspects associated with the application of digital democracy in parliaments. The investigation shows that, at the stage they had reached in 2010, those projects had produced only very incipient results with regard to enhancing representativeness in decision-making processes, adding collective intelligence to the legislative process, or bringing transparency to parliamentary performance, even though all of these are precious components of any democracy that deems itself participatory and deliberative. Nevertheless, such experiences have had the merit of contributing towards the gradual construction of more effective participatory mechanisms, complementary to the political representation system in place.