823 results for "Graph-based approach"


Relevance: 90.00%

Abstract:

The absence of a rapid, low-cost, and highly sensitive biodetection platform has hindered the implementation of next-generation, inexpensive, early-stage clinical or home-based point-of-care diagnostics. Label-free optical biosensing with high sensitivity, throughput, compactness, and low cost plays an important role in resolving these diagnostic challenges and pushes the detection limit down to the single molecule. Optical nanostructures, specifically resonant waveguide grating (RWG) and nano-ribbon cavity based biodetection, are promising in this context. The main element of this dissertation is the design, fabrication, and characterization of RWG sensors for different spectral regions (e.g. visible, near infrared) for use in label-free optical biosensing, together with the exploration of different RWG parameters to maximize sensitivity and increase detection accuracy. Design and fabrication of the waveguide-embedded resonant nano-cavity are also studied. Multi-parametric analyses were performed using a customized optical simulator to understand the operational principle of these sensors and, more importantly, the relationship between the physical design parameters and sensor sensitivities. Silicon nitride (SixNy) is a useful waveguide material because of its wide transparency across the whole infrared, the visible, and part of the UV spectrum, and its refractive index, which is higher than that of the glass substrate. SixNy-based RWGs on glass substrates are designed and fabricated using both electron beam lithography and low-cost nano-imprint lithography. A chromium hard-mask-aided nano-fabrication technique is developed for making very high aspect ratio optical nanostructures on glass substrates. An aspect ratio of 10 for very narrow (~60 nm wide) grating lines is achieved, the highest reported so far.
The fabricated RWG sensors are characterized for both bulk (183.3 nm/RIU) and surface (0.21 nm/nm-layer) sensitivity, and then used for the successful detection of Immunoglobulin-G (IgG) antibodies and antigen (~1 μg/ml) in both buffer and serum. Widely used optical biosensors such as surface plasmon resonance and optical microcavities are limited in separating the bulk response from surface binding events, which is crucial for ultralow-concentration biosensing under thermal or other perturbations. An RWG-based dual-resonance approach is proposed and verified by controlled experiments for separating the bulk and surface responses. The dual-resonance approach gives a sensitivity ratio of 9.4, whereas the competing polarization-based approach offers only 2.5. The improved performance of the dual-resonance approach would help reduce the probability of false readings in precise bioassay experiments where thermal variations are likely, such as portable diagnostics.
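The dual-resonance idea reduces, at its core, to inverting a 2 × 2 linear system: each resonance shift is a weighted sum of the bulk index change and the surface adlayer growth. The sketch below is illustrative only; apart from the 183.3 nm/RIU and 0.21 nm/nm-layer figures quoted above, the second resonance's coefficients are hypothetical placeholders.

```python
# Sketch: recovering bulk and surface contributions from two resonance
# shifts. S1b/S1s are the bulk (nm/RIU) and surface (nm/nm-layer)
# sensitivities of resonance 1; likewise S2b/S2s for resonance 2.

def separate_bulk_surface(shift1, shift2, S1b, S1s, S2b, S2s):
    """Solve shift_i = Sib * dn_bulk + Sis * dt_surf for (dn_bulk, dt_surf)."""
    det = S1b * S2s - S1s * S2b
    if abs(det) < 1e-12:
        raise ValueError("resonances are not independent enough to separate")
    dn_bulk = (shift1 * S2s - shift2 * S1s) / det
    dt_surf = (S1b * shift2 - S2b * shift1) / det
    return dn_bulk, dt_surf

# Resonance 1 uses the characterised values; resonance 2 is hypothetical.
S1b, S1s = 183.3, 0.21
S2b, S2s = 120.0, 0.05
```

The separation works only when the determinant is non-zero, i.e. when the two resonances weight bulk and surface effects in sufficiently different proportions.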

Relevance: 90.00%

Abstract:

An efficient synthetic approach for introducing the hydroxymethyl group onto an aryl moiety, via a combination of the Bouveault formylation and hydride reduction, has been optimized using a rational, mechanism-based approach. This enabled telescoping of the two steps into a single efficient process, readily amenable to scale-up.

Relevance: 90.00%

Abstract:

This paper introduces a normative view on the strategic management of corporate reputation. Reputation performance is conceptualised as the outcome of complex processes and social interactions, and the lack of a holistic reputation performance management framework is identified. In an attempt to fill this gap, a portfolio-based approach is put forward. Drawing on the foundations of modern portfolio theory, we create a portfolio-based reputation management algorithmic model in which reputation components and priorities are weighted by decision makers and shape organisational change in an attempt to formulate a corporate reputation strategy. The rationale of this paper is based on the foundational consideration of organisations as choosing the optimal strategy by seeking to maximise their reputation performance while maintaining organisational stability and minimising organisational risk.

Relevance: 90.00%

Abstract:

This paper introduces a normative view on corporate reputation management; an algorithmic model for reputation-driven strategic decision making is proposed and corporate reputation is conceptualized as influenced by a selection among organizational priorities. A portfolio-based approach is put forward; we draw on the foundations of portfolio theory and we create a portfolio-based reputation management model where reputation components and priorities are weighted by decision makers and shape organizational change in an attempt to formulate a corporate reputation strategy. The rationale of this paper is based on the foundational consideration of organizations as choosing the optimal strategy by seeking to maximize performance on corporate reputation capital while maintaining organizational stability and minimizing organizational risk.
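Purely as an illustration of the portfolio framing (not the paper's actual algorithmic model), reputation components can be treated like portfolio assets: decision makers assign weights that trade expected reputation gain against risk. All component names and figures below are hypothetical, and a brute-force grid search stands in for a proper optimiser.

```python
# Illustrative sketch of a portfolio-style weighting of reputation
# components; every number and component name here is invented.
from itertools import product

components = {"product quality": (0.8, 0.10),   # (expected reputation gain, risk)
              "CSR":             (0.6, 0.05),
              "innovation":      (0.9, 0.25)}

def best_weights(components, risk_aversion=2.0, step=0.1):
    """Grid-search the weight simplex for the gain-minus-risk optimum."""
    names = list(components)
    grid = [round(step * i, 10) for i in range(int(1 / step) + 1)]
    best, best_val = None, float("-inf")
    for w in product(grid, repeat=len(names)):
        if abs(sum(w) - 1.0) > 1e-9:
            continue  # weights must form a complete allocation
        gain = sum(wi * components[n][0] for wi, n in zip(w, names))
        risk = sum(wi * wi * components[n][1] for wi, n in zip(w, names))
        val = gain - risk_aversion * risk  # reputation gain penalised by risk
        if val > best_val:
            best, best_val = dict(zip(names, w)), val
    return best
```

The `risk_aversion` parameter plays the role of the decision makers' priority between maximising reputation performance and minimising organisational risk.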

Relevance: 90.00%

Abstract:

We propose a novel skeleton-based approach to gait recognition using our Skeleton Variance Image. The core of our approach is the use of the screened Poisson equation to construct a family of smooth distance functions associated with a given shape. The screened Poisson distance function approximation absorbs shape boundary perturbations and is relatively stable to them, which allows us to define a rough shape skeleton. We demonstrate that our Skeleton Variance Image is a powerful gait cycle descriptor, leading to a significant improvement over the existing state-of-the-art gait recognition rate.
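A minimal one-dimensional sketch of the screened Poisson distance idea, under simplifying assumptions (a plain Jacobi solver on an invented grid): solve u'' = u/t² with u = 1 on the boundary, then read off the approximate distance as d ≈ −t log u.

```python
# 1D toy version of the screened Poisson distance function: the "shape
# boundary" is the two ends of a unit interval. Grid size, t, and the
# iteration count are arbitrary illustrative choices.
import math

def screened_distance_1d(n=101, h=0.01, t=0.05, iters=5000):
    """Jacobi solve of u'' = u/t^2 with u = 1 at both ends; d ~ -t log u."""
    u = [0.0] * n
    u[0] = u[-1] = 1.0                      # boundary condition
    denom = 2.0 + (h * h) / (t * t)         # from the finite-difference stencil
    for _ in range(iters):
        new = u[:]
        for i in range(1, n - 1):
            new[i] = (u[i - 1] + u[i + 1]) / denom
        u = new
    return [-t * math.log(max(ui, 1e-300)) for ui in u]
```

The parameter t controls the smoothing: larger t absorbs boundary perturbations more strongly, which is exactly the stability property exploited for skeleton extraction.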

Relevance: 90.00%

Abstract:

Objective
Pedestrian detection under video surveillance has long been a hot topic in computer vision research. Surveillance systems are widely used in train stations, airports, large commercial plazas, and other public places. However, pedestrian detection remains difficult because of complex backgrounds. Given its development in recent years, the visual attention mechanism has attracted increasing attention in object detection and tracking research, and previous studies have achieved substantial progress and breakthroughs. We propose a novel pedestrian detection method based on semantic features under the visual attention mechanism.
Method
The proposed semantic feature-based visual attention model is a spatial-temporal model that consists of two parts: a static visual attention model and a motion visual attention model. The static visual attention model in the spatial domain is constructed by combining bottom-up and top-down attention guidance. Based on the characteristics of pedestrians, Itti's bottom-up visual attention model is improved by intensifying the orientation vectors of elementary visual features to make the visual saliency map suitable for pedestrian detection. Among pedestrian attributes, skin color is selected as the semantic feature for pedestrian detection. Regional and Gaussian models are adopted to construct the skin color model. Skin-feature-based visual attention guidance is then proposed to complete the top-down process. The bottom-up and top-down visual attentions are linearly combined, using weights obtained from experiments, to construct the static visual attention model in the spatial domain. The spatial-temporal visual attention model is then constructed via motion features in the temporal domain. Building on the static model in the spatial domain, the frame difference method is combined with optical flow to detect motion vectors, and filtering is applied to the field of motion vectors. The saliency of motion vectors is evaluated via motion entropy to make the selected motion feature more suitable for the spatial-temporal visual attention model.
Result
Standard datasets and practical videos are selected for the experiments. The experiments are performed on a MATLAB R2012a platform. The experimental results show that our spatial-temporal visual attention model demonstrates favorable robustness under various scenes, including indoor train station surveillance videos and outdoor scenes with swaying leaves. Our proposed model outperforms the visual attention model of Itti, the graph-based visual saliency model, the phase spectrum of quaternion Fourier transform model, and the motion channel model of Liu in terms of pedestrian detection. The proposed model achieves a 93% accuracy rate on the test video.
Conclusion
This paper proposes a novel pedestrian detection method based on the visual attention mechanism. A spatial-temporal visual attention model that uses low-level and semantic features is proposed to calculate the saliency map. Based on this model, pedestrian targets can be detected through shifts in the focus of attention. The experimental results verify the effectiveness of the proposed attention model for detecting pedestrians.
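The linear fusion step described above can be sketched on toy saliency maps as follows; the weights are placeholders rather than the experimentally obtained values the paper refers to, and a thresholded frame difference is a crude stand-in for the full optical-flow motion model.

```python
# Toy sketch: linearly combining bottom-up, top-down (skin-colour), and
# motion saliency maps, each stored as a nested list. Weights are invented.

def normalize(m):
    """Rescale a 2D map to [0, 1] (constant maps are left at zero)."""
    flat = [v for row in m for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0
    return [[(v - lo) / span for v in row] for row in m]

def fuse(bottom_up, top_down, motion, w_bu=0.3, w_td=0.3, w_mo=0.4):
    """Weighted sum of the three normalised saliency maps."""
    bu, td, mo = normalize(bottom_up), normalize(top_down), normalize(motion)
    return [[w_bu * b + w_td * t + w_mo * m
             for b, t, m in zip(rb, rt, rm)]
            for rb, rt, rm in zip(bu, td, mo)]

def frame_difference(prev, curr, thresh=0.1):
    """Crude motion map: absolute inter-frame difference, thresholded."""
    return [[abs(c - p) if abs(c - p) > thresh else 0.0
             for p, c in zip(rp, rc)]
            for rp, rc in zip(prev, curr)]
```

A pedestrian that is both salient in the static maps and moving accumulates weight from all three terms, which is the intuition behind the spatial-temporal combination.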

Relevance: 90.00%

Abstract:

This review discusses the potential application of bacterial viruses (phage therapy) to the eradication of antibiotic-resistant Pseudomonas aeruginosa in children with cystic fibrosis (CF). In this regard, several potential relationships between bacteria and their bacteriophages are considered. The most important aspect that must be addressed in phage therapy of bacterial infections in the lungs of CF patients is ensuring continuity of treatment in light of the continual emergence of resistant bacteria. This depends on the ability to rapidly select phages exhibiting an enhanced spectrum of lytic activity from several well-studied phage groups of proven safety. We propose a modular approach, utilizing both mono-species and hetero-species phage mixtures. An approach based on visual recognition of the characteristics exhibited by phages of well-studied groups on lawns of the standard P. aeruginosa PAO1 strain permits simple and rapid enhancement of the lytic spectrum of cocktails, allowing the development of preparations tailored to individual patients and capable of circumventing problems associated with phage-resistant bacterial mutants.
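A hypothetical sketch of the modular cocktail-building idea: if each phage's lytic spectrum is treated as the set of bacterial isolates it can lyse, a cocktail can be assembled greedily until the spectrum is covered. The phage group names and host ranges below are invented for illustration.

```python
# Greedy set cover as a toy model of modular cocktail assembly:
# repeatedly add the phage that lyses the most still-uncovered isolates.

def build_cocktail(host_ranges, isolates):
    """Return (chosen phages, isolates no available phage can lyse)."""
    uncovered, cocktail = set(isolates), []
    while uncovered:
        phage = max(host_ranges, key=lambda p: len(host_ranges[p] & uncovered))
        gained = host_ranges[phage] & uncovered
        if not gained:
            break  # remaining isolates resist every available phage
        cocktail.append(phage)
        uncovered -= gained
    return cocktail, uncovered

# Invented host ranges: phage group -> isolates it lyses.
host_ranges = {"phiKZ-like": {"s1", "s2", "s3"},
               "PB1-like":   {"s3", "s4"},
               "LUZ24-like": {"s5"}}
```

Isolates left uncovered at the end flag exactly the situation the review warns about: phage-resistant mutants for which the cocktail must be re-tailored.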

Relevance: 90.00%

Abstract:

The growth of networks, blogs, and users of social review sites makes the Internet an enormous source of data, particularly on how people think, feel, and act toward different issues. These days, people's opinions play an important role in politics, industry, education, and elsewhere. Governments, industries large and small, academic institutes, companies, and individuals therefore look to automatic techniques to extract the information they need from large volumes of data. Sentiment analysis is a direct answer to this need. It is an application of natural language processing and computational linguistics that draws on state-of-the-art techniques, such as machine learning and language models, to capture positive, negative, or neutral evaluations, with or without their strength, in plain text. In this thesis, we study a case-based approach to document-level sentiment analysis. Our case-based approach generates a binary classifier that uses a set of classified documents and five different sentiment lexicons to extract polarity from the scores associated with reviews. Since sentiment analysis is inherently a domain-dependent task, which makes the work difficult and expensive, we apply a cross-domain approach, basing our classifier on six different domains instead of limiting it to a single one. To improve classification accuracy, we add negation detection to our algorithm. Furthermore, to improve the performance of our approach, a few innovative modifications are applied.
It is worth mentioning that our approach opens the way to new developments, as more sentiment lexicons and datasets can be added in the future.
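A toy sketch of the lexicon-plus-negation component of such a classifier: the word scores below are invented stand-ins for entries from real sentiment lexicons, and a single preceding negator flips a word's polarity.

```python
# Minimal lexicon-based polarity scorer with negation detection.
# LEXICON entries are illustrative, not drawn from any real lexicon.

LEXICON = {"good": 1, "great": 2, "bad": -1, "awful": -2, "boring": -1}
NEGATORS = {"not", "never", "no"}

def polarity(text):
    """Sum lexicon scores, flipping the sign of a word preceded by a negator."""
    tokens = text.lower().split()
    score = 0
    for i, tok in enumerate(tokens):
        if tok in LEXICON:
            sign = -1 if i > 0 and tokens[i - 1] in NEGATORS else 1
            score += sign * LEXICON[tok]
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"
```

A full system would combine several such lexicons and learn from classified documents across domains; this sketch only shows why negation handling matters ("not good" should not count as positive).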

Relevance: 90.00%

Abstract:

In the vein of the "Education for All" campaign to promote access to education, a wave of curriculum revision along the competency-based approach has swept francophone countries in sub-Saharan Africa, including Benin. The current study documents local actors' various interactions with the curricular reform in the course of its implementation. Secondary data, supplemented with qualitative research techniques such as semi-structured interviews with teachers and focus group discussions with parents, make it possible to relate the patterns of change and the challenges and resistance to change. The spectrum of actors generated illustrates advocacy on the one hand and resistance on the other. The advocacy of local actors reflects the global optimistic discourse on education, while resistance is favoured by disappointing policy outcomes as well as contextual constraints. (DIPF/Orig.)

Relevance: 90.00%

Abstract:

Software protection is an essential aspect of information security, needed to withstand malicious activities against software and to preserve software assets. However, software developers still lack a methodology for assessing the protections they deploy. To address this, we present a novel attack-simulation-based method for assessing and comparing software protection solutions. Our solution relies on Petri nets to specify and visualize attack models, and we developed a Monte Carlo based approach to simulate attack processes and to deal with uncertainty. Based on this simulation and estimation, a novel protection comparison model is proposed to compare different protection solutions. Finally, our attack-simulation-based software protection assessment method is presented. We illustrate the method by means of a software protection assessment process, demonstrating that our approach can provide a suitable software protection assessment for developers and software companies.
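The Monte Carlo step can be sketched as follows, with an attack reduced to a chain of steps, each having a success probability and a time cost; the probabilities, times, and the two example protection scenarios below are hypothetical, and the full Petri-net attack model is not reproduced.

```python
# Sketch: Monte Carlo estimation of attack success rate and mean
# time-to-compromise for a linear chain of attack steps.
import random

def simulate_attack(steps, runs=20000, seed=0):
    """steps: list of (success probability, hours spent on the step)."""
    rng = random.Random(seed)
    successes, total_time = 0, 0.0
    for _ in range(runs):
        elapsed, ok = 0.0, True
        for p_success, hours in steps:
            elapsed += hours
            if rng.random() > p_success:
                ok = False  # attacker gives up when a step fails
                break
        if ok:
            successes += 1
            total_time += elapsed
    rate = successes / runs
    return rate, (total_time / successes if successes else float("inf"))

# Hypothetical comparison: an obfuscated binary makes the first attack
# step both less likely to succeed and more time-consuming.
plain = [(0.9, 2.0), (0.8, 4.0)]
obfuscated = [(0.5, 10.0), (0.8, 4.0)]
```

Comparing the estimated success rates (and times) for different protection configurations is the essence of the protection comparison model.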

Relevance: 90.00%

Abstract:

In order to optimize frontal detection in sea surface temperature fields at 4 km resolution, a combined statistical and expert-based approach is applied to test different spatial smoothings of the data prior to the detection process. Fronts are usually detected at 1 km resolution using the histogram-based, single-image edge detection (SIED) algorithm developed by Cayula and Cornillon in 1992, with a standard preliminary smoothing using a median filter and a 3 × 3 pixel kernel. Here, detections are performed in three study regions (off Morocco, the Mozambique Channel, and north-western Australia) and across the Indian Ocean basin using the combination of multiple windows (CMW) method developed by Nieto, Demarcq and McClatchie in 2012, which improves on the original Cayula and Cornillon algorithm. Detections at 4 km and 1 km resolution are compared. Fronts are divided into two intensity classes ("weak" and "strong") according to their thermal gradient. A preliminary smoothing is applied prior to detection using different convolutions: three types of filters (median, average and Gaussian) combined with four kernel sizes (3 × 3, 5 × 5, 7 × 7, and 9 × 9 pixels) and three detection window sizes (16 × 16, 24 × 24 and 32 × 32 pixels), to test the effect of these smoothing combinations on reducing the background noise of the data and therefore on improving the frontal detection. The performance of the combinations on 4 km data is evaluated using two criteria: detection efficiency and front length. We find that the optimal combination of preliminary smoothing parameters for enhancing detection efficiency while preserving front length includes a median filter, a 16 × 16 pixel window size, and a 5 × 5 pixel kernel for strong fronts or a 7 × 7 pixel kernel for weak fronts. Results show an improvement in detection performance (from largest to smallest window size) of 71% for strong fronts and 120% for weak fronts.
Despite the small window used (16 × 16 pixels), the length of the fronts is preserved relative to that found with 1 km data. This optimal preliminary smoothing and the CMW detection algorithm on 4 km sea surface temperature data are then used to describe the spatial distribution of the monthly frequencies of occurrence of both strong and weak fronts across the Indian Ocean basin. In general, strong fronts are observed in coastal areas whereas weak fronts, with some seasonal exceptions, are mainly located in the open ocean. This study shows that adequate noise reduction through preliminary smoothing of the data considerably improves frontal detection efficiency as well as the overall quality of the results. Consequently, the use of 4 km data enables frontal detections similar to those from 1 km data (using a standard 3 × 3 median convolution) in terms of detectability, length and location. The method is easily applicable to large regions or at the global scale, with far fewer data manipulation and processing time constraints than 1 km data.

Relevance: 90.00%

Abstract:

The goal of image retrieval and matching is to find and locate object instances in images from a large-scale image database. While visual features are abundant, how to combine them to improve on the performance of individual features remains a challenging task. In this work, we focus on leveraging multiple features for accurate and efficient image retrieval and matching. We first propose two graph-based approaches to rerank initially retrieved images for generic image retrieval. In the graph, vertices are images while edges are similarities between image pairs. Our first approach employs a mixture Markov model, based on a random walk over multiple graphs, to fuse the graphs. We introduce a probabilistic model to compute the importance of each feature for graph fusion under a naive Bayesian formulation, which requires statistics of similarities from a manually labeled dataset containing irrelevant images. To reduce human labeling, we further propose a fully unsupervised reranking algorithm based on a submodular objective function that can be efficiently optimized by a greedy algorithm. By maximizing an information gain term over the graph, our submodular function favors a subset of database images that are similar to the query images and resemble each other. The function also exploits the rank relationships of images from multiple ranked lists obtained with different features. We then study a better-defined application, person re-identification, where the database contains labeled images of human bodies captured by multiple cameras. Re-identifications from multiple cameras are regarded as related tasks to exploit shared information. We apply a novel multi-task learning algorithm using both low-level features and attributes. A low-rank attribute embedding is jointly learned within the multi-task learning formulation to embed the original binary attributes into a continuous attribute space, where incorrect and incomplete attributes are rectified and recovered.
To locate objects in images, we design an object detector based on object proposals and deep convolutional neural networks (CNNs), in view of the emergence of deep networks. We improve the Fast RCNN framework and investigate two new strategies to detect objects accurately and efficiently: scale-dependent pooling (SDP) and cascaded rejection classifiers (CRC). The SDP improves detection accuracy by exploiting appropriate convolutional features depending on the scale of the input object proposals. The CRC effectively utilizes convolutional features and eliminates most negative proposals in a cascaded manner, while maintaining a high recall for true objects. Together, the two strategies improve detection accuracy and reduce the computational cost.
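The graph-fusion reranking step can be sketched as follows: per-feature similarity graphs are averaged (equal weights below stand in for the learned per-feature importances), and a query-personalised random walk over the fused graph scores the database images. The two tiny example graphs are invented.

```python
# Sketch: fuse similarity graphs from multiple features, then rerank
# database images by a random walk with restart at the query node.

def fuse_and_rank(graphs, weights, query, alpha=0.85, iters=50):
    nodes = sorted({n for g in graphs for n in g})
    # Weighted average of per-feature similarities -> one fused graph.
    fused = {u: {v: sum(w * g.get(u, {}).get(v, 0.0)
                        for w, g in zip(weights, graphs))
                 for v in nodes if v != u} for u in nodes}
    score = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):  # power iteration of the restart walk
        new = {}
        for v in nodes:
            inflow = 0.0
            for u in nodes:
                out = sum(fused[u].values())
                if out > 0:
                    inflow += score[u] * fused[u].get(v, 0.0) / out
            new[v] = (1 - alpha) * (1.0 if v == query else 0.0) + alpha * inflow
        score = new
    return sorted((n for n in nodes if n != query), key=score.get, reverse=True)

# Invented similarity graphs from two hypothetical features.
g_colour = {"q": {"a": 0.9, "b": 0.1}, "a": {"q": 0.9}, "b": {"q": 0.1}}
g_texture = {"q": {"a": 0.8, "b": 0.2}, "a": {"q": 0.8}, "b": {"q": 0.2}}
```

In the papers' setting the fusion weights come from the probabilistic importance model (or the submodular objective); here equal weights keep the sketch self-contained.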

Relevance: 90.00%

Abstract:

A model-driven approach to service evolution in clouds focuses mainly on the advantage of reusable evolution patterns for solving evolution problems. In this process, evolution patterns are driven by MDA models into pattern aspects. Weaving the aspects into the service-based process at runtime, using an aspect-oriented extended BPEL engine, provides the dynamic aspect of the evolution.

Relevance: 90.00%

Abstract:

This paper presents our work on analysing the high-level search within a graph-based hyperheuristic. The graph-based hyperheuristic solves the problem at a higher level by searching through permutations of graph heuristics rather than the actual solutions. The heuristic permutations are then used to construct the solutions. Variable Neighborhood Search, Steepest Descent, Iterated Local Search and Tabu Search are compared, and an analysis of their performance within the high-level search space of heuristics is carried out. Experimental results on benchmark exam timetabling problems demonstrate the simplicity and efficiency of this hyperheuristic approach. They also indicate that the choice of high-level search methodology is not crucial, and that the high-level search should explore the heuristic search space as widely as possible within a limited search time. This simple and general graph-based hyperheuristic may be applied to a range of timetabling and optimisation problems.
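The two-level idea can be sketched on a toy timetabling instance: the high-level search works over sequences of ordering heuristics, and each sequence is decoded into an actual timetable by greedy slot assignment. The instance data and the exhaustive high-level search below are invented stand-ins for the benchmark problems and the VNS/tabu/etc. searches compared in the paper.

```python
# Toy graph-based hyperheuristic: search permutations of two classic
# graph-colouring ordering heuristics, not the timetables themselves.
# All exams, conflicts, and enrolment figures are invented.
import itertools

conflicts = {"math": {"physics", "chemistry"},
             "physics": {"math"},
             "chemistry": {"math", "biology"},
             "biology": {"chemistry"},
             "art": set()}

def largest_degree(unscheduled):      # order by number of conflicts
    return max(unscheduled, key=lambda e: len(conflicts[e]))

def largest_enrolment(unscheduled):   # hypothetical enrolment figures
    sizes = {"math": 200, "physics": 120, "chemistry": 90,
             "biology": 60, "art": 300}
    return max(unscheduled, key=sizes.get)

def decode(heuristic_sequence):
    """Apply the i-th heuristic to pick the i-th exam; assign slots greedily."""
    unscheduled, slots = set(conflicts), {}
    for h in heuristic_sequence:
        exam = h(unscheduled)
        used = {slots[o] for o in conflicts[exam] if o in slots}
        slots[exam] = next(s for s in itertools.count() if s not in used)
        unscheduled.remove(exam)
    return slots

def best_sequence(length=5):
    """Exhaustive high-level search (stand-in for VNS, tabu search, etc.)."""
    seqs = itertools.product([largest_degree, largest_enrolment], repeat=length)
    return min(seqs, key=lambda s: max(decode(s).values()))
```

Decoding guarantees every sequence yields a conflict-free timetable, so the high-level search only has to find a sequence that minimises the number of slots used, mirroring how the hyperheuristic separates heuristic selection from solution construction.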
