981 results for spatially varying object pixel density
Abstract:
Light adaptation is crucial for coping with varying levels of ambient light. Using high-density electroencephalography (EEG), we investigated how adaptation to light of different colors affects brain responsiveness. In a within-subject design, sixteen young participants were adapted first to dim white light and then to blue, green, red, or white bright light (one color per session, in randomized order). Immediately after both dim and bright light adaptation, we presented brief light pulses and recorded event-related potentials (ERPs). We analyzed ERP response strengths and brain topographies and determined the underlying sources using electrical source imaging. Between 150 and 261 ms after stimulus onset, the global field power (GFP) was higher after dim than after bright light adaptation. This effect was most pronounced with red light and localized to the frontal lobe, the fusiform gyrus, the occipital lobe, and the cerebellum. Within the first 100 ms after light onset, responses were stronger after bright than after dim light adaptation for all colors except red. Differences between conditions were localized to the frontal lobe, the cingulate gyrus, and the cerebellum. These results indicate that very short-term EEG brain responses are influenced by prior light adaptation and by the spectral quality of the light stimulus. We show that the early EEG responses are differentially affected by adaptation to different colors of light, which may contribute to known differences in performance and reaction times in cognitive tests.
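Concretely, GFP at each time point is the spatial standard deviation of the average-referenced potential across all electrodes. A minimal sketch (the array shapes and random data are illustrative, not the study's recordings):

```python
import numpy as np

def global_field_power(erp):
    """Global field power: the spatial standard deviation of the
    potential across electrodes at each time sample.

    erp: array of shape (n_electrodes, n_samples).
    Returns an array of shape (n_samples,).
    """
    # Re-reference to the average reference, then take the spatial SD.
    avg_ref = erp - erp.mean(axis=0, keepdims=True)
    return np.sqrt((avg_ref ** 2).mean(axis=0))

# Illustrative data: 64 electrodes, 500 time samples of noise.
rng = np.random.default_rng(0)
erp = rng.standard_normal((64, 500))
gfp = global_field_power(erp)
```

Condition differences such as the dim-versus-bright effect between 150 and 261 ms would then be assessed by comparing the GFP curves of the two conditions sample by sample.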
Abstract:
We propose new methods for evaluating predictive densities. The methods include Kolmogorov-Smirnov and Cramér-von Mises-type tests for the correct specification of predictive densities, robust to dynamic mis-specification. The novelty is that the tests can detect mis-specification in the predictive densities even if it appears only over a fraction of the sample, due to the presence of instabilities. Our results indicate that our tests are well sized and have good power in detecting mis-specification in predictive densities, even when it is time-varying. An application to density forecasts of the Survey of Professional Forecasters demonstrates the usefulness of the proposed methodologies.
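Under correct specification, the probability integral transforms (PITs) z_t = F_t(y_t) of the realizations through the predictive CDFs are i.i.d. Uniform(0,1). The paper's tests extend this idea to be robust to dynamic mis-specification and instabilities; the classical full-sample Kolmogorov-Smirnov version they build on can be sketched as follows (function names are illustrative):

```python
import numpy as np
from scipy import stats

def pit_ks_test(y, predictive_cdf):
    """Kolmogorov-Smirnov check of predictive-density calibration.

    Under a correctly specified predictive density, the probability
    integral transforms z_t = F_t(y_t) are i.i.d. Uniform(0,1), so the
    sample of PITs is tested against the uniform distribution.
    predictive_cdf(t, y_t) returns the period-t predictive CDF at y_t.
    """
    z = np.array([predictive_cdf(t, y_t) for t, y_t in enumerate(y)])
    return stats.kstest(z, "uniform")

# Illustrative check: data that genuinely follow the N(0,1) forecast
# distribution should not be rejected.
y = stats.norm.ppf(np.linspace(0.01, 0.99, 99))
result = pit_ks_test(y, lambda t, v: stats.norm.cdf(v))
```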
Abstract:
The aim of this study was to characterize gas exchange responses of young cashew plants to varying photosynthetic photon flux density (PPFD), temperature, vapor-pressure deficit (VPD), and intercellular CO2 concentration (Ci) under controlled conditions. Daily courses of gas exchange and chlorophyll a fluorescence parameters were measured under natural conditions. Maximum CO2 assimilation rates under optimal controlled conditions were about 13 µmol m-2 s-1, with light saturation around 1,000 µmol m-2 s-1. Leaf temperatures between 25°C and 35°C were optimal for photosynthesis. Stomata showed sensitivity to CO2, with a closing response to increasing Ci. Increasing VPD had a small effect on CO2 assimilation rates, with a small decrease above 2.5 kPa. Stomata, however, were strongly affected by VPD, exhibiting gradual closure above 1.5 kPa. The reduced stomatal conductances at high VPD were efficient in restricting water losses by transpiration, demonstrating the species' adaptability to dry environments. Under natural irradiance, CO2 assimilation rates were saturated in early morning, thereafter following the PPFD changes. Transient Fv/Fm decreases were registered around 11:00 h, indicating the occurrence of photoinhibition. Decreases in excitation capture efficiency, decreases in the effective quantum yield of photosystem II, and increases in non-photochemical quenching were consistent with the occurrence of photoprotection under excessive irradiance levels.
Abstract:
Activating mutations in the K-Ras small GTPase are extensively found in human tumors. Although these mutations induce the generation of a constitutively GTP-loaded, active form of K-Ras, phosphorylation at Ser181 within the C-terminal hypervariable region can modulate oncogenic K-Ras function without affecting its in vitro affinity for its effector Raf-1. In striking contrast, K-Ras phosphorylated at Ser181 shows increased interaction in cells with the active form of Raf-1 and with p110α, the catalytic subunit of PI 3-kinase. Because the majority of phosphorylated K-Ras is located at the plasma membrane, we explored whether localization within this membrane differs according to phosphorylation status. Density-gradient fractionation of the plasma membrane in the absence of detergents showed segregation of K-Ras mutants carrying a phosphomimetic or unphosphorylatable serine residue (S181D or S181A, respectively). Moreover, statistical analysis of immunoelectron microscopy showed that the two phosphorylation mutants form distinct, non-overlapping nanoclusters. Finally, induction of oncogenic K-Ras phosphorylation by activation of protein kinase C (PKC) increased its co-clustering with the phosphomimetic K-Ras mutant, whereas, when PKC is inhibited, non-phosphorylated oncogenic K-Ras clusters with the non-phosphorylatable K-Ras mutant. Most interestingly, PI 3-kinase (p110α) was found in phosphorylated, but not in non-phosphorylated, K-Ras nanoclusters. In conclusion, our data provide the first evidence that PKC-dependent phosphorylation of oncogenic K-Ras induces its segregation into spatially distinct nanoclusters at the plasma membrane that, in turn, favor activation of Raf-1 and PI 3-kinase.
Abstract:
We propose a probabilistic object classifier for outdoor scene analysis as a first step in solving the problem of scene context generation. The method begins with a top-down control, which uses previously learned models (appearance and absolute location) to obtain an initial pixel-level classification. This information provides the cores of objects, which are used to acquire a more accurate object model. Growing these cores through specific active regions then allows accurate recognition of known regions. Next, a general segmentation stage provides the segmentation of unknown regions by a bottom-up strategy. Finally, the last stage performs a region fusion of known and unknown segmented objects. The result is both a segmentation of the image and a recognition of each segment as a given object class or as an unknown segmented object. Experimental results are shown and evaluated to prove the validity of our proposal.
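As a rough sketch of the top-down stage (the actual appearance and location models are learned from data; everything below, including names and shapes, is an illustrative toy), the initial pixel-level classification can be viewed as a per-pixel MAP decision combining an appearance likelihood with an absolute-location prior:

```python
import numpy as np

def pixel_classify(image, appearance_logpdf, location_logprior):
    """Per-pixel MAP classification from learned models.

    image: (H, W, 3) float array of pixel colours.
    appearance_logpdf: one function per class mapping an (N, 3) array
        of colours to (N,) log-likelihoods.
    location_logprior: (n_classes, H, W) log prior of each class at
        each pixel position.
    Returns an (H, W) array of class indices.
    """
    h, w, _ = image.shape
    pixels = image.reshape(-1, 3)
    scores = np.stack(
        [f(pixels).reshape(h, w) + location_logprior[k]
         for k, f in enumerate(appearance_logpdf)]
    )
    return scores.argmax(axis=0)

# Toy two-class scene: a bright upper half and a dark lower half.
h = w = 4
image = np.zeros((h, w, 3))
image[:2] = 0.9   # bright upper half
image[2:] = 0.1   # dark lower half
means = [np.full(3, 0.9), np.full(3, 0.1)]
appearance = [lambda p, m=m: -((p - m) ** 2).sum(axis=1) for m in means]
prior = np.full((2, h, w), np.log(0.5))  # uninformative location prior
labels = pixel_classify(image, appearance, prior)
```

The high-confidence pixels of each class would then serve as the object cores that the active regions grow from.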
Abstract:
We determined the influence of fasting (FAST) and feeding (FED) on cholesteryl ester (CE) flow between high-density lipoproteins (HDL) and plasma apoB-lipoproteins and triacylglycerol (TG)-rich emulsions (EM) prepared with TG-fatty acids (FAs). TG-FAs of varying chain lengths and degrees of unsaturation were tested in the presence of a plasma fraction at d > 1.21 g/mL as the source of CE transfer protein. The transfer of CE from HDL to FED TG-rich acceptor lipoproteins was greater than to FAST ones, 18% and 14%, respectively. However, percent CE transfer from HDL to apoB-containing lipoproteins was similar for FED and FAST HDL. The CE transfer from HDL to EM depended on the EM TG-FA chain length. Furthermore, the chain length of the monounsaturated TG-containing EM showed a significant positive correlation with the CE transfer from HDL to EM (r = 0.81, P < 0.0001) and a negative correlation with that from EM to HDL (r = -0.41, P = 0.0088). Regarding the degree of EM TG-FA unsaturation, among EMs containing C18, the CE transfer was lower from HDL to C18:2 than to C18:1 and C18:3, 17.7%, 20.7%, and 20%, respectively. However, the CE transfer from EMs to HDL was higher to C18:2 than to C18:1 and C18:3, 83.7%, 51.2%, and 46.3%, respectively. Thus, the EM FA composition was found to be the rate-limiting factor regulating the transfer of CE from HDL. Consequently, the net transfer of CE between HDL and TG-rich particles depends on the specific arrangement of the TG acyl chains in the lipoprotein particle core.
Abstract:
The gypsy moth, Lymantria dispar, a major defoliator of broad-leaf trees, was accidentally introduced into North America in 1869. Much interest has been generated regarding the potential of using natural pathogens for biological control of this insect. One of these pathogens, the highly specific fungus Entomophaga maimaiga, was credited with causing major epizootics in gypsy moth populations across the north-eastern United States in 1989 and 1990 and is thought to be spreading northwards into Canada. This study examined gypsy moth population densities in the Niagara Region. The fungus E. maimaiga was artificially introduced into one site and the resulting mortality in host populations was noted over two years. The relationship between fungal mortality, host population density, and the occurrence of another pathogen, the nuclear polyhedrosis virus (NPV), was assessed. Gypsy moth population density was assessed by counting egg masses in 0.01-hectare (ha) study plots in six areas, namely Louth, Queenston, Niagara-on-the-Lake, Shorthills Provincial Park, Chippawa Creek and Willoughby Marsh. High variability in density was seen among sites. Willoughby Marsh and Chippawa Creek, the sites with the greatest variability, were selected for more intensive study. The pathogenicity of E. maimaiga was established in laboratory trials. Fungal-infected gypsy moth larvae were then released into experimental plots of varying host density in Willoughby Marsh in 1992. These larvae served as the inoculum to infect field larvae. Other larvae were injected with culture medium only and released into control plots, also of varying host density. Later, field larvae were collected and assessed for the presence of E. maimaiga and NPV. A greater proportion of larvae from experimental plots than from control plots were infected, indicating that the experimental augmentation had been successful.
There was no relationship between host density and the proportion of infected larvae in either experimental or control plots. In 1992, 86% of larvae were positive for NPV. Presence and intensity of NPV infection were independent of fungal presence, plot type, or the interaction of these two factors. Sampling was carried out in the summer of 1993, the year after the introduction, to evaluate the persistence of the pathogen in the environment. Almost 50% of all larvae were infected with the fungus, with no difference between control and experimental plots. Data collected from Willoughby Marsh indicated no correlation between the proportion of larvae infected with the fungus and host population density in either experimental or control plots. About 10% of larvae collected from a nearby site, Chippawa Creek, were also positive for E. maimaiga, suggesting that low levels of E. maimaiga probably occurred naturally in the area. In 1993, 9.6% of larvae were positive for NPV. Again, presence or absence of NPV infection was independent of fungal presence, plot type, or the interaction of these two factors. In conclusion, gypsy moth population densities were highly variable between and within sites in the Niagara Region. The introduction of the pathogenic fungus E. maimaiga into Willoughby Marsh in 1992 was successful, and the fungus was again evident in 1993. There was no evidence for a relationship between fungal mortality and gypsy moth density or the occurrence of NPV. The results of this study are discussed with respect to the use of E. maimaiga in gypsy moth management programs.
Abstract:
Genetic Programming (GP) is a widely used methodology for solving various computational problems. GP's problem-solving ability is usually hindered by its long execution times. In this thesis, GP is applied to real-time computer vision. In particular, object classification and tracking using a parallel GP system are discussed. First, a study of suitable GP languages for object classification is presented. Two main GP approaches to visual pattern classification, namely block-classifiers and pixel-classifiers, were studied. Results showed that the pixel-classifiers generally performed better. Based on these results, a suitable language was selected for the real-time implementation. Synthetic video data were used in the experiments. The goal of the experiments was to evolve a unique classifier for each texture pattern present in the video. The experiments revealed that the system was capable of correctly tracking the textures in the video, and its performance was on par with real-time requirements.
Abstract:
Nanotechnology applied to the pharmaceutical sciences aims to improve the delivery of active molecules by means of nanoscale carriers. Among the various vehicles proposed for this purpose are polymeric nanoparticles (NPs) made of block copolymers. These copolymers both allow the encapsulation of active molecules and confer on the particle the surface properties (including hydrophilicity) required for its interactions with biological media. The architecture most frequently adopted for these copolymers is a linear structure of hydrophilic poly(ethylene glycol) (PEG) blocks joined to hydrophobic polyester blocks. PEG is the polymer of choice for providing NPs with a hydrophilic corona, and its effectiveness is directly related to its organization and surface density. Nevertheless, despite the limited clinical success of these linear copolymers, little work has explored the effects on NP structure of alternative architectures such as comb or brush copolymers. In this work, several strategies were developed for the synthesis of comb copolymers possessing a polyester-co-ether backbone and PEG chains grafted onto the available pendant groups (hydroxyl or alkyne). In the first part of this work, PEG chains were grafted onto pendant alcohol groups by acylation-based esterification and coupling reactions. This method yields comb copolymers (PEG-g-PLA) containing 5 to 50% PEG by weight, obtained by varying the number of chains grafted onto a poly(lactic acid) (PLA) backbone. The structural properties of the resulting NPs were studied by DLS, charge measurements and TEM.
A critical transition around 15% PEG (w/w) was observed, with a change in morphology from a solid particle to a soft one (a "polymeric nanoaggregate"). The grafting method, as well as the probable addition of PEG chains at the end of the main chain, also appears to play a role in the observed changes. The organization of the PEG-g-PLA chains at the surface was studied by NMR and XPS, methods that quantify the surface density of PEG chains. Two key properties, resistance to aggregation under saline conditions and resistance to protein binding (studied by adsorption isotherms and microcalorimetry), were thus related to the PEG surface density and the polymer architecture. In the second part of this work, PEG chains were grafted directly by copper-catalyzed cycloaddition of mPEG-N3 onto the pendant alkyne groups. This new strategy was designed to understand the possible contribution of PEG chains grafted at the end of the PLA chain. This PEG-g-PLA library, in addition to comprising PEG-g-PLA with different grafting densities, includes PEG-g-PLA with PEG of different molecular weights (750, 2000 and 5000). The PEG chains are grafted only onto the pendant groups. NPs were produced by different nanoprecipitation methods, including flash nanoprecipitation and a microfluidic method. Several formulation variables, such as polymer concentration and mixing speed, were studied to observe their effect on the structural and surface characteristics of the NPs. Sizes and charge potentials were little affected by the PEG content (% w/w) and the PEG chain length.
TEM images show solid spherical objects, and no polymeric-aggregate-type objects are observed, despite PEG contents comparable to those of the first polymer library. A possible explanation is the absence, in these comb copolymers, of a PEG chain grafted at the end of the main chain. As expected, sizes decrease with the polymer concentration in the organic phase and with decreasing mixing time of the two phases, for the different preparation methods. Finally, the surface density of PEG chains was quantified by proton NMR and XPS and does not depend on the preparation method. In the third part of this work, we studied the role of polymer architecture in the encapsulation and release properties of curcumin. Curcumin was chosen as a model with the aim of developing a delivery platform for active molecules to treat central nervous system diseases involving oxidative stress. Curcumin-loaded NPs show the same size and morphology transition when the PEG content exceeds 15% (w/w). The drug loading, encapsulation efficiency, release kinetics and curcumin diffusion coefficients all depend on the polymer architecture. The NPs show no toxicity and induce no oxidative stress when tested in vitro on a neuronal cell line. In contrast, curcumin-loaded NPs prevent oxidative stress induced in these neuronal cells. The magnitude of this effect is related to the polymer architecture and the NP organization. In summary, this work revealed several interesting properties of comb copolymers and the intimate relationship between polymer architecture and the physicochemical properties of NPs.
Moreover, the results obtained suggest new approaches for the design of polymeric nanocarriers of active molecules.
Abstract:
As the technologies for the fabrication of high-quality microarrays advance rapidly, quantification of microarray data becomes a major task. Gridding is the first step in the analysis of microarray images, locating the subarrays and the individual spots within each subarray. For accurate gridding of high-density microarray images in the presence of contamination and background noise, precise calculation of parameters is essential. This paper presents an accurate, fully automatic gridding method for locating subarrays and individual spots using the intensity projection profile of the most suitable subimage. The method processes the image without any user intervention and, unlike many other commercial and academic packages, does not demand any input parameters. According to the results obtained, the accuracy of our algorithm is between 95% and 100% for microarray images with a coefficient of variation less than two. Experimental results show that the method is capable of gridding microarray images with irregular spots, varying surface intensity distribution, and more than 50% contamination.
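The projection-profile idea can be sketched in a few lines: sum the intensities along one axis and take the valleys of the resulting profile as the gaps between rows (or columns) of spots. This toy version thresholds at the profile mean, an illustrative choice rather than the paper's parameter-free procedure:

```python
import numpy as np

def grid_lines(image, axis=0):
    """Locate grid boundaries from an intensity projection profile.

    Sums intensities across one axis of a grayscale subimage and
    returns the midpoint of each run of below-mean profile values,
    i.e. the gaps between rows or columns of spots.
    """
    profile = image.sum(axis=axis)
    below = profile < profile.mean()
    lines, start = [], None
    for i, b in enumerate(below):
        if b and start is None:
            start = i                       # a valley begins
        elif not b and start is not None:
            lines.append((start + i - 1) // 2)  # midpoint of the valley
            start = None
    if start is not None:
        lines.append((start + len(below) - 1) // 2)
    return lines

# Toy subimage: two bright spot columns separated by a dark gap.
image = np.ones((10, 9))
image[:, 4] = 0.0
cuts = grid_lines(image, axis=0)
```

Applying the same function with `axis=1` would yield the horizontal grid lines between spot rows.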
Abstract:
Upgrading two widely used standard plastics, polypropylene (PP) and high-density polyethylene (HDPE), and generating a variety of useful engineering materials based on their blends have been the main objectives of this study. Upgrading was effected by using nanomodifiers and/or fibrous modifiers. PP and HDPE were selected for modification due to their attractive inherent properties and wide spectrum of use. Blending is an engineered method of producing new materials with tailor-made properties that combine the advantages of both constituents: PP has high tensile and flexural strength, while HDPE acts as an impact modifier in the resultant blend. Hence an optimized blend of PP and HDPE was selected as the matrix material for upgrading. Nanokaolinite clay and E-glass fibre were chosen for modifying the PP/HDPE blend. In the first stage of the work, the mechanical, thermal, morphological, rheological, dynamic mechanical and crystallization characteristics of polymer nanocomposites prepared with the PP/HDPE blend and differently surface-modified nanokaolinite clays were analyzed. In the second stage, the effect of simultaneous inclusion of nanokaolinite clay (both N100A and N100) and short glass fibres was investigated. The presence of nanofiller increased the properties of the hybrid composites to a greater extent than in microcomposites. In the last stage, micromechanical modeling of both nano and hybrid composites was carried out to analyze the behavior of the composites under load-bearing conditions. These theoretical analyses indicate that the polymer-nanoclay interfacial characteristics partially converge to a state of perfect interfacial bonding (Takayanagi model) with an iso-stress (Reuss, IROM) response. In the case of hybrid composites, the experimental data follow the trend of the Halpin-Tsai model.
This implies that the matrix and filler experience varying amounts of strain, and that interfacial adhesion between filler and matrix, and also between the two fillers, plays a vital role in determining the modulus of the hybrid composites. A significant observation from this study is that the higher fibre loading required for efficient reinforcement of polymers can be substantially reduced by the presence of nanofiller together with a much lower fibre content in the composite. Hybrid composites with both nanokaolinite clay and micron-sized E-glass fibre as reinforcements in a PP/HDPE matrix will generate a novel class of high-performance, cost-effective engineering materials.
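For reference, the Halpin-Tsai estimate mentioned above has a compact closed form; the numbers below are illustrative values for a glass-fibre-filled polyolefin, not measurements from this study:

```python
def halpin_tsai(E_m, E_f, V_f, xi):
    """Halpin-Tsai estimate of composite modulus.

    E_m: matrix modulus, E_f: filler modulus, V_f: filler volume
    fraction, xi: shape parameter (e.g. twice the aspect ratio for
    aligned short fibres).
    """
    eta = (E_f / E_m - 1.0) / (E_f / E_m + xi)
    return E_m * (1.0 + xi * eta * V_f) / (1.0 - eta * V_f)

# Illustrative: 1.5 GPa matrix, 72 GPa glass fibre, 10 vol%, xi = 20.
E_c = halpin_tsai(1.5, 72.0, 0.10, 20.0)
```

The prediction always falls between the iso-stress (Reuss) lower bound and the iso-strain (Voigt) upper bound, which is why it is a natural fit when matrix and fillers carry different strains.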
Abstract:
Background: The most common application of imputation is to infer genotypes of a high-density panel of markers on animals that are genotyped for a low-density panel. However, the increase in accuracy of genomic predictions resulting from an increase in the number of markers tends to reach a plateau beyond a certain density. Another application of imputation is to increase the size of the training set with un-genotyped animals. This strategy can be particularly successful when a set of closely related individuals are genotyped. ----- Methods: Imputation on completely un-genotyped dams was performed using known genotypes from the sire of each dam, one offspring and the offspring’s sire. Two methods were applied based on either allele or haplotype frequencies to infer genotypes at ambiguous loci. Results of these methods and of two available software packages were compared. Quality of imputation under different population structures was assessed. The impact of using imputed dams to enlarge training sets on the accuracy of genomic predictions was evaluated for different populations, heritabilities and sizes of training sets. ----- Results: Imputation accuracy ranged from 0.52 to 0.93 depending on the population structure and the method used. The method that used allele frequencies performed better than the method based on haplotype frequencies. Accuracy of imputation was higher for populations with higher levels of linkage disequilibrium and with larger proportions of markers with more extreme allele frequencies. Inclusion of imputed dams in the training set increased the accuracy of genomic predictions. Gains in accuracy ranged from close to zero to 37.14%, depending on the simulated scenario. Generally, the larger the accuracy already obtained with the genotyped training set, the lower the increase in accuracy achieved by adding imputed dams. 
----- Conclusions: Whenever a reference population resembling the family configuration considered here is available, imputation can be used to achieve an extra increase in accuracy of genomic predictions by enlarging the training set with completely un-genotyped dams. This strategy was shown to be particularly useful for populations with lower levels of linkage disequilibrium, for genomic selection on traits with low heritability, and for species or breeds for which the size of the reference population is limited.
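The allele-frequency method of the Methods section can be caricatured as follows: once the available genotypes (dam's sire, one offspring, and the offspring's sire) have narrowed the dam's possible genotypes at a locus, the remaining ambiguity is resolved by picking the candidate most probable under Hardy-Weinberg proportions. The genotype coding and names below are illustrative, not the paper's implementation:

```python
def impute_ambiguous(candidates, p):
    """Pick the most probable dam genotype among the candidates left
    after pedigree rules, using the population frequency p of allele A
    and Hardy-Weinberg proportions. Genotypes are coded as the count
    of allele A (0, 1 or 2).
    """
    hw = {2: p * p, 1: 2 * p * (1 - p), 0: (1 - p) * (1 - p)}
    return max(candidates, key=lambda g: hw[g])

# If the pedigree narrows the dam to carrying at least one copy of A
# (genotypes 1 or 2) and A is common (p = 0.8):
g = impute_ambiguous([1, 2], 0.8)
```

This also illustrates why imputation accuracy was higher for markers with more extreme allele frequencies: the Hardy-Weinberg probabilities of the candidate genotypes are then further apart.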
Abstract:
This paper describes a general, trainable architecture for object detection that has previously been applied to face and people detection, with a new application to car detection in static images. Our technique is a learning-based approach that uses a set of labeled training data from which an implicit model of an object class -- here, cars -- is learned. Instead of pixel representations, which may be noisy and therefore may not provide a compact representation for learning, our training images are transformed from pixel space to that of Haar wavelets, which respond to local, oriented, multiscale intensity differences. These feature vectors are then used to train a support vector machine classifier. The detection of cars in images is an important step in applications such as traffic monitoring, driver assistance systems, and surveillance, among others. We show several examples of car detection on out-of-sample images and present an ROC curve that highlights the performance of our system.
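A toy version of this pipeline, with two hand-written Haar-like features standing in for the full multiscale wavelet dictionary and scikit-learn's SVC in place of the authors' classifier, might look like:

```python
import numpy as np
from sklearn.svm import SVC

def haar_features(patch):
    """Two Haar-like responses on a grayscale patch: left-right and
    top-bottom intensity differences (a stand-in for the full
    multiscale, oriented wavelet feature vector)."""
    h, w = patch.shape
    lr = patch[:, : w // 2].sum() - patch[:, w // 2 :].sum()
    tb = patch[: h // 2].sum() - patch[h // 2 :].sum()
    return np.array([lr, tb])

# Toy labeled data: the "object" class is marked by a bright left half.
rng = np.random.default_rng(2)
X, y = [], []
for _ in range(40):
    patch = rng.random((8, 8)) * 0.1
    label = int(rng.integers(0, 2))
    if label:
        patch[:, :4] += 0.8
    X.append(haar_features(patch))
    y.append(label)
clf = SVC(kernel="linear").fit(np.array(X), y)
```

Detection over a full image would then slide such a classifier across windows at several positions and scales, as in the earlier face and people applications.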
Abstract:
Vision is probably our most dominant sense, from which we derive most of our information about the world around us. Through vision we can perceive what things look like, where they are, and how they move. From the images perceived by our visual system we can extract features such as colour, texture and shape, and thanks to this information we are able to recognize objects even when they are observed under completely different conditions. For example, we can recognize the same object when it is viewed from different viewpoints, distances, lighting conditions, and so on. Computer Vision attempts to emulate the human visual system by means of an image-capture system, a computer, and a set of programs. The goal is none other than to develop a system that can understand an image in a way similar to a person. This thesis focuses on texture analysis for the purpose of surface recognition. The main motivation is to solve the problem of classifying textured surfaces captured under different conditions, such as camera distance or illumination direction, thereby reducing the classification errors caused by these changes in capture conditions. This work presents in detail a texture-recognition system that classifies images of different surfaces captured under different conditions. The proposed system is based on a 3D model of the surface (including colour and shape information) obtained with the technique known as 4-Source Colour Photometric Stereo (CPS). This information is then used by a texture-prediction method to generate new 2D images of the textures under new conditions.
These generated virtual images form the basis of our recognition system, since they are used as the reference models for our texture classifier. The proposed recognition system combines co-occurrence matrices, for texture-feature extraction, with a nearest-neighbour classifier. This classifier also allows us to approximate the illumination direction present in the images used to test the recognition system; that is, we can predict the illumination angle under which the test images were captured. The results obtained in the various experiments demonstrate the viability of the texture-prediction system as well as of the recognition system.
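The feature-plus-classifier combination can be sketched minimally with two co-occurrence statistics and a 1-NN decision (the actual system uses richer features and additionally estimates the illumination angle):

```python
import numpy as np

def glcm_features(image, levels):
    """Contrast and energy of the grey-level co-occurrence matrix for
    a horizontal (0, 1) offset. `image` holds integer grey levels in
    [0, levels)."""
    a, b = image[:, :-1].ravel(), image[:, 1:].ravel()
    m = np.zeros((levels, levels))
    np.add.at(m, (a, b), 1)          # count co-occurring level pairs
    m /= m.sum()                     # normalise to a joint distribution
    i, j = np.indices(m.shape)
    contrast = ((i - j) ** 2 * m).sum()
    energy = (m ** 2).sum()
    return np.array([contrast, energy])

def nearest_neighbour(query, model_feats, model_labels):
    """1-NN classification against the predicted reference models."""
    d = [np.linalg.norm(query - f) for f in model_feats]
    return model_labels[int(np.argmin(d))]

# Two synthetic reference textures: flat versus checkerboard.
flat = np.zeros((8, 8), dtype=int)
checker = np.indices((8, 8)).sum(axis=0) % 2
models = [glcm_features(flat, 2), glcm_features(checker, 2)]
label = nearest_neighbour(glcm_features(checker, 2),
                          models, ["flat", "checker"])
```

In the thesis, the reference features would come from the CPS-predicted virtual images rendered under many candidate illumination angles, so the nearest model simultaneously identifies the surface and its illumination direction.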