974 results for Limited Sampling Strategy
Abstract:
We propose a restoration algorithm for band-limited images that considers irregular (perturbed) sampling, denoising, and deconvolution. We explore the application of a family of regularizers that allow control of the spectral behavior of the solution, combined with the irregular-to-regular sampling algorithms proposed by H.G. Feichtinger, K. Gröchenig, M. Rauth and T. Strohmer. Moreover, the constraints given by the image acquisition model are incorporated as a set of local constraints, and the analysis of these constraints leads to an early stopping rule meant to improve the speed of the algorithm. Finally, we present experiments focused on the restoration of satellite images, where micro-vibrations are responsible for the type of distortions considered here. We compare results of the proposed method with previous methods and show an extension to zoom.
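The irregular-to-regular sampling step can be illustrated with a minimal 1-D Papoulis-Gerchberg-type iteration, a simpler relative of the frame-based algorithms cited above (not the authors' exact method): alternate between enforcing the known irregular samples and projecting onto the band-limited space. All signal parameters below are invented for illustration.

```python
import numpy as np

def bandlimit(x, keep):
    # Project onto the band-limited space: zero out high frequencies.
    X = np.fft.fft(x)
    X[keep:-keep] = 0
    return np.fft.ifft(X).real

def pg_reconstruct(samples, idx, n, keep, iters=1000):
    """Papoulis-Gerchberg iteration: alternate between enforcing the
    known irregular samples and projecting onto the band-limited space."""
    x = np.zeros(n)
    for _ in range(iters):
        x[idx] = samples          # data-consistency constraint
        x = bandlimit(x, keep)    # band-limitation constraint
    return x

# Band-limited test signal observed at 40 irregular positions out of 128.
n, keep = 128, 4
t = np.arange(n)
truth = np.sin(2 * np.pi * 2 * t / n) + 0.5 * np.cos(2 * np.pi * 3 * t / n)
rng = np.random.default_rng(0)
idx = np.sort(rng.choice(n, size=40, replace=False))
rec = pg_reconstruct(truth[idx], idx, n, keep)
print(np.max(np.abs(rec - truth)))  # small reconstruction error
```

With enough irregular samples relative to the bandwidth, the iteration converges to the unique band-limited interpolant.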
Abstract:
This cross-sectional study aimed to analyze adherence to drug and non-drug treatments in 17 Family Health Strategy units. A total of 423 patients with type 2 diabetes mellitus were selected through stratified random sampling in Family Health Strategy units of a city in the state of Minas Gerais, Brazil, in 2010. The results showed that the prevalence of adherence to drug therapy was higher than 60% in all 17 units investigated; for physical activity, adherence was higher than 60% in 58.8% of the units; and for the diet plan, there was no adherence in 52.9% of the units. We therefore concluded that adherence to drug therapy was high in most units, the practice of physical activity was heterogeneous, and adherence to the diet plan was low in all units. We recommend strengthening institutional guidelines and educational strategies, in line with SUS guidelines, so that professionals may face the challenges imposed by the lack of adherence.
Abstract:
The population-genetic consequences of monogamy and male philopatry (a rare breeding system in mammals) were investigated using microsatellite markers in the semisocial and anthropophilic shrew Crocidura russula. A hierarchical sampling design over a 16-km geographical transect revealed large genetic diversity (h = 0.813) with significant differentiation among subpopulations (F-ST = 5-6%), which suggests an exchange of 4.4 migrants per generation. Demic effective-size estimates were very high, due both to this limited gene inflow and to the inner structure of subpopulations. These were made up of 13-20 smaller units (breeding groups), comprising an estimated four breeding pairs each. Members of the same breeding groups displayed significant coancestries (F-LS = 9-10%), essentially due to strong male kinship: syntopic males were on average related at the half-sib level. Female dispersal among breeding groups was incomplete (~39%) and insufficient to prevent inbreeding. From our results, the breeding strategy of C. russula seems less efficient than classical mammalian systems (polygyny and male dispersal) in disentangling coancestry from inbreeding, but more so in retaining genetic variance.
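The step from F-ST to migrant numbers is the standard Wright island-model approximation F_ST ≈ 1/(4Nm + 1), solved for Nm. A quick check confirms that the reported 5-6% differentiation corresponds to roughly 4.4 migrants per generation:

```python
def migrants_per_generation(fst):
    """Wright's island-model approximation F_ST ~ 1/(4*N*m + 1),
    solved for the effective number of migrants N*m per generation."""
    return (1.0 - fst) / (4.0 * fst)

for fst in (0.05, 0.054, 0.06):
    print(f"F_ST = {fst:.3f}  ->  Nm = {migrants_per_generation(fst):.2f}")
# F_ST between 0.05 and 0.06 brackets the reported ~4.4 migrants.
```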
Abstract:
It is well known nowadays that soil variability can influence crop yields. Therefore, to determine specific areas of soil management, we studied the Pearson and spatial correlations of rice grain yield with organic matter content and pH of an Oxisol (Typic Acrustox) under no-tillage, in the 2009/10 growing season, in Selvíria, State of Mato Grosso do Sul, in the Brazilian Cerrado (longitude 51º24'21'' W, latitude 20º20'56'' S). The upland rice cultivar IAC 202 was used as test plant. A geostatistical grid was installed for soil and plant data collection, with 120 sampling points in an area of 3.0 ha with a homogeneous slope of 0.055 m m-1. Rice grain yield and the soil properties organic matter content, pH, potential acidity, and aluminum content were analyzed in the 0-0.10 and 0.10-0.20 m soil layers. Spatially, two specific agricultural management zones were discriminated, differing in organic matter content and rice grain yield; with fertilization at variable rates in the second zone, a substantial increase in agricultural productivity can be obtained. The organic matter content was confirmed as a good indicator of soil quality when spatially correlated with rice grain yield.
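The Pearson side of the analysis is a standard linear correlation between paired point measurements. A minimal sketch with invented values (the study's actual 120-point dataset is not reproduced here):

```python
import numpy as np

# Hypothetical paired measurements at a few sampling points:
# organic matter content (g/dm^3) and rice grain yield (kg/ha).
om     = np.array([18.0, 22.5, 20.1, 25.3, 19.4, 23.8, 21.0, 24.2])
yield_ = np.array([3100, 3900, 3400, 4300, 3200, 4100, 3600, 4200])

# Pearson linear correlation coefficient between the two properties.
r = np.corrcoef(om, yield_)[0, 1]
print(f"Pearson r = {r:.3f}")
```

A value of r near 1 would support organic matter as an indicator of yield; the spatial (geostatistical) correlation additionally accounts for where the points lie.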
Abstract:
Captan and folpet are two fungicides widely used in agriculture, but biomonitoring data are mostly limited to measurements of captan metabolite concentrations in spot urine samples of workers, which complicates interpretation of the results in terms of internal dose estimation, daily variations according to the tasks performed, and the most plausible routes of exposure. This study aimed to perform repeated biological measurements of exposure to captan and folpet in field workers (i) to better assess the internal dose along with the main routes of entry according to tasks and (ii) to establish the most appropriate sampling and analysis strategies. The detailed urinary excretion time courses of specific and non-specific biomarkers of exposure to captan and folpet were established in tree farmers (n = 2) and grape growers (n = 3) over a typical workweek (seven consecutive days), including spraying and harvest activities. The impact of how urinary measurements are expressed [excretion rate values, adjusted or not for creatinine, or cumulative amounts over given time periods (8, 12, and 24 h)] was evaluated. Absorbed doses and the main routes of entry were then estimated from the 24-h cumulative urinary amounts through the use of a kinetic model. The time courses showed that exposure levels were higher during spraying than during harvest activities. Model simulations also suggest limited absorption in the studied workers and exposure mostly through the dermal route. They further pointed out the advantage of expressing biomarker values as body-weight-adjusted amounts in repeated 24-h urine collections, as compared to concentrations or excretion rates in spot samples, without the need for creatinine corrections.
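The recommended metric, body-weight-adjusted amounts in 24-h collections, reduces to simple arithmetic over the voids of a collection day. A sketch with hypothetical concentrations and volumes (not data from the study):

```python
def bw_adjusted_24h_amount(void_conc_ug_per_l, void_vol_l, body_weight_kg):
    """Sum biomarker amounts over all urine voids in a 24-h collection
    (concentration x volume for each void), then normalize by body weight."""
    total_ug = sum(c * v for c, v in zip(void_conc_ug_per_l, void_vol_l))
    return total_ug / body_weight_kg  # ug per kg body weight per 24 h

# Hypothetical day: four voids with measured concentrations and volumes.
conc = [12.0, 8.5, 15.2, 6.1]    # biomarker concentration, ug/L
vol  = [0.35, 0.50, 0.28, 0.42]  # void volume, L
print(bw_adjusted_24h_amount(conc, vol, 70.0))
```

Unlike a spot-sample concentration, this quantity is insensitive to urine dilution, which is why no creatinine correction is needed.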
Abstract:
Invasive aspergillosis (IA) is a life-threatening infection due to Aspergillus fumigatus and other Aspergillus spp. Drugs targeting the fungal cell membrane (triazoles, amphotericin B) or cell wall (echinocandins) are currently the sole therapeutic options against IA. Their limited efficacy and the emergence of resistance warrant the identification of new antifungal targets. Histone deacetylases (HDACs) are enzymes responsible for the deacetylation of lysine residues of core histones, thus controlling chromatin remodeling and transcriptional activation. HDACs also control the acetylation and activation status of multiple non-histone proteins, including the heat shock protein 90 (Hsp90), an essential molecular chaperone for fungal virulence and antifungal resistance. This review provides an overview of the different HDACs in Aspergillus spp. as well as their respective contributions to total HDAC activity, fungal growth, stress responses, and virulence. The potential of HDAC inhibitors, currently under development for cancer therapy, as novel alternative antifungal agents against IA is discussed.
Abstract:
Because data on rare species are usually sparse, it is important to have efficient ways to sample additional data. Traditional sampling approaches are of limited value for rare species because a very large proportion of randomly chosen sampling sites are unlikely to shelter the species. For these species, spatial predictions from niche-based distribution models can be used to stratify the sampling and increase sampling efficiency. The new data sampled are then used to improve the initial model. Applying this approach repeatedly is an adaptive process that may increase the number of new occurrences found. We illustrate the approach with a case study of a rare and endangered plant species in Switzerland and a simulation experiment. Our field survey confirmed that the method helps in the discovery of new populations of the target species in remote areas where the predicted habitat suitability is high. In our simulations the model-based approach provided a significant improvement (by a factor of 1.8 to 4 times, depending on the measure) over simple random sampling. In terms of cost, this approach may save up to 70% of the time spent in the field.
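The gain from model-based stratified sampling over random sampling can be sketched in a toy simulation. The suitability distribution, detection rule, and noise level below are invented assumptions, not the study's niche models:

```python
import numpy as np

rng = np.random.default_rng(42)
n_cells = 10_000
# Invented landscape: habitat suitability skewed toward low values,
# so the species is rare and concentrated where suitability is high.
suitability = rng.beta(0.5, 5.0, n_cells)
presence = rng.random(n_cells) < suitability * 0.5

budget = 200
# Strategy 1: simple random sampling of 200 cells.
random_hits = int(presence[rng.choice(n_cells, budget, replace=False)].sum())
# Strategy 2: model-based sampling of the 200 highest-ranked cells,
# assuming the model tracks true suitability with some noise.
predicted = suitability + rng.normal(0.0, 0.05, n_cells)
model_hits = int(presence[np.argsort(predicted)[-budget:]].sum())
print(random_hits, model_hits)
```

Visiting only the cells the model ranks highest finds many more occurrences per site visited, which is the mechanism behind the reported 1.8x to 4x improvement.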
Abstract:
Modeling the mechanisms that determine how humans and other agents choose among different behavioral and cognitive processes (be they strategies, routines, actions, or operators) represents a paramount theoretical stumbling block across disciplines, ranging from the cognitive and decision sciences to economics, biology, and machine learning. Using the cognitive and decision sciences as a case study, we provide an introduction to what is also known as the strategy selection problem. First, we explain why many researchers assume humans and other animals to come equipped with a repertoire of behavioral and cognitive processes. Second, we expose three descriptive, predictive, and prescriptive challenges that are common to all disciplines that aim to model the choice among these processes. Third, we give an overview of different approaches to strategy selection. These include cost-benefit, ecological, learning, memory, unified, connectionist, sequential sampling, and maximization approaches. We conclude by pointing to opportunities for future research and by stressing that the selection problem is far from being resolved.
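Of the approaches listed, sequential sampling is the easiest to sketch: evidence for one process over another is accumulated until a decision threshold is crossed. A toy accumulator with illustrative parameters only (not a model from the review):

```python
import random

def sequential_sample(drift, threshold=5.0, noise=1.0, seed=0):
    """Toy sequential-sampling model: accumulate noisy evidence for
    strategy A (positive) vs. strategy B (negative) until a decision
    threshold is crossed; return the choice and the samples taken."""
    rng = random.Random(seed)
    evidence, steps = 0.0, 0
    while abs(evidence) < threshold:
        evidence += drift + rng.gauss(0.0, noise)
        steps += 1
    return ("A" if evidence > 0 else "B"), steps

choice, steps = sequential_sample(drift=0.4)
print(choice, steps)
```

The drift encodes how strongly the environment favors one strategy; weaker drifts yield slower, more error-prone selection, which is the trade-off such models formalize.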
Abstract:
We study the impact of sampling theorems on the fidelity of sparse image reconstruction on the sphere. We discuss how a reduction in the number of samples required to represent all information content of a band-limited signal acts to improve the fidelity of sparse image reconstruction, through both the dimensionality and sparsity of signals. To demonstrate this result, we consider a simple inpainting problem on the sphere and consider images sparse in the magnitude of their gradient. We develop a framework for total variation inpainting on the sphere, including fast methods to render the inpainting problem computationally feasible at high resolution. Recently a new sampling theorem on the sphere was developed, reducing the required number of samples by a factor of two for equiangular sampling schemes. Through numerical simulations, we verify the enhanced fidelity of sparse image reconstruction due to the more efficient sampling of the sphere provided by the new sampling theorem.
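The factor-of-two reduction can be made concrete by comparing approximate sample counts for a band limit L, using the commonly quoted ~4L^2 count for the classical Driscoll-Healy equiangular theorem and ~2L^2 for the newer McEwen-Wiaux theorem (exact counts differ by lower-order terms):

```python
def sample_counts(L):
    """Approximate numbers of samples associated with a band-limit-L
    signal on the sphere: harmonic dimensionality L^2, the classical
    Driscoll-Healy equiangular theorem ~4L^2, and the newer
    McEwen-Wiaux equiangular theorem ~2L^2."""
    return {"dimensionality": L * L,
            "driscoll_healy": 4 * L * L,
            "mcewen_wiaux": 2 * L * L}

print(sample_counts(128))
```

Fewer samples for the same information content means a lower-dimensional inverse problem, which is the dimensionality effect the abstract links to improved sparse reconstruction fidelity.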
Abstract:
The establishment of legislative rules on explosives in the eighties reduced the illicit use of military and civilian explosives. However, bomb-makers rapidly took advantage of easily accessible substances intended for licit uses to produce their own explosives. This change in strategy has given rise to an increase in improvised explosive charges, further assisted by the ease of implementation of the recipes, which are widely available through open sources. While the nature of the explosive charges has evolved, the instrumental methods currently used in routine casework, although more sensitive than before, have limited discriminating power and mostly allow determination of the chemical nature of the substance. Isotope ratio mass spectrometry (IRMS) has been applied to a wide range of forensic materials. Conclusions drawn from the majority of the studies stress its high power of discrimination. Preliminary studies conducted so far on the isotopic analysis of intact explosives (pre-blast) have shown that samples with the same chemical composition but coming from different sources can be differentiated. The measurement of stable isotope ratios therefore appears to be a new and remarkable analytical tool for discriminating substances or identifying a substance with a definite source. However, much research is still needed to assess the validity of the results in order to use them either in an operational context or in court. Through the isotopic study of black powders and ammonium nitrates, this research aims to evaluate the contribution of isotope ratio mass spectrometry to the investigation of explosives, both from a pre-blast and from a post-blast approach. More specifically, the goal of the research is to provide the additional elements necessary for a valid interpretation of the results when used in explosives investigation.
This work includes a fundamental study of the variability of the isotopic profile of black powder and ammonium nitrate in both space and time. On the one hand, the inter-variability between manufacturers and, particularly, the intra-variability within a manufacturer were studied. On the other hand, the stability of the isotopic profile over time was evaluated through the aging of these substances exposed to different environmental conditions. The second part of this project considers the applicability of this high-precision technology to traces and residues of explosives, taking into account the characteristics specific to the field, including their sampling, a probable isotopic fractionation during the explosion, and interferences with the matrix of the site.
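Stable isotope ratios are conventionally reported in delta notation: the per-mil deviation of the sample ratio from a reference standard. A small worked example using the widely cited VPDB carbon reference ratio (the sample ratio itself is invented):

```python
def delta_permil(r_sample, r_standard):
    """Conventional delta notation for stable isotope ratios:
    the per-mil deviation of a sample ratio from a reference standard,
    delta = (R_sample / R_standard - 1) * 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Carbon isotope ratio (13C/12C) against the VPDB standard.
R_VPDB = 0.0111802  # commonly used 13C/12C reference value
print(f"{delta_permil(0.0108958, R_VPDB):.1f} per mil")
```

Comparing such delta values across elements (C, N, O, ...) is what allows two chemically identical powders from different sources to be distinguished.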
Abstract:
A series of nitrogen fertilization trials was carried out in different counties of inland Catalonia. Across these trials, three different methods considered promising for improving nitrogen fertilization were tested: the nitrogen balance method, the mineral nitrogen method, and the method based on the nitrate content of basal stem sap (CNSBT). The soils where the trials were conducted posed no particular limitation for wheat cultivation: they were deep, well drained, non-saline, and of medium texture; the only exception was one trial on a moderately deep soil. Also in terms of chemical fertility, the soils should therefore be considered of medium-to-high productive potential. The nitrogen balance method proved very promising for deciding whether, and in what amount, top-dressing fertilization is needed under the conditions studied. The mineral nitrogen method was also effective in this regard, whereas the CNSBT method turned out not to be applicable under the conditions tested, where in many cases water is also a limiting factor. Throughout the trials, a series of factors that prevent fine-tuning of nitrogen fertilization were identified. These include poor estimation of the target yield, the difficulty of predicting the N available from organic fertilizers, difficulties in sampling for nitric nitrogen, and the critical effect of erratic water availability, which greatly complicates the nitrogen fertilization strategy to adopt.
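The nitrogen balance method tested above can be sketched as crop demand minus the nitrogen the soil is expected to supply. The terms, coefficients, and values below are illustrative assumptions, not the trial protocol:

```python
def n_topdressing_rate(target_yield_t_ha, n_uptake_kg_per_t,
                       soil_mineral_n, expected_mineralization,
                       n_from_organic, efficiency=0.7):
    """Simplified nitrogen balance: crop N demand minus expected soil
    supply, divided by fertilizer use efficiency (all in kg N/ha).
    A non-positive balance means no top-dressing is needed."""
    demand = target_yield_t_ha * n_uptake_kg_per_t
    supply = soil_mineral_n + expected_mineralization + n_from_organic
    return max(0.0, (demand - supply) / efficiency)

# Hypothetical wheat field: 5 t/ha target at 30 kg N per tonne of grain.
print(n_topdressing_rate(5.0, 30.0, soil_mineral_n=60.0,
                         expected_mineralization=25.0, n_from_organic=20.0))
```

The abstract's caveats map directly onto the inputs: a wrong target yield inflates `demand`, and the hard-to-predict organic-fertilizer N and mineral-N sampling errors distort `supply`.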
Abstract:
This Perspective discusses the pertinence of variable dosing regimens with anti-vascular endothelial growth factor (VEGF) for neovascular age-related macular degeneration (nAMD) with regard to real-life requirements. After the initial pivotal trials of anti-VEGF therapy, the variable dosing regimens pro re nata (PRN), Treat-and-Extend, and Observe-and-Plan, a recently introduced regimen, aimed to optimize the anti-VEGF treatment strategy for nAMD. The PRN regimen showed good visual results but requires monthly monitoring visits and can therefore be difficult to implement. Moreover, application of the PRN regimen revealed inferior results in real-life circumstances due to problems with resource allocation. The Treat-and-Extend regimen uses an interval-based approach and has become widely accepted for its ease of preplanning and the reduced number of office visits required. The parallel development of the Observe-and-Plan regimen demonstrated that the future need for retreatment (interval) could be reliably predicted. Studies investigating the Observe-and-Plan regimen also showed that it could be used in individualized fixed treatment plans, allowing for a dramatically reduced clinical burden and good outcomes, thus meeting real-life requirements. This progressive development of variable dosing regimens is a response to the real-life circumstances of limited human, technical, and financial resources. It includes an individualized treatment approach, optimization of the number of retreatments, a minimal number of monitoring visits, and ease of planning ahead. The Observe-and-Plan regimen achieves this goal with good functional results. Translational Relevance: This Perspective reviews the process from the pivotal clinical trials to the development of treatment regimens adjusted to real-life requirements.
The article discusses this translational process, which, although not the classical interpretation of translation from fundamental to clinical research but rather a subsequent process after the pivotal clinical trials, represents an important translational step from the clinical proof of efficacy to optimization in terms of patients' and clinics' needs. The related scientific procedure includes exploration of the concept, evaluation of safety, and finally proof of efficacy.
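The interval logic shared by Treat-and-Extend and Observe-and-Plan style regimens can be sketched as a simple bounded adjustment rule. The bounds and step below are illustrative values, not a clinical protocol:

```python
def next_interval(current_weeks, disease_active,
                  min_weeks=4, max_weeks=12, step=2):
    """Toy interval-based retreatment rule: shorten the injection
    interval when disease activity is observed, extend it when the
    disease is quiet, within fixed bounds (all values illustrative)."""
    if disease_active:
        return max(min_weeks, current_weeks - step)
    return min(max_weeks, current_weeks + step)

# Hypothetical course over five visits; activity seen only at visit 3.
course, interval = [], 4
for active in (False, False, True, False, False):
    interval = next_interval(interval, active)
    course.append(interval)
print(course)  # interval in weeks after each visit
```

Observe-and-Plan goes one step further by predicting the stable interval up front and planning a fixed series of visits around it, which is what reduces the monitoring burden.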
Strategic alliances as an international entry strategy: Finnish cleantech SMEs and the Indian market
Abstract:
The demand for environmental technologies, also called cleantech, is growing globally, but the need is especially high in emerging markets such as India, where the rising economy and rapid industrialisation have led to increasing energy needs and environmental degradation. The market also holds great potential for the Finnish cleantech cluster, which represents advanced expertise in several fields of environmental technologies. However, most of the Finnish companies in the field are SMEs that face challenges in their internationalisation due to their limited resources. The objective of this study was to assess whether strategic alliances could be an efficient entry strategy for Finnish cleantech SMEs entering the Indian market. This was done by studying the key factors influencing the international entry mode decision of Finnish cleantech SMEs, the major factors affecting their entry into the Indian market, and how Finnish cleantech SMEs use strategic alliances in their internationalisation process. The study was realised as a qualitative multi-case study through theme interviews of Finnish cleantech SME representatives. The results indicated that Finnish cleantech SMEs prefer to enter international markets through non-equity and collaborative modes of entry. These entry modes are chosen because of the small size and limited resources of the companies, but also because they want to protect their innovative technologies from property rights violations. India is an attractive market for Finnish cleantech SMEs mainly because of its size and growth, but insufficient environmental regulation and high import tariffs have hindered entry to the market. Finnish cleantech SMEs commonly use strategic alliances in their internationalisation process, but the use is rather one-sided. Most of the strategic alliances formed are low-commitment international contractual agreements in sales and distribution.
Alliance partner selection receives less attention. In the future, providing Finnish cleantech SMEs with international experience and training could help in diversifying the use of strategic alliances and increase their benefits to SME internationalisation.
Abstract:
The quantity of data generated by large-scale studies of the protein-protein interaction network exceeds our capacity to analyze it and understand its meaning: on the one hand because of its complexity and volume, and on the other because of the quality of the datasets produced, which appear riddled with false positives and false negatives. This dissertation describes a new method for high-throughput screening of physical protein-protein interactions in Saccharomyces cerevisiae, the protein-fragment complementation assay (PCA). The approach is carried out in intact cells under the proteins' native conditions: under their endogenous promoters and with their post-translational modifications and subcellular localizations respected. A biological application of this method demonstrated the capacity of this reporter system to address questions of cellular adaptation to stresses such as nutrient starvation and drug treatment. In the first chapter of this dissertation, we present a screen of pairwise interactions among the proteins encoded by the roughly 6,000 open reading frames of Saccharomyces cerevisiae. We identified 2,770 interactions among 1,124 proteins. We estimated the quality of our screen by comparing it to other interaction databases. We found that the majority of our interactions are new, while the overlap with data from other methods is large. We took this opportunity to characterize the factors that determine the detection of an interaction by PCA. We observed that our approach is subject to a steric constraint arising from the need for the reporter fragments to be able to meet in cellular space in order to reconstitute the observable activity of the interaction probe.
Integrating our results with knowledge of the dynamics of genetic regulation and protein modifications will lead us toward a better understanding of the complex cellular processes orchestrated at the molecular and structural levels in living cells. We applied our method to the dynamic rearrangements operating during cellular adaptation to stresses such as nutrient starvation and drug treatment; this investigation is detailed in our second chapter. In this way we determined that the balance between the phosphorylated and dephosphorylated forms of the Saccharomyces cerevisiae arginine methyltransferase Hmt1 regulates both its assembly into a hexamer and its enzymatic activity. Hmt1 activity directly impacts cell-cycle progression during stress, stabilizing CLB2 transcripts and allowing the synthesis of Cln3p. We used our screen to determine the regulators of Hmt1 phosphorylation in the context of treatment with rapamycin, an inhibitor of the target-of-rapamycin (TOR) kinase. We identified the catalytic subunit of the PP2A phosphatase, Pph22, activated by inhibition of the TOR kinase, and the kinase Dbf2, activated during the cell's entry into mitosis, as the phosphatase and kinase responsible for the modification of Hmt1 and for its regulatory functions in the cell cycle. This approach can be generalized to identify and mechanistically link genes, including those with no known function, to any cellular process, such as the mechanisms regulating mRNA.
Abstract:
Supervised learning of large-scale hierarchical networks is currently enjoying tremendous success. Despite this excitement, unsupervised learning remains, according to many researchers, a key element of Artificial Intelligence, where agents must learn from a potentially limited amount of data. This thesis follows that line of thought and addresses various research topics related to the density estimation problem through Boltzmann machines (BMs), probabilistic graphical models at the heart of deep learning. Our contributions touch on the areas of sampling, partition function estimation, optimization, and the learning of invariant representations. The thesis begins by presenting a new adaptive sampling algorithm, which automatically adjusts the temperature of the Markov chains under simulation in order to maintain a high convergence speed throughout learning. When used in the context of stochastic maximum likelihood (SML) learning, our algorithm yields increased robustness to the choice of learning rate as well as a better convergence speed. Our results are presented in the domain of BMs, but the method is general and applicable to the learning of any probabilistic model that relies on Markov chain sampling. While the gradient of the maximum likelihood can be approximated by sampling, evaluating the log-likelihood requires an estimate of the partition function. Unlike traditional approaches that treat a given model as a black box, we instead propose to exploit the dynamics of learning by estimating the successive changes in log-partition incurred at each parameter update.
The estimation problem is reformulated as an inference problem similar to Kalman filtering, but on a two-dimensional graph whose dimensions correspond to the time axis and the temperature parameter. On the theme of optimization, we also present an algorithm for efficiently applying the natural gradient to Boltzmann machines with thousands of units. Until now, its adoption was limited by its high computational cost and memory requirements. Our algorithm, Metric-Free Natural Gradient (MFNG), avoids explicitly computing the Fisher information matrix (and its inverse) by exploiting a linear solver combined with an efficient matrix-vector product. The algorithm is promising: in terms of the number of function evaluations, MFNG converges faster than SML. Its implementation unfortunately remains inefficient in wall-clock time. This work also explores the mechanisms underlying the learning of invariant representations. To this end, we use the family of "spike & slab" restricted Boltzmann machines (ssRBM), which we modify to model binary and sparse distributions. The binary latent variables of the ssRBM can be made invariant to a vector subspace by associating with each of them a vector of continuous latent variables (called "slabs"). This translates into increased invariance in the representation and a better classification rate when few labeled data are available. We close this thesis with an ambitious topic: learning representations that can separate the factors of variation present in the input signal. We propose a solution based on a bilinear ssRBM (with two groups of latent factors) and formulate the problem as one of "pooling" in complementary vector subspaces.
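The adaptive tempering described above builds on the block-Gibbs kernel of restricted Boltzmann machines, with an inverse-temperature parameter that the adaptive scheme tunes. A minimal sketch of that kernel for a tiny RBM with random weights (not the thesis's algorithm; sizes and parameters are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny restricted Boltzmann machine: 6 visible, 3 hidden binary units.
W = rng.normal(0.0, 0.1, (6, 3))  # visible-hidden weights
b, c = np.zeros(6), np.zeros(3)   # visible and hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, beta=1.0):
    """One block-Gibbs sweep at inverse temperature beta. A beta < 1
    flattens the distribution, which is what tempered chains exploit
    to mix faster between modes."""
    h = (rng.random(3) < sigmoid(beta * (v @ W + c))).astype(float)
    v = (rng.random(6) < sigmoid(beta * (h @ W.T + b))).astype(float)
    return v, h

v = rng.integers(0, 2, 6).astype(float)
for _ in range(100):
    v, h = gibbs_step(v, beta=0.8)
print(v)
```

An adaptive scheme in this spirit would monitor the chain's mixing during SML training and move `beta` up or down to keep convergence fast as the learned distribution sharpens.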