931 results for many-objective problems
Abstract:
Doctoral thesis, Universidade de Brasília, Instituto de Geociências, Pós-Graduação em Geociências Aplicadas, 2016.
Abstract:
Metaheuristics are widely used in discrete optimization. They make it possible to obtain a good-quality solution in reasonable time for problems that are large, complex, and hard to solve. Metaheuristics often have many parameters that the user must tune manually for a given problem. The goal of an adaptive metaheuristic is to let the method adjust some of these parameters automatically, based on the instance being solved. By drawing on prior knowledge of the problem and on notions from machine learning and related fields, an adaptive metaheuristic yields a more general and more automatic way of solving problems. The global optimization of mining complexes aims to determine the movements of materials in the mines and the processing streams so as to maximize the economic value of the system. Because of the large number of integer variables in the model and the presence of complex and non-linear constraints, it is often prohibitive to solve these models with the optimizers available in industry. Metaheuristics are therefore frequently used to optimize mining complexes. This thesis improves a simulated annealing procedure developed by Goodfellow & Dimitrakopoulos (2016) for the stochastic optimization of mining complexes. The authors' method requires many parameters to operate; one of them governs how the simulated annealing method searches the local neighborhood of solutions. This thesis implements an adaptive neighborhood-search method to improve solution quality. Numerical results show an increase of up to 10% in the value of the economic objective function.
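As an illustration of the kind of mechanism this abstract describes, the sketch below shows one common way to make neighborhood search adaptive in simulated annealing: each neighborhood operator keeps a weight that is reinforced when its moves are accepted, and operators are sampled in proportion to those weights. This is a minimal generic sketch, not the authors' actual method; the operator set, reward rule, and all parameters are illustrative assumptions.

```python
import math
import random

def adaptive_simulated_annealing(initial, objective, neighborhoods,
                                 t0=1000.0, cooling=0.995, iters=10000,
                                 reaction=0.2):
    """Simulated annealing (maximization) with adaptive neighborhood choice.

    `neighborhoods` is a list of functions mapping a solution to a
    perturbed copy. Each operator keeps a weight that is reinforced
    whenever a move it produced is accepted, so promising neighborhoods
    are sampled more often (roulette-wheel selection).
    """
    current = best = initial
    f_cur = f_best = objective(initial)
    weights = [1.0] * len(neighborhoods)
    t = t0
    for _ in range(iters):
        # Pick a neighborhood with probability proportional to its weight.
        k = random.choices(range(len(neighborhoods)), weights=weights)[0]
        candidate = neighborhoods[k](current)
        f_cand = objective(candidate)
        # Metropolis acceptance rule for a maximization problem.
        accepted = (f_cand >= f_cur
                    or random.random() < math.exp((f_cand - f_cur) / t))
        if accepted:
            current, f_cur = candidate, f_cand
            if f_cur > f_best:
                best, f_best = current, f_cur
        # Exponentially smoothed credit assignment for the operator used.
        weights[k] = (1 - reaction) * weights[k] + reaction * (1.0 if accepted else 0.0)
        weights[k] = max(weights[k], 1e-3)  # keep every operator reachable
        t *= cooling  # geometric cooling schedule
    return best, f_best
```

In this scheme the search gradually concentrates on the neighborhoods whose moves survive the acceptance test, which is the general idea behind letting the method tune its own neighborhood-search parameter.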
Abstract:
The objective of the following article is to give the reader a perspective on the events of 1956 in Hungary – known commonly as the 1956 Hungarian Uprising or Revolution (or as it is sometimes called in Hungary: the 1956 Revolution and Freedom Fight) – from the point of view of counterinsurgency theory. The author intends to show that the current theory has many problems and shortcomings when it comes to the analysis of the events of 1956 because of the unique set of circumstances which prevailed at that time.
Abstract:
This topic was selected not only because it is a very interesting subject in itself, but also because of its very promising outlook, especially when framed as a solution to problems such as energy dependence, pollution, and overpopulation. Although many of the concepts connected to the topic are not particularly new, recent technological advances (computing and robotics), combined with new ideologies (environmental preservation and renewable energy) and with particular circumstances, notably the economic crisis of 2009, have brought a new perspective on the subject and on its potential. Browsing the Internet, one can gather a great deal of information on building intelligently, covering topics such as intelligent buildings, sustainability, or even home healthcare services; nevertheless, it is still difficult to grasp how important these aspects may become in the future. This stems largely from the authors' own difficulty in agreeing on what an intelligent building actually is and on what its importance for the cities of the future may be. The aim of this work is therefore to address the basic concepts of intelligent buildings, from their origins to the present, not only from a technological standpoint but also from an environmental one, and to examine how these perspectives interact, particularly in a period of economic hardship such as the present one. Today it is far too reductive to think of intelligent buildings only in terms of their technological component, since their adaptability allows them to address a much wider range of problems, in some of which the technological component yields its prominence to the environmental one, where intelligent design can deliver major improvements at minimal economic cost. In conclusion, an intelligent building today is much more than the sum of its parts, whether technological, economic, or environmental. True intelligence lies in combining the best of each dimension so as to provide users with improved quality of life without compromising the environment and without entailing unaffordable costs.
Abstract:
Dental structures are covered by enamel. Enamel is a very hard, avascular, predominantly white tissue. It differs from the body's other mineralized tissues, however, in its inability to remodel; as a consequence, any alteration that occurs, whether during development or later in life, is permanently recorded (Seow, 1997). This monograph seeks to deepen knowledge of the most common developmental defects of enamel and their respective treatments. The literature search was carried out with the B-on, PubMed, Science Direct, and SciELO search engines, restricted to the last 10 years. The keywords and keyword combinations used were "Enamel", "Enamel Development", "Enamel Defects", "Amelogenesis Imperfecta", and "Hypoplasia". Of the 300 articles retrieved, 68 were selected. The development of dental tissues is a complex process known as odontogenesis, which can be schematically divided into three stages: the bud stage, the cap stage, and finally the bell stage (Thesleff et al., 2009). Numerous developmental defects of enamel are reported in the literature, and in many cases it is not possible to place a given defect unequivocally in one category, or even to assign it a name (Seow, 1997). On the grounds of relevance and epidemiology, this monograph addresses the following defects: developmental enamel defects; opacities; diffuse opacities; hypoplasia; amelogenesis imperfecta and all its categories; fluorosis; and tetracycline staining, together with their respective treatments. Developmental enamel defects present several characteristics of their own as well as features shared among them, so a range of treatments is possible, some more invasive than others, from microabrasion of the enamel surface to the placement of ceramic restorations, always depending on the patient's preference and socioeconomic means (Azevedo DT et al., 2011). It is concluded that, despite all the aesthetic and functional problems these defects entail, cases in which the lesions are not severe can be managed by a general dental practitioner, provided he or she has adequate knowledge of the treatment protocols.
Abstract:
Human radiosensitivity is a quantitative trait that is generally subject to binomial distribution. Individual radiosensitivity, however, may deviate significantly from the mean (by 2-3 standard deviations). Thus, the same dose of radiation may result in different levels of genotoxic damage (commonly measured as chromosome aberration rates) in different individuals. There is a significant genetic component in individual radiosensitivity. It is related to carriership of variant alleles of various single-nucleotide polymorphisms (most of them in genes coding for proteins that identify and repair DNA damage); carriership of different numbers of alleles producing cumulative effects; amplification of gene copies coding for proteins responsible for radioresistance; mobile genetic elements; and others. Among the other factors influencing individual radioresistance are radioadaptive response; the bystander effect; levels of endogenous substances with radioprotective and antimutagenic properties; and environmental factors such as lifestyle and diet, physical activity, psychoemotional state, hormonal state, certain drugs, and infections. These factors may have radioprotective or sensitizing effects. Evidently, many factors can significantly modulate the biological effects of ionising radiation; conventional biodosimetry methodologies (specifically, cytogenetic methods) may therefore produce significant errors if personal traits that affect radioresistance are not accounted for.
Abstract:
In some Quebec union circles, promising initiatives aimed at preventing mental health problems at work have emerged. For more than three decades, pioneering union representatives have set up effective peer-support structures, obtained important case law, and developed innovative approaches to correct or improve work organization. At a time when the rise of neoliberal ideology and the work-organization principles underlying it are driving an intensification of work that weakens workers' psyches, while unions' bargaining power erodes, it seems fruitful to examine the experience of these union representatives in order to better understand how union action on mental health at work is structured. This thesis studies Quebec union achievements in workplace mental health aimed at preventing and correcting problems of psychological distress, burnout, harassment, depression, violence, work-related suicide, and so on. To do so, a mixed theoretical framework was used. On the one hand, a broad perspective was adopted to understand the issues surrounding human relations at work and action, drawing on four influential authors of Enlightenment and contemporary philosophy: Thomas Hobbes, Adam Smith, Karl Marx, and Hannah Arendt. Setting out these different views of the world, of action, and of the social bond was intended to provide an analytical grid capable of linking union representatives' experience to these worldviews. It appeared essential to better grasp the ideological foundations on which these representatives built their action in order to understand how those foundations influenced their individual and collective endeavours. On the other hand, the theory of social experience (Dubet, 2009; 1994) was used to analyze the work of union representatives more closely. This theory distinguishes three complementary logics of action, in tension with one another, with which social actors must contend: a logic of integration, a strategic logic, and a logic based on subjectivation. The coexistence of these three logics means that individuals' experience of the world is not a mere reproduction of the determinisms that precede them. Actors are also subjects of their experience, capable of stepping back from the social to understand the meanings of their actions; they engage with the world in a critical dialectic. This theory sheds light on both what hinders and what facilitates individual and collective action on mental health at work, and describes how union representatives mobilize to meet their members' many expectations. This qualitative research relied on a life-narrative methodology (Rhéaume, 2008; Bertaux, 2006). Twenty union representatives testified to the suffering at work (Dejours, 2008) experienced by their members and presented actions taken to help them.
The realities described by the participants show how certain elements of work organization are associated with experiences of domination (Martuccelli, 2004): the harms of productivism and hyperflexibility, workplace accidents, occupational diseases and horrific situations at work, social relations at work turned toxic, and abusive uses of the judicial system. The study also demonstrates the extent to which initiatives carried by union representatives contribute to problem-solving in a perspective of interdependence, empowerment, social justice, and the struggle for dignity. Four categories of initiatives were retained: maintaining the social bond through day-to-day mutual aid, members' judicial and legal defence, collective-agreement clauses, and actions on work organization. Finally, the research identifies three profiles of union representatives: the militant, who tries to form a "we"; the professional, who tries to have his or her usefulness and competence recognized; and the peer helper, who seeks to develop action engaging the "I". Their meeting points to the development of a union praxis that aims to promote and protect the dignity of work and of workers.
Abstract:
The Sixth Congress of the Communist Party of Cuba introduced a new economic agenda that the government calls the updating of the socialist model. Many regard it essentially as a series of reforms and reduce its significance to its economic dimension. This monograph seeks to explain the updating by applying Immanuel Wallerstein's world-systems analysis, offering an unconventional interpretation of the phenomenon. It focuses on the variables of power and on the political actors that have shaped the new economic policy: the Communist Party of Cuba (PCC) and the Revolutionary Armed Forces (FAR). Together they constitute what Wallerstein calls an antisystemic movement. The main argument is that the movement has set the reforms in motion in order to strengthen the state and thus guarantee its own survival by consolidating its position as the sole contender for state power. As will be shown, these goals have led the movement to sacrifice part of its antisystemic nature.
Abstract:
Regular physical activity plays a fundamental role in the prevention and control of musculoskeletal disorders within the occupational activity of physical education teachers. Objective: The purpose of the study was to determine the relationship between physical activity levels and the prevalence of musculoskeletal disorders among physical education teachers in 42 public schools in Bogotá, Colombia. Methods: This was a cross-sectional study of 262 physical education teachers from 42 public schools in Bogotá, Colombia. The Nordic Musculoskeletal Questionnaire and the short-form IPAQ questionnaire, used to identify physical activity levels, were self-administered. Measures of central tendency and dispersion were obtained for quantitative variables, and relative frequencies for qualitative variables. Lifetime prevalence and the percentage of job reassignment were calculated for teachers who had suffered different types of pain. To estimate the relationship between pain and the teachers' sociodemographic variables, a simple binary logistic regression model was used. Analyses were performed in SPSS version 20; p < 0.05 was considered significant for hypothesis testing, with a corresponding confidence level for parameter estimation. Results: The response rate was 83.9%, with 262 valid records; 22.5% of respondents were female, and the largest group of teachers was aged 25 to 35 years (43.9%). Regarding musculoskeletal disorders, 16.9% of teachers reported having at some point suffered discomfort in the neck, 17.2% in the shoulder, 27.9% in the back, 7.93% in the arm, and 8.4% in the hand. Teachers with higher levels of physical activity reported a lower prevalence of musculoskeletal complaints (16.9% for the neck and 27.7% for the dorsal/lumbar region) than subjects with low levels of physical activity. The presence of disorders was associated with years of experience (OR 3.39, 95% CI 1.41-7.65), female gender (OR 4.94, 95% CI 1.94-12.59), age (OR 5.06, 95% CI 1.25-20.59), and being responsible for more than 400 students during the working day (OR 4.50, 95% CI 1.74-11.62). Conclusions: No statistically significant relationship was found between self-reported physical activity levels and musculoskeletal disorders among physical education teachers.
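To make the reported odds ratios concrete, here is a minimal sketch of a simple binary logistic regression of the kind described, using Python's statsmodels rather than SPSS; the simulated data, variable names, and coefficients are purely illustrative assumptions, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in for the study data (262 teachers): a binary pain
# outcome plus sociodemographic predictors. All values are invented.
rng = np.random.default_rng(0)
n = 262
df = pd.DataFrame({
    "female":      rng.integers(0, 2, n),
    "experience":  rng.integers(1, 30, n),   # years teaching
    "students400": rng.integers(0, 2, n),    # >400 students per day
})
logit_true = (-2.0 + 1.3 * df["female"] + 0.05 * df["experience"]
              + 1.2 * df["students400"])
df["pain"] = rng.binomial(1, 1 / (1 + np.exp(-logit_true)))

# Simple binary logistic regression, as in the abstract.
X = sm.add_constant(df[["female", "experience", "students400"]])
fit = sm.Logit(df["pain"], X).fit(disp=False)

# Odds ratios with 95% confidence intervals (cf. the reported ORs).
or_table = np.exp(pd.concat([fit.params.rename("OR"),
                             fit.conf_int(alpha=0.05)], axis=1))
print(or_table.rename(columns={0: "CI 2.5%", 1: "CI 97.5%"}))
```

Exponentiating the fitted coefficients and their confidence bounds is what turns the regression output into odds ratios such as "OR 4.94, 95% CI 1.94-12.59".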
Abstract:
Introduction: Human aging is marked by a decline in the performance of some daily tasks, even ones considered trivial; when this limitation is accompanied by chronic diseases, the elderly person becomes a source of concern for the family. Objective: To identify the health problems of elderly residents of long-stay institutions on the basis of self-reported diseases. Methods: This is a descriptive, quantitative study conducted in a capital city of northeastern Brazil, involving 138 elderly people. For data collection we used a questionnaire containing demographic and institutional variables as well as variables related to self-reported health problems. Data were evaluated using bivariate analysis and the chi-square test of association. Results: There was a predominance of women (61.6%), of people aged 60-69 years (39.1%), of residents originally from the state capital (51.4%), and of institutional stays of 1-5 years (77.5%). The most frequent diseases were those of the cardiovascular system (15.9%) and endocrine, nutritional, and metabolic diseases (9.4%). A significant association was found between self-reported diseases and the age of the elderly (p=0.047). Conclusion: These findings should raise awareness among health professionals so that they provide better care to the institutionalized elderly, focused on the real needs of these persons.
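As a sketch of the chi-square association test mentioned, here is how such an analysis might look in Python with scipy; the contingency table below is invented for illustration (138 residents split by age group and one self-reported disease) and is not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = age group (60-69, 70-79, 80+),
# columns = self-reported cardiovascular disease (yes, no).
table = np.array([[10, 44],
                  [12, 38],
                  [13, 21]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}, dof={dof}")
```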
Abstract:
Crop monitoring and, more generally, land use change detection are of primary importance for analyzing spatio-temporal dynamics and their impacts on the environment. This is especially true in a region such as the State of Mato Grosso (southern Brazilian Amazon Basin), which hosts an intensive pioneer front. Deforestation in this region has often been attributed to soybean expansion over the last three decades. Remote sensing now offers an efficient and objective way to quantify, through crop mapping studies, the extent to which crop expansion really drives deforestation. Given the characteristics of soybean farms in Mato Grosso (farm areas between 1,000 and 40,000 hectares, with individual fields often larger than 100 hectares), Moderate Resolution Imaging Spectroradiometer (MODIS) data, with near-daily temporal resolution and 250 m spatial resolution, are an adequate resource for crop mapping. In particular, multitemporal vegetation index (VI) studies are commonly used for this task [1] [2]. In this study, 16-day EVI composites (the MOD13Q1 product) are used. Although these data are already processed, the multitemporal VI profiles remain noisy because of cloudiness (extremely frequent in a tropical region such as the southern Amazon Basin), sensor problems, errors in atmospheric correction, and BRDF effects. Many studies have therefore sought algorithms to smooth multitemporal VI profiles and thereby improve subsequent classification. The goal of this study is to test and compare different smoothing algorithms in order to select the one best suited to classifying crop classes. Those classes correspond to six different agricultural managements observed in Mato Grosso through intensive field work in which more than 1,000 individual fields were mapped. These managements are based on combinations of soybean, cotton, corn, millet, and sorghum sown in single- or double-cropping systems. Because some classes are too difficult to separate owing to very similar agricultural calendars, the classification is reduced to three classes: cotton (single crop), soybean and cotton (double crop), and soybean (single or double crop with corn, millet, or sorghum). The classification uses training data from the 2005-2006 harvest and is then tested on the 2006-2007 harvest. In a first step, four smoothing techniques are presented and assessed: Best Index Slope Extraction (BISE) [3], Mean Value Iteration (MVI) [4], Weighted Least Squares (WLS) [5], and the Savitzky-Golay filter (SG) [6] [7]. These techniques are implemented and visually compared on a few individual pixels, allowing a first selection among the studied techniques. The WLS and SG techniques are selected according to criteria proposed by [8]: the ability to eliminate frequent noise, to preserve the upper values of the VI profiles, and to keep the temporality of the profiles. The selected algorithms are then programmed and applied to the MODIS/TERRA EVI data (16-day composite periods). Separability tests based on the Jeffries-Matusita distance are performed to check whether the algorithms improve the potential for differentiating the classes. These tests are run on the overall profile (comprising 23 MODIS images) as well as on each MODIS sub-period of the profile [1].
This last test serves a double purpose: it allows the smoothing techniques to be compared, and it also selects a set of images that carry more information on the separability between the classes. The selected dates can then be used for a supervised classification. Three different classifiers are tested to evaluate whether the smoothing techniques affect the classification differently depending on the classifier used: the Maximum Likelihood classifier, the Spectral Angle Mapper (SAM) classifier, and a CHAID improved decision tree. The separability tests on the overall profile show that the smoothed profiles do not substantially improve the potential for discrimination between classes compared with the original data. However, the same tests on the MODIS sub-periods show better results for the smoothing algorithms. The classification results confirm this first analysis: the Kappa coefficients are always better with the smoothing techniques, and the results obtained with the WLS and SG smoothed profiles are nearly equal. The results differ, however, depending on the classifier. The impact of the smoothing algorithms is much greater with the decision tree model, where it yields a gain of 0.1 in the Kappa coefficient. With the Maximum Likelihood and SAM models the gain remains positive but much lower (Kappa improved by only 0.02). This work thus demonstrates the utility of smoothing VI profiles to improve the final results; the choice of smoothing algorithm, however, must take into account the original data and the classifiers used. In this case, the Savitzky-Golay filter gave the best results.
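To ground the two key tools named here, the following sketch applies a Savitzky-Golay filter to a noisy 23-point EVI profile and computes the Jeffries-Matusita distance between two Gaussian class models. It is a generic illustration: the window length, polynomial order, and toy data are assumptions, not the study's settings.

```python
import numpy as np
from scipy.signal import savgol_filter

def jeffries_matusita(mu1, cov1, mu2, cov2):
    """Jeffries-Matusita distance between two Gaussian class models.

    JM = 2 * (1 - exp(-B)), where B is the Bhattacharyya distance;
    values approach 2 for fully separable classes.
    """
    mu = (mu1 - mu2).reshape(-1, 1)
    cov = (cov1 + cov2) / 2.0
    term1 = 0.125 * float(mu.T @ np.linalg.solve(cov, mu))
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return 2.0 * (1.0 - np.exp(-(term1 + term2)))

# A noisy annual EVI profile: 23 16-day composites for one pixel.
t = np.linspace(0, 2 * np.pi, 23)
evi = 0.4 + 0.25 * np.sin(t) + np.random.default_rng(1).normal(0, 0.05, 23)

# Savitzky-Golay filter: local polynomial fit over a sliding window.
evi_smooth = savgol_filter(evi, window_length=5, polyorder=2)
```

In a study like this one, the JM distance would be computed between class statistics estimated from training fields, before and after smoothing, to check whether smoothing improves separability.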
Abstract:
Although the debate over what data science is has a long history and has not yet reached complete consensus, data science can be summarized as the process of learning from data. Guided by this vision, this thesis presents two independent data science projects developed in the scope of multidisciplinary applied research. The first part analyzes fluorescence microscopy images typically produced in life science experiments, where the objective is to count how many marked neuronal cells are present in each image. Aiming to automate the task to support research in the area, we propose a neural network architecture tuned specifically for this use case, cell ResUnet (c-ResUnet), and discuss the impact of alternative training strategies in overcoming particular challenges of our data. The approach provides good results in terms of both detection and counting, with performance comparable to that of human operators. As a meaningful addition, we release the pre-trained model and the Fluorescent Neuronal Cells dataset, which collects pixel-level annotations of where neuronal cells are located. In this way, we hope to help future research in the area and foster innovative methodologies for tackling similar problems. The second part deals with the problem of distributed data management in the context of LHC experiments, with a focus on supporting ATLAS operations concerning data transfer failures. In particular, we analyze error messages produced by failed transfers and propose a machine learning pipeline that leverages the word2vec language model and K-means clustering. This yields groups of similar errors that are presented to human operators as suggestions of potential issues to investigate. The approach is demonstrated on one full day of data, showing promising ability to understand message content and provide meaningful groupings, in line with incidents previously reported by human operators.
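A minimal sketch of the kind of pipeline described (word2vec embeddings averaged per message, then K-means) is shown below, assuming gensim and scikit-learn; the sample messages, tokenization, and every parameter are illustrative assumptions rather than the thesis' actual configuration.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

# Hypothetical failed-transfer error messages (real ones would come
# from the ATLAS distributed data management logs).
messages = [
    "connection timeout while contacting storage element",
    "checksum mismatch after transfer",
    "connection refused by destination storage element",
    "checksum verification failed for file",
]
tokens = [m.lower().split() for m in messages]

# Train word2vec on the corpus, then embed each message as the mean
# of its word vectors.
w2v = Word2Vec(sentences=tokens, vector_size=32, window=3,
               min_count=1, epochs=50, seed=0)
embed = np.array([np.mean([w2v.wv[w] for w in sent], axis=0)
                  for sent in tokens])

# K-means groups similar error messages; each cluster is shown to
# operators as a candidate issue to investigate.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embed)
for msg, lab in zip(messages, labels):
    print(lab, msg)
```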
Abstract:
In recent years a great deal of effort has been put into developing new techniques for automatic object classification, also because of their consequences for many applications such as medical imaging or driverless cars. To this end, several mathematical models have been developed, from logistic regression to neural networks. A crucial aspect of these so-called classification algorithms is the use of algebraic tools to represent and approximate the input data. In this thesis, we examine two different models for image classification based on a particular tensor decomposition, the Tensor-Train (TT) decomposition. Tensor approaches preserve the multidimensional structure of the data and the neighboring relations among pixels. Furthermore, the Tensor-Train, unlike other tensor decompositions, does not suffer from the curse of dimensionality, making it an extremely powerful strategy for high-dimensional data. It also allows data compression when combined with truncation strategies that reduce memory requirements without spoiling classification performance. The first model we propose is based on a direct decomposition of the database by means of the TT decomposition to find basis vectors used to classify a new object. The second is a tensor dictionary learning model, based on the TT decomposition, in which the terms of the decomposition are estimated using a proximal alternating linearized minimization algorithm with a spectral stepsize.
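For reference, the sketch below implements the standard TT-SVD algorithm, which computes a Tensor-Train decomposition by a sequence of truncated SVDs. It illustrates the decomposition itself, not the thesis' classification or dictionary-learning models, and the truncation threshold is an assumed parameter.

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """TT-SVD: factor a d-way array into Tensor-Train cores.

    Core k has shape (r_{k-1}, n_k, r_k); singular values below a
    relative threshold `eps` are truncated, which is what compresses
    the data.
    """
    shape = tensor.shape
    d = len(shape)
    cores, r_prev = [], 1
    mat = tensor.reshape(shape[0], -1)
    for k in range(d - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        rank = max(1, int(np.sum(s > eps * s[0])))        # truncation
        cores.append(u[:, :rank].reshape(r_prev, shape[k], rank))
        mat = (s[:rank, None] * vt[:rank]).reshape(rank * shape[k + 1], -1)
        r_prev = rank
    cores.append(mat.reshape(r_prev, shape[-1], 1))
    return cores

# Sanity check: contracting the cores reproduces the original tensor.
A = np.random.default_rng(0).random((4, 5, 6))
cores = tt_svd(A)
recon = cores[0]
for core in cores[1:]:
    recon = np.tensordot(recon, core, axes=([-1], [0]))
assert np.allclose(np.squeeze(recon), A)
```

Raising `eps` trades reconstruction accuracy for lower TT ranks, which is the compression-versus-performance trade-off the abstract refers to.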
Abstract:
Inverse problems are at the core of many challenging applications. Variational and learning models provide estimated solutions of inverse problems as the outcome of specific reconstruction maps. In the variational approach, the result of the reconstruction map is the solution of a regularized minimization problem encoding information on the acquisition process and prior knowledge of the solution. In the learning approach, the reconstruction map is a parametric function whose parameters are identified by solving a minimization problem depending on a large set of data. In this thesis, we go beyond this apparent dichotomy between variational and learning models and show that they can be harmoniously merged in unified hybrid frameworks that preserve their main advantages. We develop several highly efficient methods based on both model-driven and data-driven strategies, for which we provide a detailed convergence analysis. The resulting algorithms are applied to solve inverse problems involving images and time series. For each task, we show that the proposed schemes improve on many other existing methods in terms of both computational burden and quality of the solution. In the first part, we focus on gradient-based regularized variational models, which are shown to be effective for segmentation and for thermal and medical image enhancement. We consider gradient sparsity-promoting regularized models for which we develop different strategies to estimate the regularization strength. Furthermore, we introduce a novel gradient-based Plug-and-Play convergent scheme using a deep learning based denoiser trained on the gradient domain. In the second part, we address natural image deblurring, image and video super-resolution microscopy, and positioning time series prediction through deep learning based methods. We boost the performance of supervised deep learning strategies, such as trained convolutional and recurrent networks, and of unsupervised ones, such as Deep Image Prior, by penalizing their losses with handcrafted regularization terms.
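As a toy example of a gradient-based regularized variational model of the general kind discussed here, the sketch below denoises a 1D signal by gradient descent on a quadratic data-fit term plus a smoothness penalty on the signal's gradient. The model, operator, and parameters are illustrative stand-ins, not the thesis' actual formulations.

```python
import numpy as np

def grad_regularized_denoise(b, lam=1.0, step=0.2, iters=500):
    """Gradient descent on the variational model
        min_x 0.5 * ||x - b||^2 + 0.5 * lam * ||D x||^2,
    where D is the forward-difference operator, i.e. a simple
    smoothness-promoting regularizer acting on the gradient of x.
    """
    n = len(b)
    D = np.zeros((n - 1, n))               # forward differences
    idx = np.arange(n - 1)
    D[idx, idx], D[idx, idx + 1] = -1.0, 1.0
    x = b.copy()
    for _ in range(iters):
        grad = (x - b) + lam * (D.T @ (D @ x))  # gradient of the objective
        x -= step * grad                        # stable for step < 2/(1 + 4*lam)
    return x

# Toy inverse problem: recover a piecewise-constant signal from noise.
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50)])
denoised = grad_regularized_denoise(clean + rng.normal(0, 0.2, 100))
```

Here the regularization strength `lam` plays the role whose estimation the thesis investigates: larger values enforce smoother reconstructions at the cost of fidelity to the data.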