957 results for Non ideal dynamic system
Abstract:
[EN]Ensemble forecasting [1] is a methodology to deal with uncertainties in numerical wind prediction. In this work we propose to apply ensemble methods to the adaptive wind forecasting model presented in [2]. The wind field forecasting is based on a mass-consistent model and a log-linear wind profile using as input data the resulting forecast wind from Harmonie [3], a Non-Hydrostatic Dynamic model. The mass-consistent model parameters are estimated by using genetic algorithms [4]. The mesh is generated using the meccano method [5] and adapted to the geometry. The main sources of uncertainty in this model are the parameter estimation and the intrinsic uncertainties of the Harmonie Model…
Abstract:
[EN]Ensemble forecasting is a methodology to deal with uncertainties in numerical wind prediction. In this work we propose to apply ensemble methods to the adaptive wind forecasting model presented in earlier work. The wind field forecasting is based on a mass-consistent model and a log-linear wind profile using as input data the resulting forecast wind from Harmonie, a Non-Hydrostatic Dynamic model used experimentally at AEMET with promising results. The mass-consistent model parameters are estimated by using genetic algorithms. The mesh is generated using the meccano method and adapted to the geometry…
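The parameter-estimation step lends itself to a compact illustration. The following is a minimal Python sketch of a genetic algorithm of the kind the abstract mentions; the two parameters, their bounds, and the `model_error` misfit are hypothetical stand-ins, since the actual mass-consistent solver and its parameters are not given here.

```python
import random

def model_error(params):
    # Placeholder misfit: the real fitness would run the mass-consistent
    # solver with these parameters and measure the distance between the
    # resulting wind field and the Harmonie forecast at control points.
    alpha, roughness = params
    return (alpha - 1.0) ** 2 + (roughness - 0.05) ** 2

def genetic_search(fitness, bounds, pop_size=40, generations=100,
                   mutation_rate=0.1, elite=4):
    """Minimize `fitness` over box-constrained real-valued parameters."""
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)              # best individuals first
        parents = pop[:pop_size // 2]
        children = pop[:elite]             # elitism: carry the best over
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # blend crossover
            for i, (lo, hi) in enumerate(bounds):
                if random.random() < mutation_rate:       # Gaussian mutation
                    child[i] = min(hi, max(lo, child[i]
                                           + random.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = children
    return min(pop, key=fitness)

best = genetic_search(model_error, bounds=[(0.1, 10.0), (0.001, 1.0)])
print("estimated parameters:", best)
```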
Abstract:
Ambient Intelligence (AmI) envisions a world where smart, electronic environments are aware of and responsive to their context. People moving into these settings engage many computational devices and systems simultaneously, even if they are not aware of their presence. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication, and natural interfaces. The dependence on a large number of fixed and mobile sensors embedded in the environment makes Wireless Sensor Networks (WSNs) one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes, simple devices that typically embed a low-power computational unit (microcontrollers, FPGAs, etc.), a wireless communication unit, one or more sensors, and some form of energy supply (either batteries or energy-scavenging modules). Low cost, low computational power, low energy consumption, and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. In order to handle the large amount of data generated by a WSN, several multisensor data fusion techniques have been developed. The aim of multisensor data fusion is to combine data to achieve better accuracy and inferences than could be achieved by the use of a single sensor alone. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Multimodal Surveillance and Activity Recognition. Novel techniques to handle data from a network of low-cost, low-power Pyroelectric InfraRed (PIR) sensors are presented. Such techniques allow the detection of the number of people moving in the environment, their direction of movement, and their position. We discuss how a mesh of PIR sensors can be integrated with a video surveillance system to increase its performance in people tracking. Furthermore, we embed a PIR sensor within the design of a Wireless Video Sensor Node (WVSN) to extend its lifetime. Activity recognition is a fundamental block in natural interfaces. A challenging objective is to design an activity recognition system that is able to exploit a redundant but unreliable WSN. We present our work in building a novel activity recognition architecture for such a dynamic system. The architecture has a hierarchical structure where simple nodes perform gesture classification and a high-level meta-classifier fuses a changing number of classifier outputs. We demonstrate the benefits of such an architecture in terms of increased recognition performance and robustness to faults and noise. Furthermore, we show how network lifetime can be extended through a performance-power trade-off. Smart objects can enhance the user experience within smart environments. We present our work in extending the capabilities of the Smart Micrel Cube (SMCube), a smart object used as a tangible interface within a tangible computing framework, through the development of a gesture recognition algorithm suitable for this device with limited computational power. Finally, the development of activity recognition techniques can greatly benefit from the availability of shared datasets. We report our experience in building a dataset for activity recognition. The dataset is freely available to the scientific community for research purposes and can be used as a test bench for developing, testing and comparing different activity recognition techniques.
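As an illustration of the fusion step in the hierarchical architecture described above, here is a minimal sketch (my construction, not the dissertation's code) of a meta-classifier that fuses a varying number of per-node gesture classifications by confidence-weighted voting, so that nodes can drop out without any retraining:

```python
from collections import defaultdict

def fuse(node_outputs):
    """node_outputs: list of (label, confidence) pairs from whichever
    sensor nodes happen to be alive; returns the fused label."""
    scores = defaultdict(float)
    for label, confidence in node_outputs:
        scores[label] += confidence        # accumulate per-label evidence
    return max(scores, key=scores.get)

# Three nodes report; a fourth is offline and simply contributes nothing.
print(fuse([("wave", 0.9), ("wave", 0.6), ("point", 0.7)]))  # -> "wave"
```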
Abstract:
Higher-order process calculi are formalisms for concurrency in which processes can be passed around in communications. Higher-order (or process-passing) concurrency is often presented as an alternative paradigm to the first-order (or name-passing) concurrency of the pi-calculus for the description of mobile systems. These calculi are inspired by, and formally close to, the lambda-calculus, whose basic computational step ---beta-reduction--- involves term instantiation. The theory of higher-order process calculi is more complex than that of first-order process calculi. This shows up, for instance, in the definition of behavioral equivalences. A long-standing approach to overcoming this burden is to define encodings of higher-order processes into a first-order setting, so as to transfer the theory of the first-order paradigm to the higher-order one. While satisfactory in the case of calculi with basic (higher-order) primitives, this indirect approach falls short in the case of higher-order process calculi featuring constructs for phenomena such as, e.g., localities and dynamic system reconfiguration, which are frequent in modern distributed systems. Indeed, for higher-order process calculi involving little more than traditional process communication, encodings into some first-order language are difficult to handle or do not exist. We then observe that foundational studies for higher-order process calculi must be carried out directly on them and exploit their peculiarities. This dissertation contributes to such foundational studies for higher-order process calculi. We concentrate on two closely interwoven issues in process calculi: expressiveness and decidability. Surprisingly, these issues have been little explored in the higher-order setting. Our research is centered around a core calculus for higher-order concurrency in which only the operators strictly necessary to obtain higher-order communication are retained. We develop the basic theory of this core calculus and rely on it to study the expressive power of constructs universally accepted as basic in process calculi, namely synchrony, forwarding, and polyadic communication.
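As a point of reference, the basic computational step of such a core calculus can be written as a single reduction rule. The notation below is a common presentation (assumed here, not quoted from the dissertation): an output a⟨P⟩ sends the process P, and an input a(x).Q substitutes it for the variable x.

```latex
% Higher-order communication: the received object is a whole process,
% so the reduction step instantiates a process variable, much like
% beta-reduction instantiates a term variable in the lambda-calculus.
\[
  a\langle P \rangle \;\|\; a(x).\,Q \;\longrightarrow\; Q\{P/x\}
\]
```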
Abstract:
The present work provides an ex-post assessment of the UK 5-a-day information campaign, where the positive effects of information on consumption levels are disentangled from the potentially conflicting price dynamics. A model-based estimate of the counterfactual (no-intervention) scenario is computed using data from the Expenditure and Food Survey between 2002 and 2006. For this purpose, fruit and vegetable demand is modelled employing a Quadratic Almost Ideal Demand System (QUAIDS) specification with demographic effects, controlling for potential endogeneity of prices and total food expenditure.
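For reference, the budget-share equations of the QUAIDS model, in the standard notation of Banks, Blundell and Lewbel (1997), are given below; the paper's actual specification additionally includes demographic effects and endogeneity controls not shown here.

```latex
% Budget share w_i of good i as a function of prices p and expenditure m.
\[
  w_i = \alpha_i + \sum_j \gamma_{ij} \ln p_j
        + \beta_i \ln\frac{m}{a(p)}
        + \frac{\lambda_i}{b(p)} \left[ \ln\frac{m}{a(p)} \right]^2,
\]
\[
  \ln a(p) = \alpha_0 + \sum_i \alpha_i \ln p_i
             + \frac{1}{2} \sum_i \sum_j \gamma_{ij} \ln p_i \ln p_j,
  \qquad
  b(p) = \prod_i p_i^{\beta_i}.
\]
```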
Abstract:
In Drosophila, the steroid hormone ecdysone regulates a wide range of developmental and physiological responses, including reproduction, embryogenesis, postembryonic development and metamorphosis. Drosophila provides an excellent system to address some fundamental questions linked to hormone action. In fact, the apparent relative simplicity of its hormone signaling pathways, taken together with the well-established genetic and genomic tools developed for this purpose, defines this insect as an ideal model system for studying the molecular mechanisms through which steroid hormones act. During my PhD research program I analyzed the role of ecdysone signaling to gain insight into the molecular mechanisms through which the hormone fulfills its pleiotropic functions in two different developmental stages: oogenesis and imaginal wing disc morphogenesis. To this purpose, I performed a reverse genetic analysis to silence the function of two different genes involved in the ecdysone signaling pathway, EcR and ecd.
Abstract:
In recent years, research has made great strides in the methods used to design and build the load-bearing structures of buildings, to the point of making them highly safe from every standpoint. The new frontier of research is therefore turning to aspects that until now had never been in the foreground: non-structural elements. Regarded until today as mere accessory load, there is a growing awareness of their capacity to influence the behavior of structures and the safety of their occupants. Hence the need for the large project called BNCS (Building Non-structural Component System), conceived by the University of California, San Diego and sponsored by the major industries active in the construction field. This project, in which I took part, carried out shake-table tests of a full-scale five-story building, fully furnished and fitted out with the most varied non-structural elements. The purpose of this thesis concerns the structural identification and the safety verification of one of these non-structural elements: specifically, the cooling tower placed on the roof of the building (weighing about 3 tonnes). Starting from a check of the design rules and calculations, a phase of seismic tests and post-test inspections of the tower followed; finally, through the analysis of the data collected during the tests, conclusions were drawn.
Abstract:
"Photocrosslinkable liquid-crystalline polymers of different chain topologies" („Photovernetzbare flüssigkristalline Polymere unterschiedlicher Kettentopologien"), Patrick Beyer, Mainz 2007. Abstract: This thesis presented the synthesis and characterization of liquid-crystalline elastomers of different polymer topologies. Systems were synthesized in which the mesogenic units are either attached as side groups to a polymer backbone (side-chain elastomers) or integrated directly into the polymer chain (main-chain elastomers) (see figure). With respect to the side-chain systems, photocrosslinkable smectic side-chain polymers were prepared for the first time in which, owing to the attachment of a photoisomerizable azobenzene, a photomodulation of the ferroelectric properties is possible. Homeotropically oriented free-standing films of these materials could be prepared by spin coating and photocrosslinked by exploiting the dichroism of the azobenzenes through a suitable choice of irradiation geometry. Building on these investigations, the influence of the trans-cis isomerization of the azobenzene on the ferroelectric parameters was examined in detail using a non-crosslinkable model system. Time-resolved measurements of the azobenzene absorption, the spontaneous polarization, and the director tilt angle, together with an evaluation of the kinetic processes, revealed a linear dependence of the ferroelectric properties on the degree of the isomerization reaction. By comparing these results obtained in the liquid-crystalline phase with the kinetics of thermal re-isomerization in solution (toluene), a marked reduction of the relaxation times in the anisotropic liquid-crystalline environment was furthermore observed and attributed to a lowering of the activation energy. However, no macroscopic shape changes of the side-chain elastomers at the phase transition from the liquid-crystalline to the isotropic phase could be detected. For this reason, new synthetic strategies for the preparation of main-chain elastomers were developed, which, owing to the direct coupling of liquid-crystalline order and polymer chain conformation, are better suited for the fabrication of thermal actuators. On the basis of liquid-crystalline polymalonates, laterally functionalized smectic main-chain polymers were synthesized, allowing for the first time the preparation of LC main-chain elastomers by photocrosslinking in the liquid-crystalline phase. Lateral bromination suppressed the crystallization tendency of the biphenyl units employed in these systems. Regarding the photocrosslinking, two new synthetic methods were developed in which the crosslinking step proceeded either via radical polymerization of laterally attached acrylate groups or via photoactive benzophenone groups. Based on the benzophenone-functionalized systems, a novel procedure for the preparation of macroscopically oriented main-chain elastomers by photocrosslinking was developed. The elastomer samples, whose degree of order was determined by X-ray investigations, show a reversible shape change of 40% at the phase transition from the liquid-crystalline to the isotropic phase. In contrast to other known smectic systems, the elastomers presented in this work could be stretched by up to 60% along the smectic layer normal without destruction of the phase, which was discussed in the context of a low correlation of the smectic layers in main-chain elastomers.
Abstract:
To date, in forensic investigations of explosions, the tracing of the explosives used is limited, since the material is as a rule destroyed in the explosion. Tracing of explosives is to be facilitated with the aid of identification tagging substances. These constitute a unique code that can be recovered and identified even after a detonation. The unambiguous information assigned to the code can thus be read out and provides the police with further leads in the investigation. The aim of the present work is to study the behavior of selected rare-earth elements (REE) during an explosion. An identification taggant based on lanthanide phosphates offers the possibility of combining several lanthanides within a single particle, whereby a large number of codes can be generated. A change in the initial composition of the code can thus be traced very well even after an explosion by analyzing a single particle, and the suitability of the taggant can thereby be assessed. A further objective is to verify the applicability of inductively coupled plasma mass spectrometry (ICP-MS) and particle analysis by scanning electron microscopy (SEM) for the analysis of the detonated identification taggants. Taken together, the results of the ICP-MS analysis and the SEM particle analysis point to a fractionation of the investigated lanthanides, or of their reaction products, after the explosion as a function of their thermal stability. The findings show an enrichment of the lanthanides with higher temperature resistance in larger particles, implying an enrichment of lanthanides with lower temperature resistance in smaller particles. This can be partly explained by a fractionation process dependent on the temperature stability of the lanthanides or their reaction products. The mechanisms underlying the fractionation and their mutual interactions during an explosion could not be conclusively clarified within the scope of this work. The general applicability, and in some circumstances the necessary complementary use, of the two methods ICP-MS and SEM particle analysis is demonstrated in this work. With its large analyzed sample area and high accuracy, ICP-MS is a good method for characterizing the concentration ratios of the investigated lanthanides. SEM particle analysis, on the other hand, permits an unambiguous differentiation of the element association per particle in the case of contamination of the samples with other lanthanide-containing particles. In contrast to ICP-MS, it can thus provide information on the nature and composition of the contamination. Within the investigations undertaken, the sampling technique applied for ICP-MS constituted an ideal type of sampling. On other surfaces, however, it could lead to systematically biased results as a consequence of the fractionation into different particle sizes. To ensure the general applicability of ICP-MS with regard to the analysis of detonated lanthanides, further detonations on different sample surfaces should be carried out and, if necessary, further sampling, digestion and enrichment procedures should be evaluated.
Abstract:
In the first chapter, I develop a panel no-cointegration test which extends Pesaran, Shin and Smith (2001)'s bounds test to the panel framework by considering the individual regressions in a Seemingly Unrelated Regression (SUR) system. This makes it possible to take into account unobserved common factors that contemporaneously affect all the units of the panel and provides, at the same time, unit-specific test statistics. Moreover, the approach is particularly suited when the number of individuals in the panel is small relative to the number of time series observations. I develop the algorithm to implement the test and use Monte Carlo simulation to analyze its properties. The small sample properties of the test are remarkable compared to its single-equation counterpart. I illustrate the use of the test through a test of Purchasing Power Parity in a panel of EU15 countries. In the second chapter of my PhD thesis, I verify the Expectation Hypothesis of the Term Structure (EHTS) in the repurchase agreements (repo) market with a new testing approach. I consider an "inexact" formulation of the EHTS, which models a time-varying component in the risk premia, and I treat the interest rates as a non-stationary cointegrated system. The effect of heteroskedasticity is controlled by means of testing procedures (bootstrap and heteroskedasticity correction) which are robust to variance and covariance shifts over time. I find that the long-run implications of the EHTS are verified. A rolling window analysis clarifies that the EHTS is only rejected in periods of turbulence in financial markets. The third chapter introduces the Stata command "bootrank", which implements the bootstrap likelihood ratio rank test algorithm developed by Cavaliere et al. (2012). The command is illustrated through an empirical application on the term structure of interest rates in the US.
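To make the last step concrete: a bootstrap rank test replaces asymptotic critical values with a resampling loop of the following shape. The sketch below is a generic Python outline under stated assumptions (the caller supplies `trace_stat` and `simulate_under_null`); it is not the `bootrank` implementation, which is written in Stata/Mata.

```python
import numpy as np

def bootstrap_rank_pvalue(y, trace_stat, simulate_under_null, r, B=499, seed=0):
    """trace_stat(y, r) -> LR trace statistic for H0: rank <= r.
    simulate_under_null(y, r, rng) -> bootstrap sample generated from the
    model estimated under the rank-r null (e.g. resampled VECM residuals)."""
    rng = np.random.default_rng(seed)
    observed = trace_stat(y, r)
    exceed = 0
    for _ in range(B):
        y_star = simulate_under_null(y, r, rng)
        if trace_stat(y_star, r) >= observed:
            exceed += 1
    return (exceed + 1) / (B + 1)       # standard bootstrap p-value

# Dummy demo of the calling convention only: a fake statistic and a "null"
# that reshuffles rows; real use plugs in a VECM trace statistic and
# residual-based resampling under rank r.
y_demo = np.cumsum(np.random.default_rng(1).normal(size=(200, 3)), axis=0)
stat = lambda y, r: float(abs(np.corrcoef(y[:, 0], y[:, 1])[0, 1]))
sim = lambda y, r, rng: rng.permutation(y)
print(bootstrap_rank_pvalue(y_demo, stat, sim, r=0, B=99))
```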
Abstract:
Data sets describing the state of the earth's atmosphere are of great importance in the atmospheric sciences. Over the last decades, the quality and sheer amount of the available data increased significantly, resulting in a rising demand for new tools capable of handling and analysing these large, multidimensional sets of atmospheric data. The interdisciplinary work presented in this thesis covers the development and the application of practical software tools and efficient algorithms from the field of computer science, aiming at the goal of enabling atmospheric scientists to analyse and to gain new insights from these large data sets. For this purpose, our tools combine novel techniques with well-established methods from different areas such as scientific visualization and data segmentation. In this thesis, three practical tools are presented. Two of these tools are software systems (Insight and IWAL) for different types of processing and interactive visualization of data; the third tool is an efficient algorithm for data segmentation implemented as part of Insight. Insight is a toolkit for the interactive, three-dimensional visualization and processing of large sets of atmospheric data, originally developed as a testing environment for the novel segmentation algorithm. It provides a dynamic system for combining data from different sources at runtime, a variety of data processing algorithms, and several visualization techniques. Its modular architecture and flexible scripting support led to additional applications of the software, of which two examples are presented: the usage of Insight as a WMS (web map service) server, and the automatic production of image sequences for the visualization of cyclone simulations. The core application of Insight is the provision of the novel segmentation algorithm for the efficient detection and tracking of 3D features in large sets of atmospheric data, as well as for the precise localization of the occurring genesis, lysis, merging and splitting events. Data segmentation usually leads to a significant reduction of the size of the considered data. This enables a practical visualization of the data, statistical analyses of the features and their events, and the manual or automatic detection of interesting situations for subsequent detailed investigation. The concepts of the novel algorithm, its technical realization, and several extensions for avoiding under- and over-segmentation are discussed. As example applications, this thesis covers the setup and the results of the segmentation of upper-tropospheric jet streams and cyclones as full 3D objects. Finally, IWAL is presented, a web application providing easy interactive access to meteorological data visualizations, primarily aimed at students. As a web application, it avoids the need to retrieve all input data sets and to install and handle complex visualization tools on a local machine. The main challenge in providing customizable visualizations to large numbers of simultaneous users was to find an acceptable trade-off between the available visualization options and the performance of the application. Besides the implementation details, benchmarks and the results of a user survey are presented.
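The detection-and-tracking idea can be illustrated with a minimal sketch (a simplification, not the Insight algorithm, which additionally guards against under- and over-segmentation): features are connected components above a threshold, and genesis, lysis, merging and splitting events fall out of the overlap relation between consecutive time steps.

```python
import numpy as np
from scipy import ndimage

def segment(field, threshold):
    # Connected components of the voxels exceeding the threshold.
    labels, n_features = ndimage.label(field > threshold)
    return labels, n_features

def overlaps(labels_t0, labels_t1):
    """Map each feature at t0 to the t1 features it overlaps with:
    no partner -> lysis, several partners -> splitting, and so on."""
    links = {}
    for f in range(1, labels_t0.max() + 1):
        partners = np.unique(labels_t1[labels_t0 == f])
        links[f] = [int(p) for p in partners if p != 0]
    return links

rng = np.random.default_rng(1)
wind = rng.random((20, 40, 40))                    # stand-in 3D field
l0, _ = segment(wind, 0.95)
l1, _ = segment(np.roll(wind, 1, axis=2), 0.95)    # "advected" field
print(len(overlaps(l0, l1)), "features tracked")
```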
Abstract:
Background: The bacterial colonization of the oral mucosa was evaluated in patients with asymptomatic oral lichen planus (OLP) and compared to the microbiologic status in mucosally healthy subjects. Methods: Bacteria from patients with clinically and histopathologically diagnosed OLP from the Stomatology Service, Department of Oral Surgery and Stomatology, School of Dental Medicine, University of Bern, were collected with a non-invasive swab system. Samples were taken from OLP lesions on the gingiva and from non-affected sites on the contralateral side of the mouth. The control population did not have OLP and was recruited from the student clinic. All samples were processed with the checkerboard DNA-DNA hybridization method using well-defined bacterial species for the analysis. Results: Significantly higher bacterial counts of Bacteroides ureolyticus (P = 0.001), Dialister species (sp.) (P = 0.006), Staphylococcus haemolyticus (P = 0.007), and Streptococcus agalactiae (P = 0.006) were found in samples taken from OLP lesions compared to sites with no clinical evidence of OLP. Significantly higher bacterial counts were found for Capnocytophaga sputigena, Eikenella corrodens, Lactobacillus crispatus, Mobiluncus curtisii, Neisseria mucosa, Prevotella bivia, Prevotella intermedia, and S. agalactiae at sites with lesions in subjects with OLP compared to sites in control subjects (P < 0.001). Conclusions: Microbiologic differences were found between sites with OLP and sites in subjects without a diagnosis of OLP. Specifically, higher counts of staphylococci and S. agalactiae were found in OLP lesions.
Abstract:
Water-saturated debris flows are among the most destructive mass movements. Their complex nature presents a challenge for quantitative description and modeling. In order to improve understanding of the dynamics of these flows, it is important to seek a simplified dynamic system underlying their behavior. Models currently in use to describe the motion of debris flows employ depth-averaged equations of motion, typically assuming negligible effects from vertical acceleration. However, in many cases debris flows experience significant vertical acceleration as they move across irregular surfaces, and it has been proposed that friction associated with vertical forces and liquefaction merit inclusion in any comprehensive mechanical model. The intent of this work is to determine the effect of vertical acceleration through a series of laboratory experiments designed to simulate debris flows, thereby testing a recent model for debris flows experimentally. In the experiments, a mass of water-saturated sediment is released suddenly from a holding container, and parameters including the rate of collapse, pore-fluid pressure, and bed load are monitored. The experiments are simplified to axial geometry so that variables act solely in the vertical dimension. Steady-state equations for the motion of the sediment mass are not sufficient to accurately model the independent solid and fluid constituents in these experiments. The model developed in this work more accurately predicts the bed-normal stress of a saturated sediment mass in motion and illustrates the importance of acceleration and deceleration.
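A toy calculation makes the role of vertical acceleration concrete. The sketch below is a back-of-the-envelope illustration with assumed numbers (bulk density, column height); it is not the model developed in the thesis.

```python
RHO = 2000.0   # bulk density of saturated sediment, kg/m^3 (assumed)
G = 9.81       # gravitational acceleration, m/s^2

def bed_normal_stress(h, a_z, pore_pressure=0.0):
    """Total normal stress [Pa] at the bed for a sediment column of
    thickness h [m] accelerating vertically at a_z [m/s^2] (positive
    upward); subtracting the basal pore-fluid pressure gives the
    effective stress that depth-averaged models with a_z = 0 miss."""
    total = RHO * h * (G + a_z)
    return total - pore_pressure

print(bed_normal_stress(h=0.5, a_z=0.0))     # static column
print(bed_normal_stress(h=0.5, a_z=-9.81))   # free fall: stress vanishes
```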
Abstract:
Sustainable yields from water wells in hard-rock aquifers are achieved when the well bore intersects fracture networks. Fracture networks are often not readily discernible at the surface. Lineament analysis using remotely sensed satellite imagery has been employed to identify surface expressions of fracturing, and a variety of image-analysis techniques have been successfully applied in "ideal" settings. An ideal setting for lineament detection is one where the influences of human development, vegetation, and climatic conditions are minimal and the hydrogeological conditions and geologic structure are known. There is not yet a well-accepted protocol for mapping lineaments, nor have different approaches been compared in non-ideal settings. A new image-processing/synthesis approach was developed to identify which satellite imagery types succeed for lineament analysis in non-ideal terrain. Four satellite sensors (ASTER, Landsat7 ETM+, QuickBird, RADARSAT-1) and a digital elevation model were evaluated for lineament analysis in Boaco, Nicaragua, where the landscape is subject to varied vegetative cover, a plethora of anthropogenic features, and frequent cloud cover that limits the availability of optical satellite data. A variety of digital image processing techniques were employed and lineament interpretations were performed to obtain 12 complementary image products, which were evaluated subjectively to identify lineaments. The 12 lineament interpretations were synthesized to create a raster image of lineament zone coincidence that shows the level of agreement among the 12 interpretations. A composite lineament interpretation was made using the coincidence raster to restrict lineament observations to areas where multiple interpretations (at least 4) agree. Nine of the 11 previously mapped faults were identified from the coincidence raster. An additional 26 lineaments were identified from the coincidence raster, and the locations of 10 were confirmed by field observation. Four manual pumping tests suggest that well productivity is higher for wells proximal to lineament features. Interpretations from RADARSAT-1 products were superior to those from the other sensor products, suggesting that quality lineament interpretation in this region requires anthropogenic features to be minimized and topographic expressions to be maximized. The approach developed in this study has the potential to improve the siting of wells in non-ideal regions.
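The synthesis step reduces to simple raster algebra. Here is a minimal sketch with synthetic stand-in data; the real inputs would be the 12 georeferenced binary lineament rasters, and the threshold of 4 agreeing interpretations is the one reported above.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for 12 binary lineament interpretations on a common grid.
interpretations = rng.random((12, 200, 200)) > 0.8

coincidence = interpretations.sum(axis=0)   # 0..12 agreeing interpretations
composite = coincidence >= 4                # composite lineament interpretation
print(int(composite.sum()), "cells retained")
```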
Abstract:
In a statistical inference scenario, the estimation of a target signal or its parameters is done by processing data from informative measurements. The estimation performance can be enhanced if we choose the measurements based on criteria that direct our sensing resources such that the measurements are more informative about the parameter we intend to estimate. When taking multiple measurements, the measurements can be chosen online so that more information is extracted from the data in each measurement process. This approach fits well within the Bayesian inference model, which is often used to produce successive posterior distributions of the associated parameter. We explore the sensor array processing scenario for adaptive sensing of a target parameter. The measurement choice is described by a measurement matrix that multiplies the data vector normally associated with array signal processing. The adaptive sensing of both static and dynamic system models is done by the online selection of a proper measurement matrix over time. For the dynamic system model, the target is assumed to move according to some distribution, and the prior distribution is updated at each time step. Some of the information gained through adaptive sensing of the moving target is lost due to the relative shift of the target. The adaptive sensing paradigm has many similarities with compressive sensing. We have attempted to reconcile the two approaches by modifying the observation model of adaptive sensing to match the compressive sensing model for the estimation of a sparse vector.
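The static case can be sketched in a few lines under simplifying assumptions of my own (scalar linear measurements, Gaussian prior and noise, rather than the full measurement-matrix setting): each measurement vector is chosen online along the direction of largest remaining posterior uncertainty, and the posterior is updated in closed form.

```python
import numpy as np

rng = np.random.default_rng(42)
dim, noise_var = 4, 0.01
theta = rng.normal(size=dim)      # unknown target parameter (hidden)

mean = np.zeros(dim)              # Gaussian prior N(0, I)
cov = np.eye(dim)

for step in range(12):
    # Adaptive choice: measure along the top eigenvector of the posterior
    # covariance, i.e. the least-resolved direction so far.
    eigvals, eigvecs = np.linalg.eigh(cov)
    m = eigvecs[:, -1]
    y = m @ theta + rng.normal(scale=noise_var ** 0.5)
    # Standard Gaussian conjugate update for a scalar observation.
    s = m @ cov @ m + noise_var
    k = cov @ m / s
    mean = mean + k * (y - m @ mean)
    cov = cov - np.outer(k, m @ cov)

print("estimation error:", np.linalg.norm(mean - theta))
```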