897 results for cyber-physical systems


Relevance:

30.00%

Publisher:

Abstract:

The ability to entrap drugs within vehicles and subsequently release them has led to new treatments for a number of diseases. Based on an associative phase separation and interfacial diffusion approach, we developed a way to prepare DNA gel particles without adding any kind of cross-linker or organic solvent. Among the various agents studied, cationic surfactants offered particularly efficient control over the encapsulation and release of DNA from these DNA gel particles. The driving force for this strong association is the electrostatic interaction between the two components, as induced by the entropic increase due to the release of the respective counter-ions. However, little is known about the influence of the respective counter-ions on this surfactant-DNA interaction. Here we examined the effect of different counter-ions on the formation and properties of the DNA gel particles by mixing DNA (either single-stranded (ssDNA) or double-stranded (dsDNA)) with the single-chain surfactant dodecyltrimethylammonium (DTA). In particular, we used as counter-ions of this surfactant the hydrogen sulfate and trifluoromethane sulfonate anions and the two halides, chloride and bromide. Effects on the morphology of the particles obtained, the encapsulation of DNA and its release, as well as the haemocompatibility of these particles, are presented, using counter-ion structure and DNA conformation as controlling parameters. Analysis of the data indicates that the degree of counter-ion dissociation from the surfactant micelles and the polar/hydrophobic character of the counter-ion are important parameters determining the final properties of the particles. The stronger interaction of ssDNA than of dsDNA with the amphiphiles suggests an important role of hydrophobic interactions in DNA.

Relevance:

30.00%

Publisher:

Abstract:

We investigate the canonical equilibrium of systems with long-range forces in competition. These forces create a modulation in the interaction potential, and modulated phases appear at the system scale. The structure of these phases differentiates such systems from those with monotonic potentials, where only the mean-field and disordered phases exist. With increasing temperature, the system switches from one ordered phase to another through a first-order phase transition. Both mean-field and modulated phases may be stable, even at zero temperature, and the long-range nature of the interaction leads to metastability characterized by extremely long time scales.

Relevance:

30.00%

Publisher:

Abstract:

The two main tools to determine the dynamical and physical parameters of exoplanet systems are radial velocity (RV) measurements and, when available, transit timings. The two techniques are complementary: the RVs give access to some of the orbital elements, while the transit timings allow us to obtain the orbital inclination and the planetary radius, which cannot be obtained from the RVs, and to resolve the degeneracy in the determination of the planet mass from the RVs. Space observations of transiting planets are, however, not limited to transit times: they extend over long periods of time and are precise enough to provide information on variations along the orbit. Besides the effects of stellar rotation, three effects deserve mention: the Doppler shift of the radiation flux caused by the stellar motion around the center of mass, or Beaming Effect (BE); the Ellipsoidal Variability (EV) due to the tidal deformation of the star by the gravity of its close companion; and the Reflection (ER) of the stellar radiation incident on the planet and re-emitted toward the observer. In the case of large hot Jupiters, these effects are enhanced by the strong gravitational interaction, and the analysis of the light variation allows independent estimates of the mass and radius of the planet. The planetary system CoRoT 3 is favorable for such an analysis; in this case the secondary is a brown dwarf whose mass is of the order of 22 Jupiter masses. We show results obtained from the analysis of 35 RV measurements, 236999 photometric observations and 11 additional RV observations made during a transit to determine the stellar rotation via the Rossiter-McLaughlin effect. The results of this determination are presented in this communication and compared with other published determinations.
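As an illustration of how these three photometric effects combine in the phase curve, a commonly used first-order decomposition (the BEER-type approach; the exact model adopted in this analysis may differ, and the signs depend on the phase convention) is sketched below. Only the scalings of the amplitudes are indicated; the proportionality factors depend on limb darkening, gravity darkening, albedo and the stellar spectrum.

```latex
% Illustrative phase-curve model: \phi is the orbital phase from inferior conjunction,
% K the RV semi-amplitude, M_2 and R_2 the companion mass and radius, a the semi-major axis.
\frac{\Delta F}{F} \;\simeq\;
      A_{\mathrm{beam}}\,\sin(2\pi\phi)
    - A_{\mathrm{ellip}}\,\cos(4\pi\phi)
    - A_{\mathrm{refl}}\,\cos(2\pi\phi),
\qquad
A_{\mathrm{beam}} \propto \frac{K}{c},\quad
A_{\mathrm{ellip}} \propto \frac{M_2}{M_\star}\Bigl(\frac{R_\star}{a}\Bigr)^{3}\sin^{2}i,\quad
A_{\mathrm{refl}} \propto \Bigl(\frac{R_2}{a}\Bigr)^{2}.
```

Fitting such amplitudes to the photometry is what provides mass and radius estimates independent of the RV solution.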

Relevance:

30.00%

Publisher:

Abstract:

The University of São Paulo has been experiencing an increase in content in electronic and digital formats, distributed by different suppliers and hosted remotely or in clouds, and faces growing difficulties in providing its users with access to this digital collection, which must coexist with the traditional world of physical collections. A possible solution was identified in the new generation of systems called Web Scale Discovery, which allow better management, data integration and faster searching. To identify whether and how such a system would meet USP's demands and expectations and, if so, which criteria should guide the analysis of such a tool, an essentially documentary analytical study was structured, based on a literature review and on data available on official websites and from libraries using this kind of resource. The conceptual base of the study was defined after identifying the software assessment methods already available, generating a framework of 40 analysis criteria, ranging from the single access interface to information content, Web 2.0 features, intuitive interface and faceted navigation, among others. Details of the studies conducted on four of the major systems currently available in this software category are presented, supporting the decision-making of other libraries interested in such systems.

Relevance:

30.00%

Publisher:

Abstract:

Doctoral program: Motor praxiology, physical education and sport training

Relevance:

30.00%

Publisher:

Abstract:

Work carried out by: Garijo, J. C., Hernández León, S.

Relevance:

30.00%

Publisher:

Abstract:

Traditional software engineering approaches and metaphors fall short when applied to areas of growing relevance such as electronic commerce, enterprise resource planning, and mobile computing: such areas, in fact, generally call for open architectures that may evolve dynamically over time so as to accommodate new components and meet new requirements. This is probably one of the main reasons why the agent metaphor and the agent-oriented paradigm are gaining momentum in these areas. This thesis deals with the engineering of complex software systems in terms of the agent paradigm. This paradigm is based on the notions of agent and systems of interacting agents as fundamental abstractions for designing, developing and managing at runtime typically distributed software systems. However, today the engineer often works with technologies that do not support the abstractions used in the design of the systems. For this reason, research on methodologies becomes a central point of the scientific activity. Currently most agent-oriented methodologies are supported by small teams of academic researchers; as a result, most of them are at an early stage and still in the context of mostly "academic" approaches to agent-oriented systems development. Moreover, such methodologies are not well documented and are very often defined and presented by focusing only on specific aspects of the methodology. The role played by meta-models becomes fundamental for comparing and evaluating the methodologies: in fact, a meta-model specifies the concepts, rules and relationships used to define methodologies. Although it is possible to describe a methodology without an explicit meta-model, formalising the underpinning ideas of the methodology in question is valuable when checking its consistency or planning extensions or modifications. A good meta-model must address all the different aspects of a methodology, i.e. the process to be followed, the work products to be generated and those responsible for making all this happen. In turn, specifying the work products that must be developed implies defining the basic modelling building blocks from which they are built. As a building block, the agent abstraction alone is not enough to fully model all the aspects related to multi-agent systems in a natural way. In particular, different perspectives exist on the role that the environment plays within agent systems; however, it is clear at least that all non-agent elements of a multi-agent system are typically considered to be part of the multi-agent system environment. The key role of the environment as a first-class abstraction in the engineering of multi-agent systems is today generally acknowledged in the multi-agent system community, so the environment should be explicitly accounted for in the engineering of multi-agent systems, working as a new design dimension for agent-oriented methodologies. At least two main ingredients shape the environment: environment abstractions, entities of the environment encapsulating some functions, and topology abstractions, entities of the environment that represent its (either logical or physical) spatial structure. In addition, the engineering of non-trivial multi-agent systems requires principles and mechanisms for supporting the management of the complexity of the system representation. These principles lead to the adoption of a multi-layered description, which can be used by designers to provide different levels of abstraction over multi-agent systems. The research in these fields has led to the formulation of a new version of the SODA methodology in which environment abstractions and layering principles are exploited for engineering multi-agent systems.
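As a purely illustrative sketch of these building blocks (not the SODA meta-model or notation; all class and method names below are hypothetical), agents, environment abstractions, topology abstractions and a layered system description could be expressed in code roughly as follows:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Agent:
    """An autonomous, goal-driven entity (first-class design abstraction)."""
    name: str
    goals: List[str] = field(default_factory=list)

@dataclass
class EnvironmentAbstraction:
    """A non-agent element of the environment encapsulating some function."""
    name: str
    function: str  # e.g. "shared tuple space", "message board"

@dataclass
class TopologyAbstraction:
    """An element representing the (logical or physical) spatial structure."""
    name: str
    connects: List[str] = field(default_factory=list)  # names of located elements

@dataclass
class Layer:
    """One level of abstraction in a multi-layered system description."""
    agents: List[Agent] = field(default_factory=list)
    environment: List[EnvironmentAbstraction] = field(default_factory=list)
    topology: List[TopologyAbstraction] = field(default_factory=list)

class MultiAgentSystemModel:
    """Layered description: designers zoom in and out across abstraction levels."""
    def __init__(self) -> None:
        self.layers: Dict[int, Layer] = {}

    def add_layer(self, level: int, layer: Layer) -> None:
        self.layers[level] = layer

    def zoom(self, level: int) -> Layer:
        return self.layers[level]
```

The point of the sketch is only that environment and topology appear as first-class design entities alongside agents, and that the same system admits several coexisting levels of description.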

Relevance:

30.00%

Publisher:

Abstract:

The sustained demand for faster, more powerful chips has been met by the availability of chip manufacturing processes allowing for the integration of increasing numbers of computation units onto a single die. The resulting outcome, especially in the embedded domain, has often been called SYSTEM-ON-CHIP (SOC) or MULTI-PROCESSOR SYSTEM-ON-CHIP (MPSOC). MPSoC design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection. With the number of on-chip blocks presently ranging in the tens, and quickly approaching the hundreds, the novel issue of how best to provide on-chip communication resources is clearly felt. NETWORKS-ON-CHIPS (NOCS) are the most comprehensive and scalable answer to this design concern. By bringing large-scale networking concepts to the on-chip domain, they guarantee a structured answer to present and future communication requirements. The point-to-point connection and packet-switching paradigms they involve are also of great help in minimizing wiring overhead and physical routing issues. However, as with any technology of recent inception, NoC design is still an evolving discipline. Several main areas of interest require deep investigation for NoCs to become viable solutions:
• The design of the NoC architecture needs to strike the best tradeoff among performance, features and the tight area and power constraints of the on-chip domain.
• Simulation and verification infrastructure must be put in place to explore, validate and optimize NoC performance.
• NoCs offer a huge design space, thanks to their extreme customizability in terms of topology and architectural parameters. Design tools are needed to prune this space and pick the best solutions.
• Even more so given their global, distributed nature, it is essential to evaluate the physical implementation of NoCs to assess their suitability for next-generation designs and their area and power costs.
This dissertation focuses on all of the above points, by describing a NoC architectural implementation called ×pipes; a NoC simulation environment within a cycle-accurate MPSoC emulator called MPARM; and a NoC design flow consisting of a front-end tool for optimal NoC instantiation, called SunFloor, and a set of back-end facilities for the study of NoC physical implementations. This dissertation proves the viability of NoCs for current and upcoming designs, by outlining their advantages (along with a few tradeoffs) and by providing a full NoC implementation framework. It also presents some examples of additional extensions of NoCs, allowing e.g. for increased fault tolerance, and outlines where NoCs may find further application scenarios, such as in stacked chips.
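As an illustration of the packet-switching paradigm mentioned above (and not of the actual ×pipes architecture or its routing algorithms), the following minimal sketch implements dimension-ordered XY routing on a 2D-mesh NoC; the mesh size in the example and the function name are assumptions.

```python
from typing import List, Tuple

Coord = Tuple[int, int]  # (x, y) position of a router in a 2D mesh

def xy_route(src: Coord, dst: Coord) -> List[Coord]:
    """Dimension-ordered (XY) routing: travel along X first, then along Y.
    Deadlock-free on a 2D mesh because it never takes a Y-to-X turn."""
    path = [src]
    x, y = src
    dx, dy = dst
    while x != dx:                      # move horizontally first
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                      # then move vertically
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

if __name__ == "__main__":
    # Hop count between two cores on a hypothetical 4x4 mesh
    hops = xy_route((0, 0), (3, 2))
    print(len(hops) - 1, "hops:", hops)
```

Deterministic routes of this kind are one of the reasons packet-switched NoCs keep wiring and arbitration logic predictable compared with ad hoc point-to-point wiring.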

Relevance:

30.00%

Publisher:

Abstract:

The last decades have seen a large effort of the scientific community to study and understand the physics of sea ice. We currently have a wide, even though still not exhaustive, knowledge of sea ice dynamics and thermodynamics and of their temporal and spatial variability. Sea ice biogeochemistry is instead largely unknown. Sea ice algae production may account for up to 25% of overall primary production in ice-covered waters of the Southern Ocean. However, the influence of physical factors, such as the location of ice formation, the role of snow cover and light availability, on sea ice primary production is poorly understood. There are only sparse localized observations and little knowledge of the functioning of sea ice biogeochemistry at larger scales. Modelling then becomes an auxiliary tool to help qualify and quantify the role of sea ice biogeochemistry in the ocean dynamics. In this thesis, a novel approach is used for the modelling and coupling of sea ice biogeochemistry, and in particular its primary production, to sea ice physics. Previous attempts were based on the coupling of rather complex sea ice physical models to empirical or relatively simple biological or biogeochemical models. The focus is moved here to a more biologically oriented point of view. A simple, yet comprehensive, physical model of sea ice thermodynamics (ESIM) was developed and coupled to a novel sea ice implementation (BFM-SI) of the Biogeochemical Flux Model (BFM). The BFM is a comprehensive model, widely used and validated in the open ocean environment and in regional seas. The physical model has been developed with the biogeochemical properties of sea ice in mind and with the physical inputs required to model sea ice biogeochemistry. The central concept of the coupling is the modelling of the Biologically Active Layer (BAL), the time-varying fraction of sea ice that is continuously connected to the ocean via brine pockets and channels and acts as a rich habitat for many microorganisms. The physical model provides the key physical properties of the BAL (e.g., brine volume, temperature and salinity), and the BFM-SI simulates the physiological and ecological response of the biological community to the physical environment. The new biogeochemical model is also coupled to the pelagic BFM through the exchange of organic and inorganic matter at the boundary between the two systems. This is done by computing the entrapment of matter and gases when sea ice grows and their release to the ocean when sea ice melts, so as to ensure mass conservation. The model was run in different ice-covered regions of the world ocean to test the generality of the parameterizations. The focus was particularly on regions of landfast ice, where primary production is generally large. The implementation of the BFM in sea ice and the coupling structure in General Circulation Models will add a new component to the latter (and in general to Earth System Models), which will be able to provide adequate estimates of the role and importance of sea ice biogeochemistry in the global carbon cycle.
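A deliberately schematic sketch of the coupling idea follows. The real ESIM/BFM-SI interface is far richer; all names, the toy growth rule and the numerical values below are assumptions made only to show the direction of the exchanges (physics forces biology, and melting returns matter to the pelagic system while conserving mass).

```python
from dataclasses import dataclass

@dataclass
class BALState:
    """Physical state of the Biologically Active Layer supplied by the ice model."""
    brine_volume: float   # brine fraction of the layer (dimensionless)
    temperature: float    # degrees C
    salinity: float       # brine salinity

def physics_step(day: int) -> BALState:
    """Placeholder for the thermodynamic model: BAL state for a given day."""
    return BALState(brine_volume=0.02, temperature=-5.0, salinity=60.0)

def biology_step(state: BALState, algae_c: float, dt: float = 1.0) -> float:
    """Placeholder for the biogeochemical model: toy temperature-limited growth."""
    growth_rate = 0.1 * max(0.0, 1.0 + state.temperature / 20.0)  # per day, toy rule
    return algae_c + growth_rate * algae_c * dt

def exchange_with_ocean(algae_c: float, melt_fraction: float) -> tuple:
    """Mass-conserving release: the melted fraction of biomass goes to the ocean."""
    released = algae_c * melt_fraction
    return algae_c - released, released

algae, ocean_pool = 1.0, 0.0            # arbitrary initial carbon stocks
for day in range(120):
    bal = physics_step(day)             # physics -> biology (one-way forcing here)
    algae = biology_step(bal, algae)
    algae, flux = exchange_with_ocean(algae, melt_fraction=0.01)
    ocean_pool += flux                  # total carbon is conserved by construction
```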

Relevance:

30.00%

Publisher:

Abstract:

Synchronization is a key issue in any communication system, but it becomes fundamental in the navigation systems, which are entirely based on the estimation of the time delay of the signals coming from the satellites. Thus, even if synchronization has been a well known topic for many years, the introduction of new modulations and new physical layer techniques in the modern standards makes the traditional synchronization strategies completely ineffective. For this reason, the design of advanced and innovative techniques for synchronization in modern communication systems, like DVB-SH, DVB-T2, DVB-RCS, WiMAX, LTE, and in the modern navigation system, like Galileo, has been the topic of the activity. Recent years have seen the consolidation of two different trends: the introduction of Orthogonal Frequency Division Multiplexing (OFDM) in the communication systems, and of the Binary Offset Carrier (BOC) modulation in the modern Global Navigation Satellite Systems (GNSS). Thus, a particular attention has been given to the investigation of the synchronization algorithms in these areas.
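As an example of the kind of estimator involved on the OFDM side, the sketch below implements a textbook cyclic-prefix correlation scheme for joint symbol timing and fractional carrier-frequency-offset estimation; it is a standard baseline, not necessarily one of the algorithms developed in this activity, and the function name and parameters are assumptions.

```python
import numpy as np

def cp_sync(r: np.ndarray, n_fft: int, n_cp: int):
    """Joint symbol-timing / fractional-CFO estimation from cyclic-prefix correlation.
    r: received baseband samples containing at least one full OFDM symbol.
    Only fractional CFO (|offset| < 0.5 subcarrier spacings) is recoverable."""
    n_pos = len(r) - n_fft - n_cp
    metric = np.empty(n_pos, dtype=complex)
    for m in range(n_pos):
        # correlate the candidate CP with its replica one FFT length later
        metric[m] = np.vdot(r[m:m + n_cp], r[m + n_fft:m + n_fft + n_cp])
    m_hat = int(np.argmax(np.abs(metric)))               # timing estimate (sample index)
    eps_hat = np.angle(metric[m_hat]) / (2 * np.pi)      # CFO in subcarrier spacings
    return m_hat, eps_hat
```

The correlation peak locates the symbol start, and the phase of the peak encodes the frequency offset because the cyclic prefix repeats exactly one FFT length later in the symbol.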

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents the outcomes of a Ph.D. course in telecommunications engineering. It is focused on the optimization of the physical layer of digital communication systems and provides innovations for both multi- and single-carrier systems. For the former type we have first addressed the problem of the capacity in the presence of several nuisances. Moreover, we have extended the concept of the Single Frequency Network to the satellite scenario, and then we have introduced a novel concept in subcarrier data mapping, resulting in a very low PAPR of the OFDM signal. For single-carrier systems we have proposed a method to optimize constellation design in the presence of strong distortion, such as the non-linear distortion introduced by a satellite's on-board high-power amplifier; we then developed a method to calculate the bit/symbol error rate of a given constellation, achieving improved accuracy with respect to the traditional Union Bound at no additional complexity. Finally we have designed a low-complexity SNR estimator, which saves one half of the multiplications with respect to the ML estimator while achieving similar estimation accuracy.
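For reference, the PAPR figure mentioned above can be measured as in the minimal sketch below. The function name, oversampling factor and QPSK example are assumptions; this is only the standard oversampled PAPR measurement, not the mapping scheme proposed in the thesis.

```python
import numpy as np

def ofdm_papr_db(freq_symbols: np.ndarray, oversampling: int = 4) -> float:
    """PAPR (dB) of one OFDM symbol, measured on an oversampled time-domain signal."""
    n = len(freq_symbols)
    spec = np.zeros(n * oversampling, dtype=complex)
    spec[: n // 2] = freq_symbols[: n // 2]        # positive frequencies
    spec[-(n // 2):] = freq_symbols[n // 2:]       # negative frequencies
    x = np.fft.ifft(spec)                          # oversampled time signal
    papr = np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2)
    return 10 * np.log10(papr)

# Example: QPSK on 256 subcarriers (a random uncoded symbol is typically ~10 dB)
rng = np.random.default_rng(0)
qpsk = (rng.choice([-1, 1], 256) + 1j * rng.choice([-1, 1], 256)) / np.sqrt(2)
print(f"PAPR = {ofdm_papr_db(qpsk):.1f} dB")
```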

Relevance:

30.00%

Publisher:

Abstract:

The stabilization of nanoparticles against irreversible particle aggregation and oxidation reactions is a requirement for further advancement in nanoparticle science and technology. For this reason, research on this topic focuses on the synthesis of various metal nanoparticles protected with monolayers containing different reactive head groups and functional tail groups. In this work, cuprous bromide nanocrystals with a diameter of about 20 nanometers have been synthesized according to a new synthetic method, adding ascorbic acid dropwise to a water solution of lithium bromide and cupric chloride under continuous stirring and nitrogen flux. Butanethiolate-protected Cu nanoparticles have been synthesized according to three different synthesis methods. Their morphologies appear related to the physicochemical conditions during the synthesis and to the dispersing medium used to prepare the sample. Synthesis method II allows stable nanoparticles of 1-2 nm in size to be obtained, both isolated and forming clusters. Nanoparticle cluster formation was enhanced when water was used as the dispersing medium, probably due to the hydrophobic nature of the butanethiolate layers coating the nanoparticle surface. Synthesis methods I and III lead to large, unstable spherical nanoparticles with sizes ranging between 20 and 50 nm. These nanoparticles appeared in the TEM micrographs with the same morphology independently of the dispersing medium used in the sample preparation. The stability and dimensions of the copper nanoparticles appear inversely related. Using the same methods described above for the butanethiolate-protected copper nanoparticles, 4-methylbenzenethiol-protected copper nanoparticles have been prepared. Diffractometric and spectroscopic data reveal that decomposition processes did not occur in the 4-methylbenzenethiol-protected copper nanoparticle precipitates from either formic acid or water over a period of six months. The anticarcinogenic effects of Se through multiple mechanisms have been extensively investigated and documented, and Se is regarded as a genuine nutritional cancer-protecting element with a significant protective effect against major forms of cancer. Furthermore, phloroglucinol was found to possess cytoprotective effects against oxidative stress caused by reactive oxygen species (ROS), which are associated with cell and tissue damage and are contributing factors in inflammation, aging, cancer, arteriosclerosis, hypertension and diabetes. The goal of our work has been to set up a new method to synthesize, under mild conditions, amorphous Se nanoparticles surface-capped with phloroglucinol, which is used during synthesis as the reducing agent to obtain stable Se nanoparticles in ethanol, exploiting the synergies offered by the specific anticarcinogenic properties of Se and the antioxidant ones of phloroglucinol. We have synthesized selenium nanoparticles protected by phenolic molecules chemically bonded to their surface. The phenol molecules coating the nanoparticle surfaces form poorly ordered arrays, as can be seen from the broader absorptions in the FT-IR spectrum with respect to those of crystalline phenol. On the other hand, metallic nanoparticles with unique optical properties, facile surface chemistry and an appropriate size scale are generating much enthusiasm in nanomedicine. In fact, Au nanoparticles have immense potential for both cancer diagnosis and therapy.
In particular, Au nanoparticles efficiently convert strongly absorbed light into localized heat, which can be exploited for the selective laser photothermal therapy of cancer. Accordingly, composites of metal nanoparticles and hydroxyapatite (HA) nanocrystals should have tremendous potential in novel methods for cancer therapy. We have successfully prepared 11-mercaptoundecanoic-acid-protected Au4Ag1 nanoparticles adsorbed on nanometric apatite crystals as an anticancer nanoparticle delivery system, utilizing biomimetic hydroxyapatite nanocrystals as delivery agents. Furthermore, natural chrysotile, formed by densely packed bundles of multiwalled hollow nanotubes, is a mineral very suitable for nanowire preparation when its inner nanometer-sized cavity is filled with a proper material. Bundles of chrysotile nanotubes can then behave as host systems, where their large interchannel separation is expected to prevent the interaction between individual guest metallic nanoparticles and to act as a confining barrier. Chrysotile nanotubes have been filled with molten metals such as Hg, Pb and Sn, with the semimetals Bi, Te and Se, and with semiconductor materials such as InSb, CdSe, GaAs and InP, using both high-pressure techniques and metal-organic chemical vapor deposition. Under hydrothermal conditions, chrysotile nanocrystals have been synthesized as a single phase and are very suitable for nanowire preparation by filling their inner nanometer-sized cavity with metallic nanoparticles. In this research work, stoichiometric synthetic chrysotile nanotubes have been synthesized, characterized and partially filled with mono- and bimetallic, highly monodisperse nanoparticles with diameters ranging from 1.7 to 5.5 nm depending on the core composition (Au, Au4Ag1, Au1Ag4, Ag). In the case of the 4-methylbenzenethiol-protected silver nanoparticles, the filling was carried out by convection and capillarity at room temperature and pressure, using a suitable organic solvent. We have obtained interesting new nanowires consisting of metallic nanoparticles confined inside inorganic nanotubes with an inner cavity of 7 nm and an insulating wall with a thickness ranging from 7 to 21 nm.

Relevance:

30.00%

Publisher:

Abstract:

The present thesis is concerned with the study of a quantum physical system composed of a small particle system (such as a spin chain) and several quantized massless boson fields (such as photon gases or phonon fields) at positive temperature. The setup serves as a simplified model for matter in interaction with thermal "radiation" from different sources. Here, questions concerning the dynamical and thermodynamic properties of particle-boson configurations far from thermal equilibrium are at the center of interest. We study a specific situation where the particle system is brought into contact with the boson systems (occasionally referred to as heat reservoirs), with the reservoirs prepared close to thermal equilibrium states, each at a different temperature. We analyze the interacting time evolution of such an initial configuration and we show thermal relaxation of the system into a stationary state, i.e., we prove the existence of a time-invariant state which is the unique limit state of the considered initial configurations evolving in time. As long as the reservoirs have been prepared at different temperatures, this stationary state features thermodynamic characteristics such as stationary energy fluxes and a positive entropy production rate, which distinguishes it from a thermal equilibrium at any temperature. Therefore, we refer to it as a non-equilibrium stationary state, or simply NESS. The physical setup is phrased mathematically in the language of C*-algebras. The thesis gives an extended review of the application of operator algebraic theories to quantum statistical mechanics and introduces in detail the mathematical objects used to describe matter in interaction with radiation. The C*-theory is adapted to the concrete setup. The algebraic description of the system is lifted into a Hilbert space framework. The appropriate Hilbert space representation is given by a bosonic Fock space over a suitable L2-space. The first part of the present work is concluded by the derivation of a spectral theory which connects the dynamical and thermodynamic features with spectral properties of a suitable generator, say K, of the time evolution in this Hilbert space setting. In that way, the question of thermal relaxation becomes a spectral problem. The operator K is of Pauli-Fierz type. The spectral analysis of the generator K follows. This task is the core part of the work and employs various kinds of functional analytic techniques. The operator K results from a perturbation of an operator L0 which describes the non-interacting particle-boson system. All spectral considerations are done in a perturbative regime, i.e., we assume that the strength of the coupling is sufficiently small. The extraction of dynamical features of the system from properties of K requires, in particular, knowledge of the spectrum of K in the nearest vicinity of eigenvalues of the unperturbed operator L0. Since convergent Neumann series expansions only allow the study of the perturbed spectrum in a neighborhood of the unperturbed one on a scale of the order of the coupling strength, we need to apply a more refined tool, the Feshbach map. This technique allows the analysis of the spectrum on a smaller scale by transferring the analysis to a spectral subspace. The need for spectral information on arbitrary scales requires an iteration of the Feshbach map. This procedure leads to an operator-theoretic renormalization group.
The reader is introduced to the Feshbach technique, and the renormalization procedure based on it is discussed in full detail. Further, it is explained how the spectral information is extracted from the renormalization group flow. The present dissertation extends a recent research contribution by Jakšić and Pillet on a similar physical setup in two ways: firstly, we consider the more delicate situation of bosonic heat reservoirs instead of fermionic ones, and secondly, the system can be studied uniformly for small reservoir temperatures. The adaptation of the Feshbach map-based renormalization procedure of Bach, Chen, Fröhlich, and Sigal to concrete spectral problems in quantum statistical mechanics is a further novelty of this work.
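For orientation, the generic (non-smooth) Feshbach-Schur map underlying this technique can be written as below for a projection P with complement \bar P = 1 - P; the thesis works with the smooth variant of Bach, Chen, Fröhlich and Sigal, in which the projections are replaced by smoothed cutoff operators, so the formula here is only the basic prototype.

```latex
% Feshbach-Schur map associated with a projection P, applied to K - z,
% assuming \bar P (K - z)\bar P is invertible on Ran \bar P.
% Isospectrality: K - z is invertible iff F_P(K - z) is invertible on Ran P.
F_P(K - z) \;=\; P (K - z) P \;-\; P K \bar P \,\bigl(\bar P (K - z) \bar P\bigr)^{-1} \bar P K P .
```

Iterating this map on successively smaller spectral subspaces is what produces the operator-theoretic renormalization group flow mentioned above.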

Relevance:

30.00%

Publisher:

Abstract:

This thesis deals with the design of advanced OFDM systems. Both waveform and receiver design have been treated. The main scope of the thesis is to study, create and propose ideas and novel design solutions able to cope with the weaknesses and crucial aspects of modern OFDM systems. Starting from the transmitter side, the problem represented by low resilience to non-linear distortion has been assessed. A novel technique that considerably reduces the Peak-to-Average Power Ratio (PAPR), yielding a quasi-constant signal envelope in the time domain (PAPR close to 1 dB), has been proposed. The proposed technique, named Rotation Invariant Subcarrier Mapping (RISM), is a novel scheme for subcarrier data mapping, where the symbols belonging to the modulation alphabet are not anchored, but maintain some degrees of freedom. In other words, a bit tuple is not mapped on a single point; rather, it is mapped onto a geometrical locus which is totally or partially rotation invariant. The final positions of the transmitted complex symbols are chosen by an iterative optimization process in order to minimize the PAPR of the resulting OFDM symbol. Numerical results confirm that RISM makes OFDM usable even in severely non-linear channels. Another well-known problem which has been tackled is the vulnerability to synchronization errors. Indeed, in an OFDM system an accurate recovery of the carrier frequency and symbol timing is crucial for the proper demodulation of the received packets. In general, timing and frequency synchronization is performed in two separate phases, called PRE-FFT and POST-FFT synchronization. Regarding the PRE-FFT phase, a novel joint symbol timing and carrier frequency synchronization algorithm has been presented. The proposed algorithm is characterized by very low hardware complexity and, at the same time, guarantees very good performance in both AWGN and multipath channels. Regarding the POST-FFT phase, a novel approach to both pilot structure and receiver design has been presented. In particular, a novel pilot pattern has been introduced in order to minimize the occurrence of overlaps between two pattern-shifted replicas. This allows conventional pilots to be replaced with nulls in the frequency domain, introducing the so-called Silent Pilots. As a result, the optimal receiver turns out to be very robust against severe Rayleigh multipath fading and characterized by low complexity. The performance of this approach has been analytically and numerically evaluated. Comparing the proposed approach with state-of-the-art alternatives, in both AWGN and multipath fading channels, considerable performance improvements have been obtained. The crucial problem of channel estimation has been thoroughly investigated, with particular emphasis on the decimation of the Channel Impulse Response (CIR) through the selection of the Most Significant Samples (MSSs). In this context our contribution is twofold: on the theoretical side, we derived lower bounds on the estimation mean-square error (MSE) for any MSS selection strategy; on the receiver design side, we proposed novel MSS selection strategies which have been shown to approach these MSE lower bounds and to outperform the state-of-the-art alternatives. Finally, the possibility of using Single Carrier Frequency Division Multiple Access (SC-FDMA) in the Broadband Satellite Return Channel has been assessed.
Notably, SC-FDMA is able to improve the physical-layer spectral efficiency with respect to the single-carrier systems which have been used so far in the Return Channel Satellite (RCS) standards. However, it requires strict synchronization and it is also sensitive to the phase noise of local radio-frequency oscillators. For this reason, an effective pilot tone arrangement within the SC-FDMA frame and a novel Joint Multi-User (JMU) estimation method for SC-FDMA have been proposed. As shown by numerical results, the proposed scheme manages to satisfy strict synchronization requirements and to guarantee proper demodulation of the received signal.
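As a toy illustration of CIR decimation through MSS selection, the sketch below keeps only the strongest estimated taps and zeroes the rest. This naive "strongest taps" rule and the function name are assumptions made for illustration; the selection strategies actually proposed in the thesis are different and are the ones shown to approach the derived MSE lower bounds.

```python
import numpy as np

def select_mss(cir_est: np.ndarray, n_taps: int) -> np.ndarray:
    """Keep only the n_taps most significant samples of an estimated CIR,
    zeroing the remaining (mostly noise-only) taps."""
    keep = np.argsort(np.abs(cir_est))[-n_taps:]   # indices of the strongest taps
    decimated = np.zeros_like(cir_est)
    decimated[keep] = cir_est[keep]
    return decimated

# Example: a noisy 32-tap channel estimate with 4 true paths
rng = np.random.default_rng(1)
h = np.zeros(32, dtype=complex)
h[[0, 3, 7, 12]] = [1.0, 0.6, 0.3, 0.2]
h_est = h + 0.05 * (rng.standard_normal(32) + 1j * rng.standard_normal(32))
print(np.sort(np.nonzero(select_mss(h_est, 4))[0]))   # ideally recovers {0, 3, 7, 12}
```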

Relevance:

30.00%

Publisher:

Abstract:

My work concerns two different systems of equations used in the mathematical modeling of semiconductors and plasmas: the Euler-Poisson system and the quantum drift-diffusion system. The first is given by the Euler equations for the conservation of mass and momentum, together with a Poisson equation for the electrostatic potential. The second one takes into account the physical effects due to the smallness of the devices (quantum effects). It is a simple extension of the classical drift-diffusion model, which consists of two continuity equations for the charge densities, together with a Poisson equation for the electrostatic potential. Using an asymptotic expansion method, we study (in the steady-state case for a potential flow) the limit to zero of the three physical parameters which arise in the Euler-Poisson system: the electron mass, the relaxation time and the Debye length. For each limit, we prove the existence and uniqueness of profiles for the asymptotic expansion, together with some error estimates. For a vanishing electron mass or a vanishing relaxation time, this method gives a new approach to the convergence of the Euler-Poisson system to the incompressible Euler equations. For a vanishing Debye length (also called the quasineutral limit), we obtain a new approach to the existence of solutions when boundary layers can appear (i.e. when no compatibility condition is assumed). Moreover, using an iterative method and a finite volume scheme or a penalized mixed finite volume scheme, we numerically illustrate the smallness condition on the electron mass needed for the existence of solutions to the system, a condition which has already been established in the literature. For the quantum drift-diffusion model, in the transient bipolar case in one space dimension, we show, by using a time discretization and energy estimates, the existence of solutions (for a general doping profile). We also prove rigorously the quasineutral limit (for a vanishing doping profile). Finally, using a new time discretization and an algorithmic construction of entropies, we prove some regularity properties for the solutions of the equation obtained in the quasineutral limit (for a vanishing pressure). This new regularity allows us to prove the positivity of solutions to this equation, at least for sufficiently large times.
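For concreteness, one commonly used scaled form of the isentropic, unipolar Euler-Poisson system, with the three small parameters made explicit, is sketched below; sign and scaling conventions vary between references, so this is only an illustrative normalization and not necessarily the exact one used in the work.

```latex
% Scaled isentropic Euler-Poisson system: n = electron density, u = velocity,
% \phi = electrostatic potential, p(n) = pressure law, C(x) = doping profile;
% \varepsilon = scaled electron mass, \tau = relaxation time, \lambda = scaled Debye length.
\begin{aligned}
&\partial_t n + \nabla\!\cdot(n u) = 0,\\
&\varepsilon\bigl(\partial_t(n u) + \nabla\!\cdot(n u\otimes u)\bigr) + \nabla p(n)
   \;=\; n\nabla\phi - \frac{n u}{\tau},\\
&\lambda^2 \Delta\phi \;=\; n - C(x).
\end{aligned}
```

The three limits studied above correspond to sending \varepsilon, \tau^{-1}-related relaxation effects, or \lambda to their singular values, each of which changes the character of the reduced system.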