109 results for range-separation parameter
Abstract:
This paper describes a new reliable method, based on modal interval analysis (MIA) and set inversion (SI) techniques, for the characterization of solution sets defined by quantified constraint satisfaction problems (QCSPs) over continuous domains. The presented methodology, called quantified set inversion (QSI), can be used over a wide range of engineering problems involving uncertain nonlinear models. Finally, an application to parameter identification is presented.
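The set-inversion side of this approach can be illustrated with a minimal SIVIA-style sketch in Python (plain set inversion with naive interval arithmetic, not the paper's modal-interval QSI algorithm; the ring-shaped target set and all function names are illustrative assumptions):

```python
# Classify boxes of [-2,2]^2 as inside/outside/boundary of the set
# {(x,y) : 1 <= x^2 + y^2 <= 4} by recursive bisection.

def sq_range(lo, hi):
    """Interval extension of x -> x^2 on [lo, hi]."""
    if lo >= 0:
        return lo * lo, hi * hi
    if hi <= 0:
        return hi * hi, lo * lo
    return 0.0, max(lo * lo, hi * hi)

def f_range(box):
    """Interval extension of f(x, y) = x^2 + y^2 on a box."""
    (xl, xh), (yl, yh) = box
    a_lo, a_hi = sq_range(xl, xh)
    b_lo, b_hi = sq_range(yl, yh)
    return a_lo + b_lo, a_hi + b_hi

def sivia(box, target=(1.0, 4.0), eps=0.05):
    """Bisect `box` until each sub-box is classified or smaller than eps."""
    inside, outside, boundary = [], [], []
    stack = [box]
    while stack:
        b = stack.pop()
        lo, hi = f_range(b)
        if target[0] <= lo and hi <= target[1]:
            inside.append(b)            # image interval fully in target
        elif hi < target[0] or lo > target[1]:
            outside.append(b)           # image interval misses target
        elif max(b[0][1] - b[0][0], b[1][1] - b[1][0]) < eps:
            boundary.append(b)          # undecided but small enough
        else:
            # bisect along the widest dimension
            d = 0 if (b[0][1] - b[0][0]) >= (b[1][1] - b[1][0]) else 1
            mid = 0.5 * (b[d][0] + b[d][1])
            left, right = list(b), list(b)
            left[d] = (b[d][0], mid)
            right[d] = (mid, b[d][1])
            stack += [tuple(left), tuple(right)]
    return inside, outside, boundary

inner, outer, border = sivia(((-2.0, 2.0), (-2.0, 2.0)))
```

The three lists form inner and outer approximations of the solution set; the boundary boxes shrink as `eps` decreases.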
Abstract:
Cerebral microdialysis (CMD) is a tool that provides relevant information for monitoring cerebral metabolism in neurocritical patients. Lactate and the lactate-pyruvate ratio (LPR) are two markers used to detect cerebral hypoxia in patients who have suffered a traumatic brain injury (TBI). Both markers can be abnormally elevated in circumstances that do not involve tissue hypoxia. In addition, the recent appearance of CMD catheters with larger pores, termed "high-resolution" catheters, broadens the range of molecules that can be detected in the dialysate. Objectives: 1) to describe the characteristics of cerebral energy metabolism observed in the acute phase in patients who have suffered a TBI, based on two indicators of anaerobic metabolism, lactate and the LPR, and 2) to determine the relative recovery (RR) of molecules involved in the neuroinflammatory response: IL-1β, IL-6, IL-8, and IL-10. Material and methods: 46 patients were selected from a cohort of patients with moderate or severe TBI admitted to the Intensive Care Unit of Vall d'Hebron University Hospital and monitored with CMD. Lactate and LPR levels were analyzed and correlated with PtiO2 levels. In vitro experiments were performed to study the recovery of the 100 kDa membranes in order to later interpret the actual levels of the studied molecules in the extracellular space of brain tissue. Results: The agreement between lactate and the LP ratio in determining episodes of metabolic dysfunction was weak (kappa index = 0.36, 95% CI: 0.34-0.39). In more than 80% of the cases in which lactate and the LP ratio were elevated, PtiO2 values were within the normal range (PtiO2 > 15 mmHg).
The recovery of cytokines through the microdialysis membrane was lower than expected given the pore size of the membrane. Conclusions: elevated lactate and LP ratio were a frequent finding after TBI and, in most cases, were not related to episodes of tissue hypoxia. Furthermore, membrane pore size is not the only parameter determining the RR of macromolecules.
Abstract:
PURPOSE: To study the effect of LASIK surgery on straylight and contrast sensitivity. METHODS: Twenty-eight patients were treated with LASIK. Visual quality was assessed before the operation and two months afterwards. RESULTS: Mean straylight and contrast sensitivity had not changed two months after the operation. Only one eye showed a marked increase in straylight. Nine eyes showed a slight decrease in contrast sensitivity. Two complications were found. CONCLUSION: After LASIK, most patients (80%) had no complications and maintained their visual quality. A few patients (16%) had somewhat reduced visual quality. Very few (4%) had clinical complications with reduced visual quality.
Abstract:
The problem of stability analysis for a class of neutral systems with mixed time-varying neutral, discrete and distributed delays and nonlinear parameter perturbations is addressed. By introducing a novel Lyapunov-Krasovskii functional and combining the descriptor model transformation, the Leibniz-Newton formula, some free-weighting matrices, and a suitable change of variables, new sufficient conditions are established for the stability of the considered system, which are neutral-delay-dependent, discrete-delay-range-dependent, and distributed-delay-dependent. The conditions are presented in terms of linear matrix inequalities (LMIs) and can be efficiently solved using convex programming techniques. Two numerical examples are given to illustrate the efficiency of the proposed method.
Abstract:
Using a panel of 48 provinces over four years, we empirically analyze a series of temporary policies aimed at curbing fuel consumption implemented in Spain between March and June 2011. The first policy was a reduction in the speed limit on highways. The second policy was an increase in the biofuel content of fuels used in the transport sector. The third measure was a reduction of 5% in commuting and regional train fares, which led two major metropolitan areas to reduce their overall fare for public transit. The results indicate that the speed limit reduction on highways reduced gasoline consumption by between 2% and 3%, while the increase in the biofuel content of gasoline increased this consumption. This last result is consistent with experimental evidence indicating that mileage per liter falls as the biofuel content of gasoline increases. As for the reduction in transit fares, we do not find a significant effect for this policy. However, in specifications including the urban transit fare for the major cities in each province, the estimated cross-price elasticity of the demand for gasoline (used as a proxy for car use) with respect to the price of transit is within the range reported in the literature. This is important since one of the main efficiency justifications for subsidizing public transit rests on the positive value of this parameter, and most of the estimates reported in the literature are quite dated.
Abstract:
This contribution compares existing and newly developed techniques for geometrically representing mean-variance-skewness portfolio frontiers, based on the rather widely adopted methodology of polynomial goal programming (PGP) on the one hand and the more recent approach based on the shortage function on the other hand. Moreover, we explain the working of these different methodologies in detail and provide graphical illustrations. Inspired by these illustrations, we prove a generalization of the well-known two-fund separation theorem from traditional mean-variance portfolio theory.
Abstract:
This paper deals with fault detection and isolation problems for nonlinear dynamic systems. Both problems are stated as constraint satisfaction problems (CSPs) and solved using consistency techniques. The main contribution is an isolation method based on consistency techniques and on refining the uncertainty space of interval parameters. The major advantage of this method is that isolation is fast even when taking into account uncertainty in parameters, measurements, and model errors. Interval calculations bring independence from the monotonicity assumption made by several observer-based approaches to fault isolation. An application to a well-known alcoholic fermentation process model is presented.
Abstract:
The problem of jointly estimating the number, the identities, and the data of active users in a time-varying multiuser environment was examined in a companion paper (IEEE Trans. Information Theory, vol. 53, no. 9, September 2007), at whose core was the use of the theory of finite random sets on countable spaces. Here we extend that theory to encompass the more general problem of estimating unknown continuous parameters of the active-user signals. This problem is solved here by applying the theory of random finite sets constructed on hybrid spaces. We do so by deriving Bayesian recursions that describe the evolution with time of a posteriori densities of the unknown parameters and data. Unlike in the above cited paper, wherein one could evaluate the exact multiuser set posterior density, here the continuous-parameter Bayesian recursions do not admit closed-form expressions. To circumvent this difficulty, we develop numerical approximations for the receivers that are based on Sequential Monte Carlo (SMC) methods ("particle filtering"). Simulation results, referring to a code-division multiple-access (CDMA) system, are presented to illustrate the theory.
Abstract:
From a managerial point of view, the more efficient, simple, and parameter-free (ESP) an algorithm is, the more likely it will be used in practice for solving real-life problems. Following this principle, an ESP algorithm for solving the Permutation Flowshop Sequencing Problem (PFSP) is proposed in this article. Using an Iterated Local Search (ILS) framework, the so-called ILS-ESP algorithm is able to compete in performance with other well-known ILS-based approaches, which are considered among the most efficient algorithms for the PFSP. However, while other similar approaches still employ several parameters that can affect their performance if not properly chosen, our algorithm does not require any particular fine-tuning process since it uses basic "common sense" rules for the local search, perturbation, and acceptance criterion stages of the ILS metaheuristic. Our approach defines a new operator for the ILS perturbation process, a new acceptance criterion based on extremely simple and transparent rules, and a biased randomization process of the initial solution to randomly generate different alternative initial solutions of similar quality, which is attained by applying a biased randomization to a classical PFSP heuristic. This diversification of the initial solution aims at avoiding poorly designed starting points and, thus, allows the methodology to take advantage of current trends in parallel and distributed computing. A set of extensive tests, based on literature benchmarks, has been carried out in order to validate our algorithm and compare it against other approaches. These tests show that our parameter-free algorithm is able to compete with state-of-the-art metaheuristics for the PFSP. Also, the experiments show that, when using parallel computing, it is possible to improve the top ILS-based metaheuristic by simply incorporating into it our biased randomization process with a high-quality pseudo-random number generator.
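A minimal sketch of the ILS skeleton such an algorithm builds on (generic ILS with a random-swap perturbation and a first-improvement insertion local search, not the paper's specific ESP operators; the toy instance and function names are illustrative assumptions):

```python
import random

def makespan(perm, proc):
    """PFSP makespan: completion time of the last job on the last machine.
    proc[j][m] = processing time of job j on machine m."""
    m = len(proc[0])
    comp = [0.0] * m
    for j in perm:
        comp[0] += proc[j][0]
        for k in range(1, m):
            comp[k] = max(comp[k], comp[k - 1]) + proc[j][k]
    return comp[-1]

def local_search(perm, proc):
    """First-improvement search over the insertion neighbourhood."""
    best = list(perm)
    improved = True
    while improved:
        improved = False
        for i in range(len(best)):
            job = best[i]
            rest = best[:i] + best[i + 1:]
            for pos in range(len(best)):
                cand = rest[:pos] + [job] + rest[pos:]
                if makespan(cand, proc) < makespan(best, proc):
                    best, improved = cand, True
                    break
            if improved:
                break
    return best

def ils(proc, iters=100, seed=0):
    """Iterated Local Search: perturb, re-optimize, accept, repeat."""
    rng = random.Random(seed)
    cur = local_search(list(range(len(proc))), proc)
    best = cur
    for _ in range(iters):
        pert = list(cur)
        i, j = rng.sample(range(len(pert)), 2)   # perturbation: random swap
        pert[i], pert[j] = pert[j], pert[i]
        cand = local_search(pert, proc)
        if makespan(cand, proc) <= makespan(cur, proc):
            cur = cand                            # accept equal-or-better only
        if makespan(cand, proc) < makespan(best, proc):
            best = cand
    return best, makespan(best, proc)

proc = [[3, 2], [1, 4], [2, 1]]                   # 3 jobs, 2 machines (toy)
best, cmax = ils(proc, iters=50)
```

The ESP idea is precisely that the pieces above (perturbation strength, acceptance rule) need no tuned parameters.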
Abstract:
We argue the importance both of developing simple sufficient conditions for the stability of general multiclass queueing networks and of assessing such conditions under a range of assumptions on the weight of the traffic flowing between service stations. To achieve the former, we review a peak-rate stability condition and extend its range of application; for the latter, we introduce a generalisation of the Lu-Kumar network on which the stability condition may be tested for a range of traffic configurations. The peak-rate condition is close to exact when the between-station traffic is light, but degrades as this traffic increases.
Abstract:
The spread of mineral particles over southwestern, western, and central Europe resulting from a strong Saharan dust outbreak in October 2001 was observed at 10 stations of the European Aerosol Research Lidar Network (EARLINET). For the first time, an optically dense desert dust plume over Europe was characterized coherently with high vertical resolution on a continental scale. The main layer was located above the boundary layer (above 1-km height above sea level (asl)) up to 3–5-km height, and traces of dust particles reached heights of 7–8 km. The particle optical depth typically ranged from 0.1 to 0.5 above 1-km height asl at the wavelength of 532 nm, and maximum values close to 0.8 were found over northern Germany. The lidar observations are in qualitative agreement with values of optical depth derived from Total Ozone Mapping Spectrometer (TOMS) data. Ten-day backward trajectories clearly indicated the Sahara as the source region of the particles and revealed that the dust layer observed, e.g., over Belsk, Poland, crossed the EARLINET site Aberystwyth, UK, and southern Scandinavia 24–48 hours before. Lidar-derived particle depolarization ratios, backscatter- and extinction-related Ångström exponents, and extinction-to-backscatter ratios mainly ranged from 15 to 25%, -0.5 to 0.5, and 40–80 sr, respectively, within the lofted dust plumes. A few atmospheric model calculations are presented showing the dust concentration over Europe. The simulations were found to be consistent with the network observations.
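For reference, the Ångström exponents quoted above relate an optical coefficient measured at two wavelengths; a value near zero indicates nearly wavelength-independent scattering, as expected for large dust particles. A minimal sketch of the standard formula (the input numbers below are illustrative, not EARLINET data):

```python
import math

def angstrom_exponent(beta_1, lambda_1, beta_2, lambda_2):
    """Angstrom exponent from an optical coefficient at two wavelengths:
    a = -ln(beta_1 / beta_2) / ln(lambda_1 / lambda_2)."""
    return -math.log(beta_1 / beta_2) / math.log(lambda_1 / lambda_2)

# Wavelength-independent backscatter (large dust particles) gives a ~ 0:
a = angstrom_exponent(1.0e-6, 355.0, 1.0e-6, 532.0)
```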
Abstract:
A simple formulation to compute the envelope correlation of an antenna diversity system is derived. It is shown how to compute the envelope correlation from the S-parameter description of the antenna system. This approach has the advantage that it does not require the computation nor the measurement of the radiation pattern of the antenna system. It also offers the advantage of providing a clear understanding of the effects of mutual coupling and input match on the diversity performance of the antenna system.
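The closed-form expression usually associated with this S-parameter approach for a two-port antenna system can be sketched as follows (a minimal illustration, assuming lossless antennas; the sample S-parameter values are invented):

```python
def envelope_correlation(S11, S12, S21, S22):
    """Envelope correlation of a two-port antenna system from its
    S-parameters (lossless-antenna assumption):
        rho_e = |S11* S12 + S21* S22|^2
                / ((1 - |S11|^2 - |S21|^2) * (1 - |S22|^2 - |S12|^2))
    """
    num = abs(S11.conjugate() * S12 + S21.conjugate() * S22) ** 2
    den = ((1 - abs(S11) ** 2 - abs(S21) ** 2)
           * (1 - abs(S22) ** 2 - abs(S12) ** 2))
    return num / den

# Well-matched, fully isolated ports -> zero envelope correlation:
rho = envelope_correlation(0.1 + 0j, 0j, 0j, 0.1 + 0j)
```

Mutual coupling (nonzero S12, S21) raises the numerator and hence the correlation, which is exactly the effect the paper's formulation makes visible.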
Abstract:
A new parameter is introduced: the lightning potential index (LPI), which is a measure of the potential for charge generation and separation that leads to lightning flashes in convective thunderstorms. The LPI is calculated within the charge separation region of clouds, between the 0°C and −20°C isotherms, where the noninductive mechanism involving collisions of ice and graupel particles in the presence of supercooled water is most effective. As shown in several case studies using the Weather Research and Forecasting (WRF) model with explicit microphysics, the LPI is highly correlated with observed lightning. It is suggested that the LPI may be a useful parameter for predicting lightning as well as a tool for improving weather forecasting of convective storms and heavy rainfall.
The effects of electron-hole separation on the photoconductivity of individual metal oxide nanowires
Abstract:
The responses of individual ZnO nanowires to UV light demonstrate that the persistent photoconductivity (PPC) state is directly related to the electron-hole separation near the surface. Our results demonstrate that the electrical transport in these nanomaterials is influenced by the surface in two different ways. On the one hand, the effective mobility and the density of free carriers are determined by recombination mechanisms assisted by the oxidizing molecules in air. This phenomenon can also be blocked by surface passivation. On the other hand, the surface built-in potential separates the photogenerated electron-hole pairs and accumulates holes at the surface. After illumination, the charge separation hinders electron-hole recombination and gives rise to PPC. This effect is quickly reversed by increasing either the probing current (self-heating by Joule dissipation) or the oxygen content in air (favouring the surface recombination mechanisms). The model for PPC in individual nanowires presented here illustrates the intrinsic potential of metal oxide nanowires for developing optoelectronic devices or optochemical sensors with new and improved performance.
Abstract:
We work out a semiclassical theory of shot noise in ballistic n+-i-n+ semiconductor structures, aiming to study two fundamental physical correlations arising from the Pauli exclusion principle and the long-range Coulomb interaction. The theory provides a unifying scheme which, in addition to the current-voltage characteristics, describes the suppression of shot noise due to Pauli and Coulomb correlations in the whole range of system parameters and applied bias. The whole scenario is summarized by a phase diagram in the plane of two dimensionless variables related to the sample length and the contact chemical potential. Here, different regions of physical interest can be identified where only Coulomb or only Pauli correlations are active, or where both are present with different relevance. The predictions of the theory are fully corroborated by Monte Carlo simulations.