971 results for P(x)-laplacian Problem


Relevance:

100.00%

Publisher:

Abstract:

In this work, we carried out a theoretical study of the I x V and C x V characteristics of single-walled carbon nanotubes (SWCNTs): pristine, with substitutional nitrogen charged −1 (indicative of n-type doping) or +1 (indicative of p-type doping), and in the presence of donor (NO2)-acceptor (NH2) groups. This was done through computational simulation of the SWCNT ground state, as well as of its electronic structure and optical properties, using the semi-empirical parameterizations AM1 (Austin Model 1) and ZINDO/S-CIS (Zerner's Intermediate Neglect of Differential Overlap/Spectroscopic - Configuration Interaction Singles), which derive from Hartree-Fock theory and are based on quantum chemistry techniques. With this theoretical model we analyzed the optical and electronic properties of greatest interest for these materials, in order to understand how they can best be employed in the fabrication of electronic devices such as FETs (field-effect transistors) or in optoelectronic applications such as LEDs (light-emitting devices). We observed that SWCNTs with substitutional nitrogen exhibit polaron-type conformational defects. We computed the UV-visible absorption spectra of pristine armchair and zigzag SWCNTs, of SWCNTs with substitutional nitrogen charged −1 and +1, and of SWCNTs in the presence of donor (NO2)-acceptor (NH2) groups, when perturbed by electric fields of different intensities. We found that for zigzag SWCNTs the spectra are strongly perturbed as the electric field intensity increases. We obtained the p x E, I x V and C x V curves for these SWCNTs and concluded that armchair SWCNTs behave as resistors, since their curves are linear, whereas zigzag SWCNTs behave like the electronic devices that matter for technological progress. Our results are thus in good agreement with the experimental and theoretical results for pristine and nitrogen-doped SWCNTs reported in the literature.

Relevance:

100.00%

Publisher:

Abstract:

A loop is said to be automorphic if its inner mappings are automorphisms. For a prime p, denote by A_p the class of all 2-generated commutative automorphic loops Q possessing a central subloop Z ≅ Z_p such that Q/Z ≅ Z_p × Z_p. Upon describing the free 2-generated nilpotent class two commutative automorphic loop and the free 2-generated nilpotent class two commutative automorphic p-loop F_p in the variety of loops whose elements have order dividing p^2 and whose associators have order dividing p, we show that every loop of A_p is a quotient of F_p by a central subloop of order p^3. The automorphism group of F_p induces an action of GL_2(p) on the three-dimensional subspaces of Z(F_p) ≅ (Z_p)^4. The orbits of this action are in one-to-one correspondence with the isomorphism classes of loops from A_p. We describe the orbits, and hence we classify the loops of A_p up to isomorphism. It is known that every commutative automorphic p-loop is nilpotent when p is odd, and that there is a unique commutative automorphic loop of order 8 with trivial center. Knowing A_p up to isomorphism, we easily obtain a classification of commutative automorphic loops of order p^3. There are precisely seven commutative automorphic loops of order p^3 for every prime p, including the three abelian groups of order p^3.
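For background, a short statement of the standard loop-theoretic definitions assumed by the abstract (these are textbook conventions, not spelled out in the abstract itself): the inner mappings are the elements of the multiplication group fixing the identity, and the loop is automorphic when all of them are automorphisms.

```latex
% Standard background definitions (not part of the abstract itself):
\begin{align*}
  L_x(y) &= xy, \qquad R_x(y) = yx, \qquad
  \operatorname{Mlt}(Q) = \langle L_x,\, R_x : x \in Q \rangle,\\
  \operatorname{Inn}(Q) &= \{\, \varphi \in \operatorname{Mlt}(Q) : \varphi(1) = 1 \,\}, \qquad
  Q \text{ is automorphic} \iff \operatorname{Inn}(Q) \leq \operatorname{Aut}(Q).
\end{align*}
```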

Relevance:

100.00%

Publisher:

Abstract:

In accelerating dark energy models, the estimates of the Hubble constant, H_0, from the Sunyaev-Zel'dovich effect (SZE) and X-ray surface brightness of galaxy clusters may depend on the matter content (Ω_M), the curvature (Ω_K) and the equation of state parameter ω. In this article, by using a sample of 25 angular diameter distances of galaxy clusters described by the elliptical beta model obtained through the SZE/X-ray technique, we constrain H_0 in the framework of a general ΛCDM model (arbitrary curvature) and a flat XCDM model with a constant equation of state parameter ω = p_x/ρ_x. In order to avoid the use of priors on the cosmological parameters, we apply a joint analysis involving the baryon acoustic oscillations (BAO) and the CMB shift parameter signature. By taking into account the statistical and systematic errors of the SZE/X-ray technique we obtain for the nonflat ΛCDM model H_0 = 74 (−4.0, +5.0) km s⁻¹ Mpc⁻¹ (1σ), whereas for a flat universe with constant equation of state parameter we find H_0 = 72 (−4.0, +5.5) km s⁻¹ Mpc⁻¹ (1σ). By assuming that galaxy clusters are described by a spherical beta model these results change to H_0 = 6 (−7.0, +8.0) and H_0 = 59 (−6.0, +9.0) km s⁻¹ Mpc⁻¹ (1σ), respectively. The results from the elliptical description are in good agreement with independent studies from the Hubble Space Telescope key project and recent estimates based on the Wilkinson Microwave Anisotropy Probe, thereby suggesting that the combination of these three independent phenomena provides an interesting method to constrain the Hubble constant. As an extra bonus, the adoption of the elliptical description is revealed to be a quite realistic assumption. Finally, by comparing these results with a recent determination for a flat ΛCDM model using only the SZE/X-ray technique and BAO, we see that the geometry has a very weak influence on H_0 estimates for this combination of data.
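For orientation, the distance-redshift relation that underlies such fits in a flat XCDM cosmology, written in its standard textbook form (assumed here for illustration rather than quoted from the paper; ω is the constant dark energy equation of state):

```latex
% Angular diameter distance in a flat XCDM model (standard form, for illustration):
D_A(z) \;=\; \frac{c}{H_0\,(1+z)} \int_0^{z} \frac{dz'}{E(z')},
\qquad
E(z) \;=\; \sqrt{\Omega_M (1+z)^{3} + (1-\Omega_M)\,(1+z)^{3(1+\omega)}}\,.
```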

Relevance:

100.00%

Publisher:

Abstract:

This Doctoral Thesis deals with the application of meshless methods to eigenvalue problems, particularly free vibrations and buckling. The analysis is focused on aspects such as the numerical solving of the problem, the computational cost and the feasibility of the use of non-consistent mass or geometric stiffness matrices. Furthermore, the analysis of the error is also considered, with the aim of identifying its main sources and obtaining the key factors that enable a faster convergence of a given problem. Although currently a wide variety of apparently independent meshless methods can be found in the literature, the relationships among them have been analyzed. The outcome of this assessment is that all those methods can be grouped in only a limited number of categories, and that the Element-Free Galerkin Method (EFGM) is representative of the most important one. Therefore, the EFGM has been selected as a reference for the numerical analyses. Many of the error sources of a meshless method are contributed by its interpolation/approximation algorithm. In the EFGM, such an algorithm is known as Moving Least Squares (MLS), a particular case of the Generalized Moving Least Squares (GMLS). The accuracy of the MLS is based on the following factors: order of the polynomial basis p(x), features of the weight function w(x), and shape and size of the support domain of this weight function. The individual contribution of each of these factors, along with the interactions among them, has been studied in both regular and irregular arrangements of nodes, by means of a reduction of each contribution to a single quantifiable parameter. This assessment is applied to a range of one- and two-dimensional structural benchmark cases, and includes not only the error in terms of eigenvalues (natural frequencies or buckling loads, as applicable), but also in terms of eigenvectors.
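As an illustrative aside, a minimal 1D Moving Least Squares sketch with a linear basis p(x) = [1, x] and a cubic-spline weight w(x); the node layout, support size and test function below are assumptions for illustration, not the thesis's benchmark cases.

```python
import numpy as np

def cubic_spline_weight(r):
    """Cubic spline weight as a function of normalized distance r = |x - xI| / d."""
    w = np.zeros_like(r)
    m1 = r <= 0.5
    m2 = (r > 0.5) & (r <= 1.0)
    w[m1] = 2.0/3.0 - 4.0*r[m1]**2 + 4.0*r[m1]**3
    w[m2] = 4.0/3.0 - 4.0*r[m2] + 4.0*r[m2]**2 - (4.0/3.0)*r[m2]**3
    return w

def mls_shape_functions(x, nodes, support):
    """MLS shape functions phi_I(x) for the linear basis p(x) = [1, x] in 1D."""
    r = np.abs(x - nodes) / support
    w = cubic_spline_weight(r)                          # weight of each node at x
    P = np.column_stack([np.ones_like(nodes), nodes])   # basis evaluated at the nodes
    A = (P * w[:, None]).T @ P                          # moment matrix A(x) = sum_I w_I p(x_I) p(x_I)^T
    B = (P * w[:, None]).T                              # B(x) = [w_1 p(x_1), ..., w_n p(x_n)]
    p_x = np.array([1.0, x])
    return p_x @ np.linalg.solve(A, B)                  # phi(x) = p(x)^T A^{-1}(x) B(x)

# Usage: approximate u(x) = sin(x) from scattered nodal values (illustrative setup).
nodes = np.linspace(0.0, np.pi, 11)
u_nodes = np.sin(nodes)
phi = mls_shape_functions(1.3, nodes, support=0.7)
print(phi.sum(), phi @ u_nodes)   # partition of unity (~1) and u^h(1.3) close to sin(1.3)
```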

Relevance:

100.00%

Publisher:

Abstract:

The following problem, suggested by Laguerre's Theorem (1884), remains open: characterize all real sequences {μ_k}, k = 0, 1, 2, …, which have the zero-diminishing property; that is, if p(x) = ∑_{k=0}^{n} a_k x^k is any real polynomial, then ∑_{k=0}^{n} μ_k a_k x^k has no more real zeros than p(x). In this paper this problem is solved under the additional assumption of a weak growth condition on the sequence {μ_k}, namely lim_{n→∞} |μ_n|^(1/n) < ∞. More precisely, it is established that the real sequence {μ_k}, k ≥ 0, is a weakly increasing zero-diminishing sequence if and only if there exist σ ∈ {+1, −1} and an entire function Φ(z) = b e^(az) ∏_{n≥1} (1 + z/α_n), with a, b ∈ ℝ, b ≠ 0, α_n > 0 for all n ≥ 1 and ∑_{n≥1} 1/α_n < ∞, such that μ_k = σ^k / Φ(k) for all k ≥ 0.
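A quick illustrative check of the characterization (an example assumed here, not taken from the paper): choosing the empty product, a = −1, b = 1 and σ = 1 gives Φ(z) = e^(−z), hence μ_k = e^k, which is zero-diminishing because the transformed polynomial is just a rescaling of the original.

```latex
% Illustrative example: \Phi(z) = e^{-z}, \sigma = 1, hence \mu_k = e^{k}.
\sum_{k=0}^{n} \mu_k a_k x^{k}
  \;=\; \sum_{k=0}^{n} a_k (e\,x)^{k}
  \;=\; p(e\,x),
% which has exactly as many real zeros as p(x).
```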

Relevance:

100.00%

Publisher:

Abstract:

A Partial Wave Analysis (PWA) of γp → Δ⁺⁺X → pπ⁺π⁻(η) data taken with the CLAS detector at Jefferson Lab is presented in this work. This reaction is of interest because the Δ⁺⁺ restricts the isospin of the possible X states, leaving the PWA with a smaller combination of partial waves and making it ideal for searching for exotic mesons. It was proposed by Isgur and Paton that photoproduction is a plausible source of the J^PC = 1⁻⁺ state through flux-tube excitation. The π1(1400) is such a state; it has been produced in hadron production but has yet to be seen in photoproduction. A mass-independent amplitude analysis of this channel was performed, followed by a mass-dependent fit to extract the resonance parameters. The procedure used an event-based maximum likelihood method to maintain all correlations in the kinematics. The intensity and phase motion are mapped out for the contributing signals without requiring assumptions about the underlying processes. The strength of the PWA lies in the analysis of the phase motion, which is well defined for resonance behavior. In the data presented, the ηπ⁻ invariant mass spectrum shows contributions from the a0(980) and a2(1320) partial waves. No π1 was observed under a clear a2 signal after the angular distributions of the decay products were analyzed using an amplitude analysis. In addition, this dissertation discusses trends in the data, along with the implemented techniques.
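An illustrative sketch (not the dissertation's code) of an event-based maximum-likelihood fit of this general kind, with two toy interfering waves; the wave set, the acceptance (taken as uniform), the normalization convention and all numbers are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Toy event-based (unbinned) maximum-likelihood partial wave fit:
# intensity I = |sum_a V_a A_a|^2, normalization integral estimated from Monte Carlo.
rng = np.random.default_rng(0)

def amplitudes(costheta):
    """Toy decay amplitudes: an S-wave and a D-wave (real spherical harmonics Y00, Y20)."""
    Y00 = 0.5 / np.sqrt(np.pi) * np.ones_like(costheta)
    Y20 = 0.25 * np.sqrt(5.0 / np.pi) * (3.0 * costheta**2 - 1.0)
    return np.stack([Y00, Y20])                        # shape (n_waves, n_events)

def neg_log_likelihood(params, data_amps, mc_amps, mc_volume):
    V = params[0::2] + 1j * params[1::2]               # complex production strengths
    intensity = np.abs(np.einsum('a,ae->e', V, data_amps))**2
    mc_intensity = np.abs(np.einsum('a,ae->e', V, mc_amps))**2
    expected_yield = mc_volume * mc_intensity.mean()   # MC estimate of the normalization integral
    return expected_yield - np.sum(np.log(intensity + 1e-300))

# Placeholder "data" and phase-space Monte Carlo (both flat in cos(theta)).
data_ct = rng.uniform(-1.0, 1.0, 2000)
mc_ct = rng.uniform(-1.0, 1.0, 20000)
fit = minimize(neg_log_likelihood, x0=[40.0, 0.0, 10.0, 0.0],
               args=(amplitudes(data_ct), amplitudes(mc_ct), 4.0 * np.pi),
               method='BFGS')
print(fit.x)    # fitted Re/Im parts of the S- and D-wave strengths
```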

Relevance:

100.00%

Publisher:

Abstract:

X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high amount of radiation dose to the patient compared to other x-ray imaging modalities; as a result of this, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All things being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.

A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.

Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.

The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness by which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).

First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs. Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
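As an illustrative sketch of how a detectability index of this general kind is typically computed in the Fourier domain (a generic non-prewhitening formulation with synthetic, assumed TTF and NPS inputs; not the exact implementation used in the dissertation):

```python
import numpy as np

# Non-prewhitening (NPW) model-observer detectability index from a task function W,
# a task transfer function TTF, and a noise power spectrum NPS. All inputs are
# synthetic placeholders chosen only to make the example run.

def npw_detectability(W, TTF, NPS, df):
    """d' for an NPW observer: d'^2 = [sum |W|^2 TTF^2]^2 / sum |W|^2 TTF^2 NPS."""
    num = (np.sum(np.abs(W)**2 * TTF**2) * df**2) ** 2
    den = np.sum(np.abs(W)**2 * TTF**2 * NPS) * df**2
    return np.sqrt(num / den)

# Synthetic example: 10 mm diameter, 20 HU disk task; Gaussian-like TTF; ramp-like NPS.
n, pixel = 256, 0.5                                  # grid size, pixel size in mm
f = np.fft.fftfreq(n, d=pixel)                       # spatial frequencies (cycles/mm)
fx, fy = np.meshgrid(f, f)
fr = np.hypot(fx, fy)
x = (np.arange(n) - n/2) * pixel
xx, yy = np.meshgrid(x, x)
task = 20.0 * (np.hypot(xx, yy) <= 5.0)              # contrast * disk (HU)
W = np.fft.fft2(task) * pixel**2                     # task function (HU*mm^2)
TTF = np.exp(-(fr / 0.6)**2)                         # assumed resolution model
NPS = 120.0 * fr * np.exp(-(fr / 0.5)**2) + 1.0      # assumed NPS (HU^2 mm^2)

df = f[1] - f[0]
print("d' =", npw_detectability(W, TTF, NPS, df))
```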

Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (≤6 mm) low-contrast (≤20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.

Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
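A minimal sketch of a channelized Hotelling observer of the general type referred to here, using Laguerre-Gauss channels and synthetic signal-present/absent images (the channel choice, image size, signal and noise model are all assumptions for illustration):

```python
import numpy as np
from scipy.special import eval_laguerre

def laguerre_gauss_channels(n_pix, n_channels, a=15.0):
    """2D rotationally symmetric Laguerre-Gauss channel templates (illustrative choice)."""
    x = np.arange(n_pix) - (n_pix - 1) / 2.0
    r2 = np.add.outer(x**2, x**2)
    g = 2.0 * np.pi * r2 / a**2
    U = [np.sqrt(2.0) / a * np.exp(-g / 2.0) * eval_laguerre(j, g) for j in range(n_channels)]
    return np.stack([u.ravel() for u in U], axis=1)       # (n_pix^2, n_channels)

def cho_dprime(imgs_present, imgs_absent, U):
    """Channelized Hotelling observer detectability from two image ensembles."""
    vp = imgs_present.reshape(len(imgs_present), -1) @ U  # channel outputs, signal present
    va = imgs_absent.reshape(len(imgs_absent), -1) @ U    # channel outputs, signal absent
    dv = vp.mean(0) - va.mean(0)
    S = 0.5 * (np.cov(vp, rowvar=False) + np.cov(va, rowvar=False))
    return float(np.sqrt(dv @ np.linalg.solve(S, dv)))

# Synthetic demo: faint Gaussian blob in white noise.
rng = np.random.default_rng(1)
n, n_img = 64, 200
x = np.arange(n) - (n - 1) / 2.0
blob = 0.8 * np.exp(-np.add.outer(x**2, x**2) / (2 * 4.0**2))
absent = rng.normal(0.0, 1.0, (n_img, n, n))
present = rng.normal(0.0, 1.0, (n_img, n, n)) + blob
U = laguerre_gauss_channels(n, n_channels=6)
print("CHO d' =", cho_dprime(present, absent, U))
```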

The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
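The image-subtraction idea can be sketched in a few lines (a generic version with synthetic repeat images; the actual phantom analysis is more involved):

```python
import numpy as np

# Quantum-noise estimate from two repeated scans of the same (static) object:
# the structured background cancels in the difference, and dividing by sqrt(2)
# restores the single-image noise magnitude. Images here are synthetic.
rng = np.random.default_rng(2)
background = rng.normal(40.0, 15.0, (256, 256)).cumsum(axis=0) / 50.0  # stand-in texture
scan1 = background + rng.normal(0.0, 10.0, background.shape)
scan2 = background + rng.normal(0.0, 10.0, background.shape)

diff = (scan1 - scan2) / np.sqrt(2.0)
roi = diff[96:160, 96:160]
print("estimated noise (HU):", roi.std(ddof=1))   # ~10, independent of the texture
```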

To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in the uniform phantoms are not representative of those in patients for iterative reconstruction algorithms and texture should be considered when assessing image quality of iterative algorithms.
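A simplified sketch of ensemble NPS estimation from repeated scans, using ordinary square ROIs (the dissertation's irregular-ROI method is not reproduced here; grid size, pixel spacing and noise level are illustrative):

```python
import numpy as np

def nps_2d(rois, pixel_mm):
    """2D noise power spectrum from an ensemble of ROIs (ensemble mean removed)."""
    rois = np.asarray(rois, dtype=float)
    noise = rois - rois.mean(axis=0)            # remove the mean (structured) component
    ny, nx = noise.shape[1:]
    dft = np.fft.fft2(noise)                    # FFT of each noise realization
    nps = (np.abs(dft)**2).mean(axis=0) * pixel_mm**2 / (nx * ny)
    return np.fft.fftshift(nps)                 # HU^2 mm^2, zero frequency at center

# Synthetic demo: 50 repeats of a 64x64 white-noise ROI, sigma = 10 HU, 0.5 mm pixels.
rng = np.random.default_rng(3)
rois = rng.normal(0.0, 10.0, (50, 64, 64))
nps = nps_2d(rois, pixel_mm=0.5)
print("integral of NPS ~ variance:", nps.sum() * (1 / (64 * 0.5))**2)   # ~100 HU^2
```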

To move beyond just assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom compared to textured phantoms.
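A highly simplified stand-in for the texture-synthesis step mentioned above (randomly placed, clustered Gaussian "lumps"; this is a generic lumpy-background sketch, not the exact Clustered Lumpy Background parameterization or the genetic-algorithm optimization used in the study):

```python
import numpy as np

def clustered_lumpy_background(shape, n_clusters=30, lumps_per_cluster=8,
                               cluster_sigma=12.0, lump_sigma=3.0,
                               amplitude=8.0, rng=None):
    """Generate a clustered lumpy texture: Gaussian blobs scattered around cluster centers."""
    if rng is None:
        rng = np.random.default_rng()
    y, x = np.indices(shape)
    img = np.zeros(shape, dtype=float)
    cy = rng.uniform(0, shape[0], n_clusters)
    cx = rng.uniform(0, shape[1], n_clusters)
    for yc, xc in zip(cy, cx):
        offsets = rng.normal(0.0, cluster_sigma, size=(lumps_per_cluster, 2))
        for oy, ox in offsets:
            img += amplitude * np.exp(-((y - yc - oy)**2 + (x - xc - ox)**2)
                                      / (2.0 * lump_sigma**2))
    return img

texture = clustered_lumpy_background((128, 128), rng=np.random.default_rng(5))
print("texture mean/std:", texture.mean().round(2), texture.std().round(2))
```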

The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
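A toy version of the modeling-and-insertion idea described above (a generic radially symmetric lesion with a smooth edge, inserted additively into an image; the functional form and every parameter are assumptions, not the dissertation's lesion models):

```python
import numpy as np

def lesion_model(shape, center, radius_mm, contrast_hu, edge_mm, pixel_mm):
    """Analytical lesion: flat core with a sigmoidal edge profile, voxelized on a grid."""
    y, x = np.indices(shape)
    r = np.hypot((x - center[0]) * pixel_mm, (y - center[1]) * pixel_mm)
    return contrast_hu / (1.0 + np.exp((r - radius_mm) / edge_mm))

def insert_lesion(image, lesion):
    """Create a 'hybrid' image by adding the voxelized lesion to a real image."""
    return image + lesion

# Demo on a synthetic 'patient' slice (uniform 50 HU liver-like region plus noise).
rng = np.random.default_rng(4)
slice_hu = 50.0 + rng.normal(0.0, 12.0, (128, 128))
lesion = lesion_model(slice_hu.shape, center=(64, 64), radius_mm=4.0,
                      contrast_hu=-15.0, edge_mm=0.8, pixel_mm=0.7)
hybrid = insert_lesion(slice_hu, lesion)
print("mean HU change in lesion core:", (hybrid - slice_hu)[60:68, 60:68].mean())
```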

Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.

The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = −15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Also, lesion-less images were reconstructed. Noise, contrast, CNR, and detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.

In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.

Relevance:

100.00%

Publisher:

Abstract:

We generalize the Liapunov convexity theorem's version for vectorial control systems driven by linear ODEs of first order, p = 1, in any dimension d ∈ ℕ, by including a pointwise state constraint. More precisely, given x̄(·) ∈ W^(p,1)([a,b], ℝ^d) solving the convexified p-th order differential inclusion L_p x̄(t) ∈ co{u_0(t), u_1(t), …, u_m(t)} a.e., consider the general problem consisting in finding bang-bang solutions (i.e. L_p x̂(t) ∈ {u_0(t), u_1(t), …, u_m(t)} a.e.) under the same boundary data, x̂^(k)(a) = x̄^(k)(a) and x̂^(k)(b) = x̄^(k)(b) (k = 0, 1, …, p − 1), but restricted, moreover, by a pointwise state constraint of the type ⟨x̂(t), ω⟩ ≤ ⟨x̄(t), ω⟩ for all t ∈ [a,b] (e.g. ω = (1, 0, …, 0), yielding x̂_1(t) ≤ x̄_1(t)). Previous results in the scalar case d = 1 were the pioneering Amar & Cellina paper (dealing with L_p x(·) = x′(·)), followed by the Cerf & Mariconda results, who solved the general case of linear differential operators L_p of order p ≥ 2 with C^0([a,b]) coefficients. This paper is dedicated to: focusing on the missing case p = 1, i.e. using L_p x(·) = x′(·) + A(·) x(·); generalizing the dimension of x(·), from the scalar case d = 1 to the vectorial case d ∈ ℕ; weakening the coefficients, from continuous to integrable, so that A(·) now becomes a d × d integrable matrix; and allowing the directional vector ω to become a moving AC function ω(·). Previous vectorial results had constant ω, no matrix (i.e. A(·) ≡ 0) and considered constant control vertices (Amar & Mariconda) and, more recently, integrable control vertices (ourselves).

Relevance:

100.00%

Publisher:

Abstract:

The thesis is an examination of how Japanese popular culture products are remade (rimeiku). Adaptation of manga, anime and television drama, from one format to another, frequently occurs within Japan. The rights to these stories and texts are traded in South Korea and Taiwan. The ‘spin-off’ products form part of the Japanese content industry. When products are distributed and remade across geographical boundaries, they have a multi-dimensional aspect and potentially contribute to an evolving cultural re-engagement between Japan and East Asia. The case studies are the television dramas Akai Giwaku and Winter Sonata and two manga, Hana yori Dango and Janguru Taitei. Except for the television drama Winter Sonata, these texts originated in Japan. Each study shows how remaking occurs across geographical borders. The study argues that Japan has been slow to recognise the value of its popular culture through regional and international media trade. Japan is now taking steps to remedy this strategic shortfall to enable the long-term viability of the Japanese content industry. The study includes an examination of how remaking raises legal issues in the appropriation of media content. Unauthorised copying and piracy contribute to loss of financial value. To place the three Japanese cultural products into a historical context, the thesis includes an overview of Japanese copying culture from its early origins through to the present day. The thesis also discusses the Meiji restoration and the post-World War II restructuring that resulted in Japan becoming a regional media powerhouse. The localisation of Japanese media content in South Korea and Taiwan also brings with it significant cultural influences, which may be regarded as contributing to a better understanding of East Asian society in line with the idea of regional ‘harmony’. The study argues that the commercial success of Japanese products beyond Japan is governed by perceptions of the quality of the story and by the cultural frames of the target audience. The thesis draws on audience research to illustrate the loss or reinforcement of national identity as a consequence of cross-cultural trade. The thesis also examines the contribution to Japanese ‘soft power’ (Nye, 2004, p. x). The study concludes with recommendations for the sustainability of the Japanese media industry.

Relevance:

100.00%

Publisher:

Abstract:

This experimental study examines the effect on performance and emission outputs of a compression ignition engine operating on biodiesels of varying carbon chain length and degree of unsaturation. A well-instrumented, heavy-duty, multi-cylinder, common-rail, turbo-charged diesel engine was used to ensure that the results contribute in a realistic way to the ongoing debate about the impact of biofuels. Comparative measurements are reported for engine performance as well as the emissions of NOx, particle number and size distribution, and the concentration of reactive oxygen species (which provides a measure of the toxicity of emitted particles). It is shown that the biodiesels used in this study produce lower mean effective pressure, roughly in proportion to their lower calorific values; however, the molecular structure has been shown to have little impact on the performance of the engine. The peak in-cylinder pressure is lower for the biodiesels that produce a smaller number of emitted particles, compared to fossil diesel, but the concentration of reactive oxygen species is significantly higher because of the oxygen in the fuels. The differences in physicochemical properties amongst the biofuels and the fossil diesel significantly affect the engine combustion and emission characteristics. Saturated, short-chain-length fatty acid methyl esters are found to enhance combustion efficiency and reduce NOx and particle number concentration, but result in high levels of fuel consumption.

Relevance:

100.00%

Publisher:

Abstract:

In continuum one-dimensional space, a coupled directed continuous time random walk model is proposed, in which the random walker jumps in one direction and the waiting time between jumps affects the subsequent jump. In the proposed model, the Laplace-Laplace transform of the probability density function P(x,t) of finding the walker at position x at time t is completely determined by the Laplace transform of the probability density function φ(t) of the waiting time. In terms of the probability density function of the waiting time in the Laplace domain, the limit distribution of the random process and the corresponding evolving equations are derived.
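For orientation, a hedged sketch of the sort of Laplace-Laplace relation involved, written for the frequently studied coupling in which each jump length equals the preceding waiting time; this particular coupling and the Montroll-Weiss form below are assumptions for illustration, not a quotation of the paper's equations.

```latex
% Illustrative Montroll-Weiss-type relation for a directed, coupled CTRW with
% joint jump density \psi(x,t) = \varphi(t)\,\delta(x - t), x \ge 0 (assumed coupling):
\hat{\hat{P}}(\kappa, s)
  \;=\; \frac{1 - \hat{\varphi}(s)}{s}\;
        \frac{1}{1 - \hat{\varphi}(s + \kappa)},
\qquad
\hat{\varphi}(s) = \int_0^{\infty} e^{-s t}\,\varphi(t)\,dt .
% Everything is then fixed by the Laplace transform of the waiting-time density alone.
```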

Relevance:

100.00%

Publisher:

Abstract:

Biodiesels produced from different feedstocks usually have wide variations in their fatty acid methyl ester (FAME) composition, so that their physical properties and chemical composition are also different. The aim of this study is to investigate the effect of the physical properties and chemical composition of biodiesels on engine exhaust particle emissions. Alongside neat diesel, four biodiesels with variations in carbon chain length and degree of unsaturation have been used at three blending ratios (B100, B50, B20) in a common-rail engine. It is found that particle emissions increased with increasing carbon chain length. However, for similar carbon chain lengths, particle emissions from biodiesels having relatively high average unsaturation are found to be slightly lower than those from biodiesels with low average unsaturation. Particle size is also found to be dependent on fuel type. The fuel or fuel mix responsible for higher particle mass (PM) and particle number (PN) emissions is also found to be responsible for a larger particle median size. Particle emissions reduced consistently with fuel oxygen content regardless of the proportion of biodiesel in the blends, whereas they increased with fuel viscosity and surface tension only for higher diesel–biodiesel blend percentages (B100, B50). However, since fuel oxygen content increases with decreasing carbon chain length, it is not clear which of these factors drives the lower particle emissions. Overall, it is evident from the results presented here that the chemical composition of biodiesel is more important than its physical properties in controlling exhaust particle emissions.

Relevance:

100.00%

Publisher:

Abstract:

Recent axiomatic derivations of the maximum entropy principle from consistency conditions are critically examined. We show that proper application of consistency conditions alone allows a wider class of functionals, essentially of the form ∫ dx p(x)[p(x)/g(x)]^s for some real number s, to be used for inductive inference, and the commonly used form −∫ dx p(x) ln[p(x)/g(x)] is only a particular case. The role of the prior density g(x) is clarified. It is possible to regard it as a geometric factor, describing the coordinate system used; it does not represent information of the same kind as obtained by measurements on the system in the form of expectation values.
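As a small connecting step (an illustration, not part of the abstract itself), expanding the power-law family for small s shows how the familiar logarithmic form arises as a limiting case, since ∫ dx p(x) = 1:

```latex
% Small-s expansion (illustrative): the logarithmic functional is the leading
% s-dependent term of the power-law family.
\int dx\, p(x)\left[\frac{p(x)}{g(x)}\right]^{s}
  = \int dx\, p(x)\, e^{\,s \ln[p(x)/g(x)]}
  = 1 + s \int dx\, p(x) \ln\!\frac{p(x)}{g(x)} + O(s^{2}).
```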