964 results for Non-coherent spectral efficiency


Relevance:

30.00%

Publisher:

Abstract:

This dissertation analyzes hospital efficiency using various econometric techniques. The first essay provides additional and recent evidence of contract management behavior in the U.S. hospital industry. Unlike previous studies, which focus on either an input-demand equation or the cost function of the firm, this paper estimates the two jointly using a system of nonlinear equations. Moreover, it addresses the longitudinal problem of institutions adopting contract management in different years by creating a matched control group of non-adopters with the same longitudinal distribution as the group under study. The estimation procedure finds that labor, and not capital, is the preferred input in U.S. hospitals regardless of managerial contract status, with institutions that adopt contract management benefiting from lower labor inefficiencies than the simulated non-contract adopters. These results suggest that while there is a propensity for expense preference behavior towards the labor input, contract-managed firms are able to introduce efficiencies over conventional, owner-controlled firms. Using data for the years 1998 through 2007, the second essay investigates the production technology and cost efficiency of Florida hospitals. A stochastic frontier multiproduct cost function is estimated in order to test for economies of scale, economies of scope, and relative cost efficiencies. The results suggest that small-sized hospitals experience economies of scale, while large- and medium-sized institutions do not. The empirical findings show that Florida hospitals enjoy significant scope economies regardless of size. Lastly, the evidence suggests a link between hospital size and relative cost efficiency. The results imply that state policy makers should focus on increasing hospital scale for smaller institutions while facilitating the expansion of multiproduct production for larger hospitals. The third and final essay employs a two-stage approach to analyzing the efficiency of hospitals in the state of Florida. In the first stage, the Banker, Charnes, and Cooper (BCC) model of Data Envelopment Analysis is employed to derive overall technical efficiency scores for each non-specialty hospital in the state. Additionally, input slacks are calculated and reported in order to identify the factors of production that each hospital may be overutilizing. In the second stage, a Tobit regression model is used to analyze the effects that a number of structural, managerial, and environmental factors may have on a hospital's efficiency. The results indicate that most non-specialty hospitals in the state operate away from the efficient production frontier, and that the structural makeup, managerial choices, and level of competition Florida hospitals face all affect their overall technical efficiency.
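
As an illustration of the first stage of the two-stage approach in the third essay, the sketch below solves the input-oriented BCC (variable returns to scale) DEA program for each decision-making unit with scipy. The hospital input/output arrays are simulated placeholders, not the study's data, and the second-stage Tobit regression is only indicated in a comment.

```python
# Minimal sketch of the input-oriented Banker-Charnes-Cooper (BCC) DEA model.
# Simulated inputs/outputs; not the dissertation's hospital data.
import numpy as np
from scipy.optimize import linprog

def bcc_efficiency(X, Y, k):
    """Technical efficiency of DMU k. X: (n, m) inputs, Y: (n, s) outputs."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0                                 # minimize theta
    A_ub = np.zeros((m + s, n + 1))
    b_ub = np.zeros(m + s)
    # Input constraints: sum_j lambda_j x_ij <= theta * x_ik
    A_ub[:m, 0] = -X[k]
    A_ub[:m, 1:] = X.T
    # Output constraints: sum_j lambda_j y_rj >= y_rk
    A_ub[m:, 1:] = -Y.T
    b_ub[m:] = -Y[k]
    # Convexity constraint (variable returns to scale): sum_j lambda_j = 1
    A_eq = np.zeros((1, n + 1))
    A_eq[0, 1:] = 1.0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

rng = np.random.default_rng(0)
X = rng.uniform(1, 10, size=(30, 2))           # e.g., beds, FTE staff
Y = rng.uniform(1, 10, size=(30, 2))           # e.g., admissions, outpatient visits
scores = [bcc_efficiency(X, Y, k) for k in range(len(X))]
print(np.round(scores, 3))  # stage two would regress these on covariates (Tobit)
```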

Relevance:

30.00%

Publisher:

Abstract:

The growing need for fast sampling of explosives in high-throughput areas has increased the demand for improved trace-detection technology for illicit compounds. Detecting the volatiles associated with the presence of illicit compounds offers a different route to sensitive trace detection without increasing the false positive alarm rate. This study evaluated the performance of non-contact sampling and detection systems using statistical analysis through the construction of receiver operating characteristic (ROC) curves in real-world scenarios for the detection of volatiles in the headspace of smokeless powder, used as the model system for generalizing explosives detection. A novel sorbent-coated disk, coined planar solid phase microextraction (PSPME), was previously used for rapid, non-contact sampling of the headspace of containers. The limits of detection for PSPME coupled to IMS detection were determined to be 0.5-24 ng for vapor sampling of volatile chemical compounds associated with illicit compounds, and PSPME demonstrated an extraction efficiency three times greater than other commercially available substrates, retaining >50% of the analyte after 30 minutes of sampling of an analyte spike, compared with a non-detect for the unmodified filters. Both static and dynamic PSPME sampling were coupled with two ion mobility spectrometer (IMS) detection systems, in which 10-500 mg quantities of smokeless powders were detected within 5-10 minutes of static sampling, or 1 minute of dynamic sampling, in 1-45 L closed systems, giving faster sampling and analysis than conventional solid phase microextraction-gas chromatography-mass spectrometry (SPME-GC-MS) analysis. Similar real-world scenarios were sampled in low- and high-clutter environments with zero false positives. Excellent PSPME-IMS detection of the volatile analytes was evident from the ROC curves, with areas under the curve (AUC) of 0.85-1.0 and 0.81-1.0 for portable and bench-top IMS systems, respectively. ROC curves were also constructed for SPME-GC-MS, yielding AUCs of 0.95-1.0, comparable with PSPME-IMS detection. The PSPME-IMS technique produces fewer false positives for non-contact vapor sampling, cutting cost and providing the effective sampling and detection needed in high-throughput scenarios, with performance similar to well-established techniques and the added advantage of fast detection in the field.
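
For readers unfamiliar with the ROC methodology used above, here is a minimal sketch of how such curves and their AUCs are computed; the detector scores and ground-truth labels are simulated, not the study's measurements.

```python
# Illustrative ROC construction for a detection experiment, in the spirit of
# the PSPME-IMS evaluation above. Simulated scores and labels.
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(1)
# 1 = smokeless powder present, 0 = blank/clutter background
labels = np.concatenate([np.ones(40), np.zeros(40)])
# Detector response (e.g., IMS peak intensity); positives score higher on average
scores = np.concatenate([rng.normal(2.0, 1.0, 40), rng.normal(0.0, 1.0, 40)])

fpr, tpr, thresholds = roc_curve(labels, scores)
print(f"AUC = {auc(fpr, tpr):.2f}")   # 0.5 = chance, 1.0 = perfect separation
```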

Relevance:

30.00%

Publisher:

Abstract:

Shadows and illumination play an important role in generating realistic scenes in computer graphics. Most augmented reality (AR) systems track markers placed in a real scene and retrieve their position and orientation to serve as a frame of reference for added computer-generated content, thereby producing an augmented scene. Realistic depiction of augmented content with coherent visual cues is a desired goal in many AR applications. However, rendering an augmented scene with realistic illumination is a complex task. Many existing approaches rely on a non-automated pre-processing phase to retrieve illumination parameters from the scene. Other techniques rely on specific markers that contain light probes to estimate environment lighting. This study aims at designing a method for creating AR applications with coherent illumination and shadows, using a textured cuboid marker, that does not require a training phase to provide lighting information. Such markers are easily found in common environments: most product packaging satisfies these characteristics. We therefore propose a way to estimate a directional light configuration using multiple texture tracking, so that AR scenes can be rendered realistically. We also propose a novel feature descriptor that is used to perform the multiple texture tracking. Our descriptor, named the discrete descriptor, is an extension of binary descriptors and outperforms current state-of-the-art methods in speed while maintaining their accuracy.
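
The abstract does not publish the discrete descriptor itself, so as a generic illustration of the underlying operation, the sketch below builds a BRIEF-style binary descriptor and matches two patches by Hamming distance; the sampling pattern and patches are made up for illustration.

```python
# Generic sketch of binary-descriptor matching by Hamming distance, the kind
# of operation the "discrete descriptor" above accelerates. Illustrative only.
import numpy as np

def binary_descriptor(patch, pairs):
    """Bit i is 1 if the patch is brighter at pairs[i][0] than at pairs[i][1]."""
    a = patch[pairs[:, 0, 0], pairs[:, 0, 1]]
    b = patch[pairs[:, 1, 0], pairs[:, 1, 1]]
    return (a > b).astype(np.uint8)

def hamming(d1, d2):
    return int(np.count_nonzero(d1 != d2))

rng = np.random.default_rng(2)
pairs = rng.integers(0, 32, size=(256, 2, 2))  # 256 pixel-pair tests, 32x32 patch
patch = rng.integers(0, 255, size=(32, 32))
noisy = np.clip(patch + rng.normal(0, 5, patch.shape), 0, 255)

d1, d2 = binary_descriptor(patch, pairs), binary_descriptor(noisy, pairs)
print(hamming(d1, d2))   # small distance -> same texture, so tracking continues
```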

Relevance:

30.00%

Publisher:

Abstract:

Objectives To evaluate the change in masticatory efficiency and quality of life of patients treated with mandibular Kennedy class I removable partial dentures (RPDs) and maxillary complete dentures at the Department of Dentistry of the Federal University of Rio Grande do Norte. Materials and methods A total of 33 Kennedy class I patients rehabilitated with maxillary complete dentures and mandibular RPDs were selected for this non-randomized prospective intervention study. The patients had a mean age of 59.1 years. Masticatory efficiency was evaluated by colorimetric assay using fuchsin capsules, with measurements at baseline and 2 and 6 months after prosthesis insertion. Quality of life was evaluated using the Oral Health Impact Profile (OHIP-14) at baseline and 6 months after denture insertion. The Kolmogorov-Smirnov normality test was applied. Masticatory efficiency was analyzed by repeated measures ANOVA, and oral health-related quality of life was compared using the paired t test. Results There was no statistically significant difference in masticatory efficiency after denture insertion (p = 0.101). A significant difference was found (p = 0.010) for oral health-related quality of life, with significant improvements in psychological discomfort (p < 0.01) and psychological disability (p < 0.01). The mean difference (95% confidence interval) was 6.8 (3.8 to 9.7) points, reflecting a low impact of oral health on quality of life given the 0-56 range of the OHIP-14, with a Cohen's d of 1.13. Conclusion According to the results of the present study, rehabilitation with Kennedy class I RPDs and complete dentures did not influence masticatory efficiency but improved oral health-related quality of life. Clinical relevance The association between patients' quality of life and masticatory efficiency is important for treatment predictability.
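
A minimal sketch of the quality-of-life statistics reported above (paired t test and Cohen's d on OHIP-14 totals), with simulated scores standing in for the trial data:

```python
# Paired t test and Cohen's d for baseline vs. 6-month OHIP-14 scores.
# The 33 scores are simulated, not the trial's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
baseline = rng.normal(20, 7, 33)                 # OHIP-14 totals, 0-56 scale
followup = baseline - rng.normal(6.8, 3.0, 33)   # lower = better quality of life

t, p = stats.ttest_rel(baseline, followup)
diff = baseline - followup
cohens_d = diff.mean() / diff.std(ddof=1)        # standardized paired difference
print(f"t = {t:.2f}, p = {p:.4f}, d = {cohens_d:.2f}")
```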

Relevance:

30.00%

Publisher:

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even given the huge increases in n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
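
For reference, the latent structure (PARAFAC) factorization referred to above expresses the joint probability mass function of $p$ categorical variables as a finite mixture of product kernels:

\[
  p(y_1,\dots,y_p) \;=\; \sum_{h=1}^{k} \nu_h \prod_{j=1}^{p} \lambda^{(j)}_{h\,y_j},
  \qquad \nu_h \ge 0,\quad \sum_{h=1}^{k} \nu_h = 1,
\]

where $\lambda^{(j)}_{h\,c}$ is the probability that variable $j$ takes level $c$ in latent class $h$. The smallest such $k$ is the nonnegative rank of the probability tensor, which is the quantity Chapter 2 relates to the support of a log-linear model.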

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and give a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and in other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data are frequently encountered even for modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed-form credible regions, complicating posterior inference. In Chapter 4, we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rates and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically, in simulations and a real data application, that the approximation is highly accurate even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
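
As background on what an "optimal Gaussian approximation" can mean here: if optimality is measured by $\mathrm{KL}(\pi \,\|\, q)$ over Gaussian candidates $q$, the minimizer is the moment-matched Gaussian,

\[
  \hat q \;=\; \operatorname*{arg\,min}_{q = \mathrm{N}(\mu,\Sigma)} \mathrm{KL}\big(\pi \,\|\, q\big)
  \;=\; \mathrm{N}\big(\mathbb{E}_\pi[\theta],\, \mathrm{Cov}_\pi(\theta)\big).
\]

The abstract does not specify the divergence direction used in Chapter 4, so this is offered as context rather than a statement of the chapter's result.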

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
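
The de Haan spectral representation mentioned above writes a simple max-stable process as

\[
  Z(x) \;=\; \max_{i \ge 1} \; \zeta_i\, W_i(x),
\]

where $\{\zeta_i\}$ are the points of a Poisson process on $(0,\infty)$ with intensity $\zeta^{-2}\,\mathrm{d}\zeta$ and the $W_i$ are i.i.d. nonnegative processes with $\mathbb{E}[W(x)] = 1$; the construction in Chapter 5 endows these spectral functions with velocities and lifetimes.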

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, yet comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
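
To make the "random subsets of data" approximation concrete, the sketch below runs a Metropolis-Hastings chain whose log-likelihood is estimated from a scaled minibatch: an approximating (biased) kernel of exactly the kind whose tolerable error such a framework quantifies. The normal-mean model, subsample size, and step size are illustrative choices, not the chapter's.

```python
# Approximate Metropolis-Hastings with a subsampled log-likelihood estimate.
# The subsampling makes the kernel biased; illustrative model and tuning.
import numpy as np

rng = np.random.default_rng(8)
data = rng.normal(1.5, 1.0, 100_000)             # normal mean, known unit variance
m = 1_000                                        # minibatch size

def approx_loglik(theta):
    batch = rng.choice(data, m, replace=False)
    return len(data) / m * np.sum(-0.5 * (batch - theta) ** 2)

theta, ll = 0.0, approx_loglik(0.0)
chain = []
for _ in range(5000):
    prop = theta + rng.normal(0, 0.02)
    ll_prop = approx_loglik(prop)                # fresh subsample each step
    if np.log(rng.uniform()) < ll_prop - ll:     # flat prior, symmetric proposal
        theta, ll = prop, ll_prop
    chain.append(theta)

print(f"posterior mean estimate: {np.mean(chain[1000:]):.3f}")
```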

Data augmentation Gibbs samplers are arguably the most popular class of algorithms for approximately sampling from the posterior distribution of the parameters of generalized linear models. The truncated normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size, up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly, whereas Hamiltonian Monte Carlo and a type of independence-chain Metropolis algorithm show good mixing on the same dataset.
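
A minimal sketch of the truncated-normal (Albert-Chib) data augmentation sampler for probit regression discussed above, run on simulated rare-event data; the flat prior, sample size, and the lag-one autocorrelation used as a crude mixing diagnostic are illustrative choices.

```python
# Truncated-normal data augmentation Gibbs sampler for probit regression,
# in the rare-event regime: large n, few successes. Simulated data.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(4)
n, p = 5000, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-2.5, 0.5])              # intercept makes successes rare
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)
print("successes:", y.sum(), "of", n)

XtX_inv = np.linalg.inv(X.T @ X)               # flat prior on beta for simplicity
beta = np.zeros(p)
draws = []
for _ in range(2000):
    # 1) z_i | beta, y_i ~ N(x_i'beta, 1), truncated to (0,inf) if y_i = 1,
    #    else to (-inf,0); truncnorm bounds are in standardized units
    mu = X @ beta
    lo = np.where(y == 1, -mu, -np.inf)
    hi = np.where(y == 1, np.inf, -mu)
    z = mu + truncnorm.rvs(lo, hi, random_state=rng)
    # 2) beta | z ~ N((X'X)^{-1} X'z, (X'X)^{-1})
    m = XtX_inv @ (X.T @ z)
    beta = rng.multivariate_normal(m, XtX_inv)
    draws.append(beta.copy())

b0 = np.array(draws)[500:, 0]                  # intercept chain after burn-in
ac1 = np.corrcoef(b0[:-1], b0[1:])[0, 1]
print(f"lag-1 autocorrelation: {ac1:.3f}")     # near 1 signals slow mixing
```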

Relevance:

30.00%

Publisher:

Abstract:

In this study, we developed and improved the numerical mode matching (NMM) method, which has previously been shown to be a fast and robust semi-analytical solver for investigating the propagation of electromagnetic (EM) waves in an isotropic layered medium. The applicable models, such as cylindrical waveguides, optical fibers, and boreholes in geological formations, are generally modeled as axisymmetric structures, namely an orthogonal-plano-cylindrically layered (OPCL) medium consisting of materials stratified planarly and layered concentrically in the orthogonal directions.

In this report, several important improvements have been made to extend this efficient solver to the anisotropic OPCL medium. The formulas for anisotropic media with three different diagonal elements in the cylindrical coordinate system are derived to extend its application to more general materials. The perfectly matched layer (PML) is incorporated along the radial direction as an absorbing boundary condition (ABC) to make the NMM method more accurate and efficient for wave diffusion problems in unbounded media, and applicable to scattering problems with lossless media. We manipulate the weak form of Maxwell's equations and impose the correct boundary conditions at the cylindrical axis to resolve the singularity problem that previous work has ignored. The spectral element method (SEM) is introduced to compute eigenmodes of higher accuracy with fewer unknowns, achieving a faster mode matching procedure between different horizontal layers. We also prove a relationship between the fields for opposite mode indices under different types of excitations, which halves the computational time. The formulas for computing EM fields excited by an electric or magnetic dipole located at any position with an arbitrary orientation are derived, and the excitation is generalized to line and surface current sources, which extends the application of NMM to simulations of controlled-source electromagnetic techniques. Numerical simulations demonstrate the efficiency and accuracy of this method.

Finally, the improved NMM method is applied to efficiently compute the electromagnetic response of an induction tool to orthogonal transverse hydraulic fractures in open or cased boreholes in hydrocarbon exploration. The hydraulic fracture is modeled as a slim circular disk that is symmetric with respect to the borehole axis and filled with electrically conductive or magnetic proppant. The NMM solver is first validated by comparing the normalized secondary field with experimental measurements and a commercial software package. We then quantitatively analyze the sensitivity of the induction response to fracture parameters, such as length and the conductivity and permeability of the filled proppant, to evaluate the effectiveness of the induction logging tool for fracture detection and mapping. Casings with different thicknesses, conductivities, and permeabilities are modeled together with the fractures in boreholes to investigate their effects on fracture detection. The results reveal that the normalized secondary field is not weakened at low frequencies, even though the attenuation of the electromagnetic field through the casing is significant, so the induction tool remains applicable for fracture detection. A hybrid approach combining the NMM method and a BCGS-FFT-based integral equation solver has been proposed to efficiently simulate open or cased boreholes with tilted fractures, which constitute a non-axisymmetric model.

Relevance:

30.00%

Publisher:

Abstract:

The study of III-nitride materials (InN, GaN, and AlN) gained huge research momentum after breakthroughs in the production of light-emitting diodes (LEDs) and laser diodes (LDs) over the past two decades. In 2014, the Nobel Prize in Physics was awarded jointly to Isamu Akasaki, Hiroshi Amano, and Shuji Nakamura for inventing a new energy-efficient and environmentally friendly light source: the blue light-emitting diode (LED) from III-nitride semiconductors in the early 1990s. Nowadays, III-nitride materials not only play an increasingly important role in lighting technology, but have also become prospective candidates in other areas, for example high-frequency (RF) high-electron-mobility transistors (HEMTs) and photovoltaics. These devices require the growth of high-quality III-nitride films, which can be prepared using metal organic vapour phase epitaxy (MOVPE). The main aim of my thesis is to study and develop the growth of III-nitride films, including AlN, u-AlGaN, Si-doped AlGaN, and InAlN, serving as sample wafers for the fabrication of ultraviolet (UV) LEDs, in order to replace the conventional bulky, expensive and environmentally harmful mercury lamp as a new UV light source. For application to UV LEDs, reducing the threading dislocation density (TDD) in AlN epilayers on sapphire substrates is a key parameter for achieving high-efficiency AlGaN-based UV emitters. In Chapter 4, after careful and systematic optimisation, the screw- and edge-type dislocation densities in the AlN were reduced to around 2.2×10⁸ cm⁻² and 1.3×10⁹ cm⁻², respectively, using an optimized three-step process, as estimated by TEM. An atomically smooth surface with an RMS roughness of around 0.3 nm was achieved over a 5×5 µm² AFM scan. Furthermore, a one-dimensional step-motion model has been proposed to describe the surface morphology evolution, especially the step bunching feature found under non-optimal conditions. In Chapter 5, control of alloy composition and the maintenance of compositional uniformity across a growing epilayer surface were demonstrated for the development of u-AlGaN epilayers. Optimized conditions (i.e. a high growth temperature of 1245 °C) produced uniform and smooth films with a low RMS roughness of around 2 nm over a 20×20 µm² AFM scan. The dopant most commonly used to obtain n-type conductivity in AlxGa1-xN is Si. However, the incorporation of Si has been found to increase strain relaxation and promote unintentional incorporation of other impurities (O and C) during Si-doped AlGaN growth. In Chapter 6, reducing edge-type TDs is shown to be an effective approach to improving the electrical and optical properties of Si-doped AlGaN epilayers. In addition, maximum electron concentrations of 1.3×10¹⁹ cm⁻³ and 6.4×10¹⁸ cm⁻³ were achieved in Si-doped Al0.48Ga0.52N and Al0.6Ga0.4N epilayers, respectively, as measured by the Hall effect. Finally, in Chapter 7, the growth of InAlN/AlGaN multiple quantum well (MQW) structures was studied; exposing the InAlN QW to a higher temperature during the ramp to the growth temperature of the AlGaN barrier (around 1100 °C) causes significant indium (In) desorption. To overcome this issue, a quasi-two-temperature (Q2T) technique was applied to protect the InAlN QW. After optimization, intense UV emission from the MQWs was observed in the spectral range from 320 to 350 nm, as measured by room-temperature photoluminescence.

Relevance:

30.00%

Publisher:

Abstract:

The establishment and control of oxygen levels in packs of oxygen-sensitive food products such as cheese is imperative in order to maintain product quality over a determined shelf life. Oxygen sensors quantify oxygen concentrations within packaging using a reversible optical measurement process, and this non-destructive nature ensures the entire supply chain can be monitored and can assist in pinpointing negative issues pertaining to product packaging. This study was carried out in a commercial cheese packaging plant and involved the insertion of 768 sensors into 384 flow-wrapped cheese packs (two sensors per pack) that were flushed with 100% carbon dioxide prior to sealing. The cheese blocks were randomly assigned to two different storage groups to assess the effects of package quality, packaging process efficiency, and handling and distribution on package containment. Results demonstrated that oxygen levels increased in both experimental groups examined over the 30-day assessment period. The group subjected to a simulated industrial distribution route and handling procedures of commercial retailed cheese exhibited the highest level of oxygen detected on every day examined and experienced the highest rate of package failure. The study concluded that fluctuating storage conditions, product movement associated with distribution activities, and the possible presence of cheese-derived contaminants such as calcium lactate crystals were chief contributors to package failure.

Relevance:

30.00%

Publisher:

Abstract:

With applications ranging from aerospace to biomedicine, additive manufacturing (AM) has been revolutionizing the manufacturing industry. The ability of additive techniques, such as selective laser melting (SLM), to create fully functional, geometrically complex, and unique parts out of high strength materials is of great interest. Unfortunately, despite numerous advantages afforded by this technology, its widespread adoption is hindered by a lack of on-line, real time feedback control and quality assurance techniques. In this thesis, inline coherent imaging (ICI), a broadband, spatially coherent imaging technique, is used to observe the SLM process in 15-45 µm 316L stainless steel. Imaging of both single and multilayer builds is performed at a rate of 200 kHz, with a resolution of tens of microns, and a high dynamic range rendering it impervious to blinding from the process beam. This allows imaging before, during, and after laser processing to observe changes in the morphology and stability of the melt. Galvanometer-based scanning of the imaging beam relative to the process beam during the creation of single tracks is used to gain a unique perspective of the SLM process that has so far been unobservable by other monitoring techniques. Single track processing is also used to investigate the possibility of a preliminary feedback control parameter based on the process beam power, through imaging with both coaxial and 100 µm offset alignment with respect to the process beam. The 100 µm offset improved imaging by increasing the number of bright A-lines (i.e. with signal greater than the 10 dB noise floor) by 300%. The overlap between adjacent tracks in a single layer is imaged to detect characteristic fault signatures. Full multilayer builds are carried out and the resultant ICI images are used to detect defects in the finished part and improve upon the initial design of the build system. Damage to the recoater blade is assessed using powder layer scans acquired during a 3D build. The ability of ICI to monitor SLM processes at such high rates with high resolution offers extraordinary potential for future advances in on-line feedback control of additive manufacturing.
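
As a small illustration of the bright A-line metric quoted above (A-lines whose signal exceeds the 10 dB noise floor), with simulated interferometric data:

```python
# Count "bright" A-lines: those whose peak signal exceeds a 10 dB noise floor.
# The A-line data here are simulated, not ICI measurements.
import numpy as np

rng = np.random.default_rng(7)
noise_floor_db = 10.0
# 1000 A-lines x 512 depth pixels of signal power, in dB relative to the noise
a_lines_db = rng.normal(0, 4, size=(1000, 512))

bright = np.count_nonzero(a_lines_db.max(axis=1) > noise_floor_db)
print(f"bright A-lines: {bright} of {len(a_lines_db)}")
```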

Relevance:

30.00%

Publisher:

Abstract:

Spectral unmixing (SU) is a technique to characterize the mixed pixels of hyperspectral images measured by remote sensors. Most existing spectral unmixing algorithms are developed using linear mixing models. Since the number of endmembers/materials present in each mixed pixel is normally small compared with the total number of endmembers (the dimension of the spectral library), the problem is sparse. This thesis introduces sparse hyperspectral unmixing methods for the linear mixing model in two different scenarios. In the first scenario, the library of spectral signatures is assumed to be known, and the main problem is to find the minimum number of endmembers subject to a reasonably small approximation error. Mathematically, the corresponding problem is the $\ell_0$-norm problem, which is NP-hard. The main contribution of the first part of the thesis is to find more accurate and reliable approximations of the $\ell_0$-norm term and to propose sparse unmixing methods based on such approximations. The resulting methods show considerable improvements in reconstructing the fractional abundances of endmembers compared with state-of-the-art methods, such as lower reconstruction errors. In the second part of the thesis, the first scenario (i.e., the dictionary-aided semiblind unmixing scheme) is generalized to the blind unmixing scenario, in which the library of spectral signatures is also estimated. We apply the nonnegative matrix factorization (NMF) method to propose new unmixing methods, owing to notable advantages such as its handling of the nonnegativity constraints on the two factor matrices. Furthermore, we introduce new cost functions based on statistical and physical features of the spectral signatures of materials (SSoM) and of hyperspectral pixels, such as the collaborative property of hyperspectral pixels and the mathematical representation of the concentrated energy of SSoM in the first few subbands. Finally, we introduce sparse unmixing methods for the blind scenario and evaluate the efficiency of the proposed methods via simulations on synthetic and real hyperspectral data sets. The results show considerable improvements in estimating the spectral library of materials and their fractional abundances, such as smaller values of the spectral angle distance (SAD) and the abundance angle distance (AAD).
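
As a concrete instance of the semiblind scenario, the sketch below recovers a sparse nonnegative abundance vector under the linear mixing model using plain nonnegative ISTA with an $\ell_1$ penalty, a common convex surrogate standing in for the thesis's $\ell_0$ approximations; the library and pixel are simulated.

```python
# Semiblind sparse unmixing sketch: recover sparse nonnegative abundances x
# from a mixed pixel y ~ A x, where A is a known spectral library.
# Nonnegative ISTA with an l1 penalty; all data simulated.
import numpy as np

rng = np.random.default_rng(5)
bands, library = 100, 40
A = rng.uniform(0, 1, size=(bands, library))     # library of spectral signatures
x_true = np.zeros(library)
x_true[[3, 17, 29]] = [0.5, 0.3, 0.2]            # 3 active endmembers, sum to 1
y = A @ x_true + rng.normal(0, 0.005, bands)     # observed mixed pixel

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1/L, L = Lipschitz constant
x = np.zeros(library)
for _ in range(2000):
    grad = A.T @ (A @ x - y)
    x = np.maximum(0.0, x - step * (grad + lam)) # prox of l1 + nonnegativity

print("recovered support:", np.flatnonzero(x > 1e-3))
print("reconstruction error:", np.linalg.norm(A @ x - y))
```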

Relevance:

30.00%

Publisher:

Abstract:

Coherent quantum-state manipulation of trapped ions using classical laser fields is a trademark of modern quantum technologies. In this work, we study aspects of work statistics and irreversibility in a single trapped ion due to sudden interaction with an impinging laser. This is clearly an out-of-equilibrium process in which work is performed through illumination of the ion by the laser. Starting with the explicit evaluation of the first moments of the work distribution, we proceed to a careful analysis of irreversibility as quantified by the nonequilibrium lag. The treatment employed here is not restricted to the Lamb-Dicke limit, which allows us to investigate the interplay between nonlinearities and irreversibility. We show, for instance, that in the resolved carrier and sideband regimes, variation of the Lamb-Dicke parameter may cause non-monotonic behavior of the irreversibility indicator. Counterintuitively, we find a working point where nonlinearity helps reversibility, making the sudden quench of the Hamiltonian closer to what would have been obtained quasistatically and isothermally.
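
For context, one standard form of the nonequilibrium lag quantifies the irreversibility of such a sudden quench as

\[
  \Sigma \;=\; \beta\big(\langle W \rangle - \Delta F\big) \;=\; S\big(\rho_\tau \,\big\|\, \rho^{\mathrm{eq}}_\tau\big) \;\ge\; 0,
\]

the relative entropy between the actual post-quench state and the corresponding equilibrium state, which vanishes only for a quasistatic isothermal protocol; the abstract does not spell out the exact definition used, so this is offered as background.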

Relevance:

30.00%

Publisher:

Abstract:

A new variant of the Element-Free Galerkin (EFG) method, which combines the diffraction method to characterize the crack tip solution with a Heaviside enrichment function to represent the discontinuity due to a crack, has been used to model crack propagation through non-homogeneous materials. In the case of interface crack propagation, the kink angle is predicted by applying the maximum tangential principal stress (MTPS) criterion in conjunction with consideration of the energy release rate (ERR). The MTPS criterion is applied to the crack tip stress field described by both the stress intensity factor (SIF) and the T-stress, which are extracted using the interaction integral method. The proposed EFG method has been developed and applied to 2D case studies involving a crack in an orthotropic material, a crack along an interface, and a crack terminating at a bi-material interface, under mechanical or thermal loading, in order to demonstrate the advantages and efficiency of the proposed methodology. The computed SIFs, T-stresses and predicted interface crack kink angles are compared with existing results in the literature and are found to be in good agreement. An example of crack growth through a particle-reinforced composite material, which may involve crack meandering around the particle, is also reported.
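
For reference, the MTPS criterion is typically applied to the tangential stress expansion including the T-stress,

\[
  \sigma_{\theta\theta}(r,\theta) \;=\; \frac{1}{\sqrt{2\pi r}} \cos\frac{\theta}{2}
  \left[ K_{\mathrm{I}} \cos^{2}\frac{\theta}{2} - \frac{3}{2} K_{\mathrm{II}} \sin\theta \right]
  + T \sin^{2}\theta + O\!\big(r^{1/2}\big),
\]

with the kink angle taken as the $\theta$ that maximizes $\sigma_{\theta\theta}$ at a small critical distance $r_c$ from the tip. This is the standard form of the criterion; the paper's interface-crack variant may differ in detail.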

Relevance:

30.00%

Publisher:

Abstract:

In this work, optical filter arrays for high-quality spectroscopic applications in the visible (VIS) wavelength range are investigated. The optical filters, consisting of Fabry-Pérot (FP) filters for high-resolution miniaturized optical nanospectrometers, are based on two highly reflective dielectric mirrors and an intermediate polymer resonance cavity. Each filter transmits a narrow spectral band (called a filter line in this work) whose position depends on the height of the resonance cavity. The efficiency of such an optical filter depends on the precise fabrication of highly selective multispectral arrays of FP filters using low-cost, high-throughput methods. Fabrication of the multiple spectral filters over the entire visible range is achieved on a single substrate in a single imprint step using 3D nanoimprint technology with very high vertical resolution. The key to this process integration is the fabrication of 3D nanoimprint stamps with the desired arrays of filter cavities. The spectral sensitivity of these efficient optical filters depends on the accuracy of the vertically varying cavities, which are fabricated by a large-area "soft" nanoimprint technology, UV substrate-conformal imprint lithography (UV-SCIL). The main problems of UV-based SCIL processes, such as non-uniform residual layer thickness and polymer shrinkage, limit the potential application of this technology. It is very important that the residual layer be thin and uniform so that the critical dimensions of the functional 3D pattern can be controlled during the plasma etching used to remove the residual layer. In the case of the nanospectrometer, the cavities of neighboring FP filters vary vertically, so the volume of each individual filter changes, which leads to a variation of the residual layer thickness under each filter. The volumetric shrinkage caused by the polymerization process affects the size and dimensions of the imprinted polymer cavities. The behavior of the large-area UV-SCIL process is improved by using a volume-balanced design, and the process conditions are optimized. The volume-balanced stamp design distributes 64 vertically varying filter cavities into units of 4 cavities that share a common average volume. Using the balanced volumes, uniform residual layer thicknesses (110 nm) are obtained across all filter heights. The polymer shrinkage is analyzed quantitatively in the lateral and vertical directions of the FP filters. Shrinkage in the vertical direction has the greatest influence on the spectral response of the filters and is reduced from 12% to 4% by changing the exposure time. FP filters fabricated with the volume-balanced stamp and the optimized imprint process show a high-quality spectral response, with a linear dependence between the cavity heights and the spectral positions of the corresponding filter lines.
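
For reference, the linear dependence reported above follows from the idealized Fabry-Pérot resonance condition at normal incidence (neglecting mirror phase shifts),

\[
  m\,\lambda_m \;=\; 2\, n_c\, d, \qquad m = 1, 2, \dots,
\]

so that for a fixed order $m$ the transmitted filter line $\lambda_m$ shifts linearly with the cavity height $d$, where $n_c$ is the refractive index of the polymer cavity.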

Relevance:

30.00%

Publisher:

Abstract:

In the current study, we compared the technical efficiency of smallholder rice farmers with and without credit in northern Ghana using data from a farm household survey. We fitted a stochastic frontier production function to input and output data to measure technical efficiency. We addressed self-selection into credit participation using propensity score matching and found that mean efficiency did not differ between credit users and non-users: credit-participating households had an efficiency of 63.0 percent, compared with 61.7 percent for non-participants. The results indicate significant inefficiencies in production and thus substantial scope for improving farmers' technical efficiency through better use of available resources at the current level of technology. Apart from labour and capital, all the conventional farm inputs had a significant effect on rice production. The determinants of efficiency included the respondent's age, sex, educational status, distance to the nearest market, herd ownership, access to irrigation, and specialisation in rice production. From a policy perspective, we recommend that credit be channelled to farmers who demonstrate the need for it and show the commitment to improve their production through external financing. Such a screening mechanism will ensure that the credit goes to the right farmers, who need it to improve their technical efficiency.
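
A minimal sketch of the propensity-score-matching step described above: fit a participation model, match each credit user to the nearest non-user on the estimated score, and compare mean efficiencies. The covariates, coefficients, and efficiency scores are simulated stand-ins for the survey data.

```python
# Propensity score matching sketch: logistic participation model, then
# nearest-neighbor matching on the score. Simulated stand-in data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(6)
n = 400
X = np.column_stack([rng.normal(45, 12, n),     # age
                     rng.integers(0, 2, n),     # sex
                     rng.integers(0, 12, n)])   # years of education
p_credit = 1 / (1 + np.exp(-(0.02 * (X[:, 0] - 45) + 0.1 * X[:, 2] - 0.5)))
credit = rng.binomial(1, p_credit)
efficiency = np.clip(rng.normal(0.62, 0.12, n), 0, 1)   # from a frontier model

ps = LogisticRegression().fit(X, credit).predict_proba(X)[:, 1]
treated = np.flatnonzero(credit == 1)
control = np.flatnonzero(credit == 0)

nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched = control[idx.ravel()]

print(f"credit users:    {efficiency[treated].mean():.3f}")
print(f"matched control: {efficiency[matched].mean():.3f}")
```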