963 results for Non-normal process


Relevance:

80.00%

Publisher:

Abstract:

Cephalometric analysis is the measurement of linear and angular quantities, defined by landmark points, distances and lines on a teleradiograph, and is of fundamental importance for diagnosis and orthodontic planning. Accordingly, the objective of this research was to compare cephalometric measurements obtained by dentists and by radiologists from the analysis of the same radiograph in a computerized cephalometric analysis program. All research participants marked 18 cephalometric points on a 14-inch notebook computer, as directed by the program itself (Radiocef 2®). From these points, the program generated 14 cephalometric parameters covering skeletal, dento-skeletal, dental and soft-tissue measures. To verify intra-examiner agreement, 10 professionals from each group repeated the marking of the points with a minimum interval of eight days between the two markings. Intra-group variability was calculated from coefficients of variation (CV). The comparison between groups was performed using Student's t-test for normally distributed variables and the Mann-Whitney test for those with non-normal distributions. In the group of orthodontists, the measurements Pog and 1-NB, SL, S-Ls Line, S-Li Line and 1.NB showed high internal variability; in the group of radiologists, the same occurred for Pog and 1-NB, S-Ls Line, S-Li Line and 1.NA. In the between-group comparison, all of the analyzed linear values and two angular values showed statistically significant differences between radiologists and dentists (p < 0.05). According to the results, inter-examiner error in cephalometric analysis requires more attention, but it does not originate from one specific class of specialists, whether dentists or radiologists.
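
The testing strategy described above can be sketched in a few lines: check each variable for normality, then apply Student's t-test or the Mann-Whitney test accordingly, alongside the coefficient of variation used for intra-group variability. The data below are synthetic placeholders, not the study's measurements.

```python
# Sketch of the two-stage comparison: variables that pass a normality check
# are compared with Student's t-test, the rest with Mann-Whitney.
# The samples here are synthetic stand-ins for cephalometric measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
orthodontists = rng.normal(82.0, 3.0, size=30)   # hypothetical angle values
radiologists = rng.normal(80.5, 3.0, size=30)

def compare_groups(a, b, alpha=0.05):
    """Pick the test according to a Shapiro-Wilk normality check on both groups."""
    normal = stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha
    if normal:
        return "t-test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney", stats.mannwhitneyu(a, b).pvalue

def coefficient_of_variation(x):
    """CV of the kind used to quantify intra-group variability."""
    return np.std(x, ddof=1) / np.mean(x)

test_used, p_value = compare_groups(orthodontists, radiologists)
cv = coefficient_of_variation(orthodontists)
```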

Relevance:

80.00%

Publisher:

Abstract:

This thesis studies the cross-validation criterion for model selection in small-area estimation. The study is restricted to unit-level small-area models. The basic unit-level small-area model was introduced by Battese, Harter and Fuller in 1988. It is a linear mixed regression model with a random intercept, involving several parameters: the fixed-effects parameter β, the random component, and the variances associated with the residual error. The Battese et al. model is used to predict, in a survey, the mean of a variable of interest y in each small area using an administrative auxiliary variable x known over the whole population. The estimation method models the residual component with a normal distribution. Allowing a general residual dependence, that is, one other than the normal law, yields a more flexible methodology. This generalization leads to a new class of exchangeable models: the generalization lies in the modelling of the residual dependence, which may be either normal (the case of the Battese et al. model) or non-normal. The objective is to determine the small-area parameters as precisely as possible, which hinges on choosing the right residual dependence for the model. The cross-validation criterion is studied for this purpose.
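
The model-selection idea can be illustrated with a toy leave-one-domain-out cross-validation that scores two candidate residual laws (normal vs. a heavier-tailed Student-t) on simulated unit-level data. This is only a sketch of the criterion, not the Battese-Harter-Fuller estimator itself.

```python
# Toy illustration: choose the residual distribution by cross-validated
# predictive log-score, leaving out one domain at a time.
# All data are simulated; the regression fit is plain least squares.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_domains, n_units = 10, 20
x = rng.uniform(0, 1, size=(n_domains, n_units))
u = rng.normal(0, 0.5, size=n_domains)                # random domain intercepts
y = 2.0 + 3.0 * x + u[:, None] + rng.normal(0, 1.0, size=x.shape)

def loo_domain_score(logpdf):
    """Leave-one-domain-out predictive log-score for a residual law."""
    score = 0.0
    for d in range(n_domains):
        keep = [i for i in range(n_domains) if i != d]
        X = np.column_stack([np.ones(x[keep].size), x[keep].ravel()])
        beta, *_ = np.linalg.lstsq(X, y[keep].ravel(), rcond=None)
        resid = y[d] - (beta[0] + beta[1] * x[d])
        scale = resid.std(ddof=1)
        score += logpdf(resid, scale).sum()
    return score

normal_score = loo_domain_score(lambda r, s: stats.norm.logpdf(r, scale=s))
t_score = loo_domain_score(lambda r, s: stats.t.logpdf(r, df=4, scale=s))
best = "normal" if normal_score > t_score else "student-t"
```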

Relevance:

80.00%

Publisher:

Abstract:

Our work focuses on experimental and theoretical studies aimed at establishing a fundamental understanding of the principal electrical and optical processes governing the operation of quantum dot solar cells (QDSCs) and their feasibility for the realization of the intermediate band solar cell (IBSC). Uniform-performance QD solar cells with high conversion efficiency were fabricated using carefully calibrated process recipes as the basis of all reliable experimental characterization. The origin of the enhancement of the short-circuit current density (Jsc) in QD solar cells was carefully investigated. External quantum efficiency (EQE) measurements were performed as a measure of the below-bandgap distribution of transition states. In this work, we found that the incorporation of self-assembled quantum dots (QDs) interrupts the lattice periodicity and introduces a greatly broadened tailing density of states extending from the band edge towards mid-gap. A below-bandgap density of states (DOS) model with an extended Urbach tail has been developed. In particular, the below-bandgap photocurrent generation has been attributed to transitions via confined energy states and background continuum tailing states. Photoluminescence measurements are used to determine the energy level of the lowest available state and the coupling between QD states and background tailing states, because photoluminescence results from a non-equilibrium process. Basic I-V measurements reveal a degradation of the open-circuit voltage (Voc) of QD solar cells, which is related to a one-sub-bandgap-photon absorption process followed by direct collection of the generated carriers by the external circuit. We have proposed a modified Shockley-Queisser (SQ) model that predicts the degradation of Voc compared with a reference bulk device. Whenever an energy state within the forbidden gap can facilitate additional absorption, it can facilitate recombination as well.

If the recombination is non-radiative, it is detrimental to solar cell performance. We have also investigated QD trapping effects as deep-level energy states. Without an efficient carrier extraction pathway, the QDs can indeed function as mobile-carrier traps. Since the hole energy levels still largely permit hole collection at room temperature, the trapping effect is more severe for electrons. We electron-doped the QDs to exert a repulsive Coulomb force and thereby improve the carrier collection efficiency, and experimentally observed a 30% improvement of Jsc for 4e/dot devices compared with 0e/dot devices. Electron doping improves carrier collection; however, we also measured a smaller transition probability from the valence band to the QD states, a direct manifestation of the Pauli exclusion principle. The nonlinear performance is of particular interest. With lasers available at on-resonance and off-resonance excitation energies, we explored the photocurrent enhancement by a sequential two-photon absorption (2PA) process via the intermediate states. For the first time, we are able to distinguish the nonlinear contributions of the 1PA and 2PA processes. The observed 2PA current under off-resonant and on-resonant excitation comes from a two-step transition via the tailing states rather than the QD states. However, given the existence of an extended Urbach tail and the small number of photons available for the intermediate-state-to-conduction-band transition, the experimental results suggest that, with the current material system, the intensity required for an observable enhancement of photocurrent via a 2PA process is much higher than what is available from concentrated sunlight. To realize the IBSC model, matching transition strengths need to be achieved between the valence-band-to-QD-state and QD-state-to-conduction-band transitions.

However, we have experimentally shown that only a negligible signal can be observed at cryogenic temperature via the transition from QD states to the conduction band under broadband IR excitation. Based on the understanding we have achieved, we find that the extended tailing density of states, together with the large mismatch of the transition strengths from VB to QD and from QD to CB, systematically calls into question the feasibility of the IBSC model with QDs.
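
The extended-Urbach-tail picture invoked above can be sketched numerically: below the bandgap, the absorption decays exponentially with an Urbach energy E_u. All parameter values here are illustrative assumptions, not fitted device data.

```python
# Minimal numerical sketch of an extended Urbach tail: sub-gap absorption
# decays exponentially below the bandgap; values are generic assumptions.
import numpy as np

E_g = 1.42            # GaAs-like bandgap, eV (assumed)
E_u = 0.10            # Urbach energy, eV (assumed "extended" tail)
alpha_0 = 1.0e4       # band-edge absorption coefficient, 1/cm (assumed)

def urbach_absorption(E):
    """Exponential sub-gap tail; held constant above the gap for simplicity."""
    E = np.asarray(E, dtype=float)
    return np.where(E < E_g, alpha_0 * np.exp((E - E_g) / E_u), alpha_0)

E = np.linspace(1.0, 1.6, 601)
alpha = urbach_absorption(E)
# Sub-gap absorption is small but non-zero well below E_g, which is what
# lets sub-bandgap photons contribute to the photocurrent.
ratio_100meV_below = float(urbach_absorption(E_g - 0.1) / alpha_0)
```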

Relevance:

80.00%

Publisher:

Abstract:

The main purpose of this study is to present an alternative benchmarking approach that can be used by national regulators of utilities. It is widely known that the lack of sizeable data sets limits the choice of benchmarking method and the specification of the model used to set price controls within incentive-based regulation. Ill-posed frontier models are a problem that some national regulators have been facing. Maximum entropy estimators are useful in the estimation of such ill-posed models, in particular in models exhibiting small sample sizes, collinearity and non-normal errors, as well as in models where the number of parameters to be estimated exceeds the number of observations available. The empirical study uses the sample data employed by the Portuguese regulator of the electricity sector to set the parameters for the electricity distribution companies in the 2012-2014 regulatory period. DEA and maximum entropy methods are applied and the efficiency results are compared.
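
As a sketch of the DEA side of the comparison, an input-oriented CCR efficiency score can be computed as a small linear programme. The data below are invented, not the regulator's data set.

```python
# Input-oriented CCR DEA: for each unit j0, minimise theta subject to
#   X @ lam <= theta * x_j0   and   Y @ lam >= y_j0,  lam >= 0.
# Invented data for four hypothetical distribution firms.
import numpy as np
from scipy.optimize import linprog

X = np.array([[10.0, 12.0, 8.0, 15.0]])      # one input, e.g. operating cost
Y = np.array([[100.0, 110.0, 70.0, 120.0]])  # one output, e.g. energy delivered

def dea_ccr_input(j0):
    """CCR efficiency of unit j0; decision vector is [theta, lam_1..lam_n]."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.concatenate([[1.0], np.zeros(n)])        # minimise theta
    A_inputs = np.hstack([-X[:, [j0]], X])          # X@lam - theta*x0 <= 0
    A_outputs = np.hstack([np.zeros((s, 1)), -Y])   # -Y@lam <= -y0
    A_ub = np.vstack([A_inputs, A_outputs])
    b_ub = np.concatenate([np.zeros(m), -Y[:, j0]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]

scores = [dea_ccr_input(j) for j in range(X.shape[1])]
```

With a single input and output, the score reduces to each unit's output/input ratio divided by the best ratio, which is a useful sanity check on the LP.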

Relevance:

80.00%

Publisher:

Abstract:

Organizations can be understood as the outcome of the historical needs of their environment and of social systems in their evolutionary process, and their sustainability depends on their capacity to understand their own complexity. This article proposes that providing the conditions for innovation, as an expression of organizational culture, is an option that guarantees sustainability and requires a non-uniform, unpredictable process, that is, a complex one. The reflections suggest that, to guarantee their sustainability, firms must find highly dynamic and transitory quasi-equilibria between the core functional requirements of demand and the structural capacities of supply. As a result of this reflection, it is proposed that decisions implemented on the basis of these capacities can be accepted within a wide range of efficient evolutionary strategies, along a broad spectrum that runs from positions close to orthodox economic approaches to current approaches to innovation.

Relevance:

80.00%

Publisher:

Abstract:

In this study, the Schwarz Information Criterion (SIC) is applied to detect change-points in time series of surface water quality variables. The change-point analysis allowed the detection of change-points in both the mean and the variance of the series under study. Time variations in environmental data are complex, and they can hinder the identification of the so-called change-points when traditional models are applied to this type of problem. The assumptions of normality and uncorrelatedness do not hold for some of the time series, so a simulation study is carried out to evaluate the methodology's performance when applied to non-normal and/or time-correlated data.
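
A minimal sketch of SIC-based change-point detection in the mean, assuming a common variance, is shown below; this is the textbook form of the criterion, not necessarily the exact variant used in the study.

```python
# SIC change-point detection in the mean: compare the no-change model
# (2 parameters: mean, variance) against a change at k (3 parameters:
# two segment means, common variance); declare a change if some SIC(k)
# beats the no-change SIC.
import numpy as np

def sic_no_change(x):
    n = len(x)
    return n * np.log(x.var()) + 2 * np.log(n)

def sic_change_at(x, k):
    n = len(x)
    pooled = np.concatenate([x[:k] - x[:k].mean(), x[k:] - x[k:].mean()])
    return n * np.log((pooled ** 2).mean()) + 3 * np.log(n)

def detect_change(x):
    n = len(x)
    candidates = {k: sic_change_at(x, k) for k in range(2, n - 1)}
    k_hat = min(candidates, key=candidates.get)
    return k_hat if candidates[k_hat] < sic_no_change(x) else None

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 60), rng.normal(2, 1, 60)])  # shift at 60
k_hat = detect_change(x)
```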

Relevance:

80.00%

Publisher:

Abstract:

In Part 1 of this thesis, we propose that biochemical cooperativity is a fundamentally non-ideal process. We show quantal effects underlying biochemical cooperativity and highlight apparent ergodic breaking at small volumes. The apparent ergodic breaking manifests itself in a divergence of deterministic and stochastic models. We further predict that this divergence of deterministic and stochastic results is a failure of the deterministic methods rather than an issue of stochastic simulations.
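
The deterministic/stochastic divergence can be illustrated with a toy bimolecular reaction at low copy number, where the mass-action ODE rate (proportional to a²) differs from the stochastic propensity (proportional to a(a-1)). This Gillespie sketch illustrates the general phenomenon, not the thesis's model.

```python
# Toy comparison of a deterministic ODE and a Gillespie simulation for the
# reaction 2A -> 0 at a small initial copy number, where the two
# descriptions diverge. Rate constant and horizon are arbitrary choices.
import numpy as np

k = 0.1       # rate constant (assumed units)
a0 = 10       # initial copies of A, small enough for quantal effects
t_end = 5.0

def gillespie_mean(n_runs=2000, seed=3):
    """Mean final copy number over exact stochastic simulations."""
    rng = np.random.default_rng(seed)
    finals = []
    for _ in range(n_runs):
        a, t = a0, 0.0
        while True:
            prop = k * a * (a - 1) / 2          # propensity of 2A -> 0
            if prop == 0:
                break
            t += rng.exponential(1.0 / prop)
            if t > t_end:
                break
            a -= 2
        finals.append(a)
    return float(np.mean(finals))

def ode_final(steps=50000):
    """Explicit Euler integration of the mass-action ODE da/dt = -k a^2."""
    a, dt = float(a0), t_end / steps
    for _ in range(steps):
        a -= k * a * a * dt
    return a

stochastic_mean = gillespie_mean()
deterministic = ode_final()
```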

Ergodic breaking at small volumes may allow these molecular complexes to function as switches to a greater degree than has previously been shown. We propose that this ergodic breaking is a phenomenon that the synapse might exploit to differentiate Ca$^{2+}$ signaling that would lead to either the strengthening or weakening of a synapse. Techniques such as lattice-based statistics and rule-based modeling are tools that allow us to directly confront this non-ideality. A natural next step towards understanding the chemical physics that underlies these processes is to consider \textit{in silico} methods, specifically atomistic simulation methods, that might augment our modeling efforts.

In the second part of this thesis, we use evolutionary algorithms to optimize \textit{in silico} methods that might be used to describe biochemical processes at the subcellular and molecular levels. While we have applied evolutionary algorithms to several methods, this thesis will focus on the optimization of charge equilibration methods. Accurate charges are essential to understanding the electrostatic interactions that are involved in ligand binding, as frequently discussed in the first part of this thesis.

Relevance:

80.00%

Publisher:

Abstract:

Doctor of Philosophy in Mathematics

Relevance:

80.00%

Publisher:

Abstract:

Wingtip vortices are created by aircraft as a consequence of lift generation. The interaction of these vortices with trailing aircraft has sparked researchers' interest in developing an efficient technique to destroy them. Different models have been used to describe the vortex dynamics, and they all show that, under real flight conditions, the most unstable modes produce only very weak amplification. Another linear instability mechanism that can produce high energy gains in short times is due to the non-normality of the system. Recently, it has been shown that these non-normal perturbations also produce this energy growth when they are excited with harmonic forcing functions. In this study, we analyze numerically the nonlinear evolution of a perturbation forced pointwise in space and harmonically in time, generated by a synthetic jet at a given radial distance from the vortex core. This type of perturbation is able to produce high energy gains in the perturbed base flow (of order 10^3) and is also a suitable candidate for engineering applications. The flow field is computed using fully nonlinear three-dimensional direct numerical simulation with a spectral multidomain penalty method. Our novel results show that the nonlinear effects are able to produce small local bursts of instability that reduce the intensity of the primary vortex.
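
The non-normal growth mechanism can be shown in its simplest linear-algebra form: for a stable but non-normal operator A, the optimal energy gain G(t) = ||exp(tA)||² can transiently far exceed 1 before the eventual modal decay. The 2×2 operator below is a standard toy model, not the vortex equations.

```python
# Transient energy growth of a stable, strongly non-normal linear system:
# G(t) = ||exp(tA)||_2^2 (largest singular value squared of the propagator).
import numpy as np
from scipy.linalg import expm

A = np.array([[-0.05, 10.0],
              [0.0, -0.10]])   # both eigenvalues stable; large off-diagonal coupling

def energy_gain(t):
    """Worst-case energy amplification over all unit initial conditions."""
    return np.linalg.norm(expm(t * A), 2) ** 2

times = np.linspace(0.0, 60.0, 241)
gains = np.array([energy_gain(t) for t in times])
peak_gain = gains.max()                 # large transient growth despite stability
eigenvalues = np.linalg.eigvals(A)
```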

Relevance:

80.00%

Publisher:

Abstract:

Gradient plasticity modelling combining a micro-structure-related constitutive description of the local material behaviour with a particular gradient plasticity frame is presented. The constitutive formulation is based on a phase mixture model in which the dislocation cell walls and the cell interiors are considered as separate 'phases', the respective dislocation densities entering as internal variables. Two distinct physical mechanisms, which give rise to gradient plasticity, are considered. The first one is associated with the occurrence of geometrically necessary dislocations leading to first-order strain gradients; the second one is associated with the reaction stresses due to plastic strain incompatibilities between neighbouring grains, which lead to second-order strain gradients. These two separate variants of gradient plasticity were applied to the case of high-pressure torsion: a process known to result in a fairly uniform, ultrafine grained structure of metals. It is shown that the two complementary variants of gradient plasticity can both account for the experimental results, thus resolving a controversial issue of the occurrence of a uniform micro-structure as a result of an inherently non-uniform process.
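
The phase-mixture constitutive idea can be sketched as a rule of mixtures over Taylor-type stresses for the cell-wall and cell-interior 'phases', each driven by its own dislocation density. Parameter values below are generic assumptions for an aluminium-like metal, not fitted ones.

```python
# Rule-of-mixtures flow stress for a two-'phase' dislocation structure:
# each phase contributes a Taylor stress M*alpha*G*b*sqrt(rho), weighted by
# its volume fraction. All constants are generic assumed values.
import numpy as np

alpha = 0.3          # Taylor constant (assumed)
G = 26e9             # shear modulus, Pa (aluminium-like, assumed)
b = 2.86e-10         # Burgers vector magnitude, m (assumed)
M = 3.06             # Taylor factor for polycrystals (assumed)

def taylor_stress(rho):
    """Flow stress of one 'phase' from its dislocation density rho (1/m^2)."""
    return M * alpha * G * b * np.sqrt(rho)

def phase_mixture_stress(rho_wall, rho_cell, f_wall):
    """Volume-fraction-weighted mixture of wall and cell-interior stresses."""
    return f_wall * taylor_stress(rho_wall) + (1 - f_wall) * taylor_stress(rho_cell)

# Dense walls, sparse interiors: a typical cell-structure contrast (assumed).
sigma = phase_mixture_stress(rho_wall=1e15, rho_cell=1e13, f_wall=0.2)
```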

Relevance:

80.00%

Publisher:

Abstract:

Current copper-based circuit technology is becoming a limiting factor in high-speed data transfer applications, as processors are improving at a faster rate than on-board data-transfer capacity. One solution is to utilize optical waveguide technology to overcome these bandwidth and loss restrictions. The use of this technology virtually eliminates the heat and cross-talk loss seen in copper circuitry, while also operating at a higher bandwidth. Transitioning current fabrication techniques from small-scale laboratory environments to large-scale manufacturing presents significant challenges. Optical-to-electrical connections and out-of-plane coupling are significant hurdles in the advancement of optical interconnects. The main goals of this research are the development of direct-write material deposition and patterning tools for the fabrication of waveguide systems on large substrates, and the development of out-of-plane coupler components compatible with standard fiber-optic cabling. Combining these elements with standard printed circuit boards allows for the fabrication of fully functional optical-electrical printed wiring boards (OEPWBs). A direct dispense tool was designed, assembled, and characterized for the repeatable dispensing of blanket waveguide layers over a range of thicknesses (25-225 µm), eliminating waste material and affording the ability to utilize large substrates. This tool was used to directly dispense multimode waveguide cores which required no UV definition or development. These cores had circular cross sections and were comparable in optical performance to lithographically fabricated square waveguides. Laser direct writing is a non-contact process that allows for the dynamic UV patterning of waveguide material on large substrates, eliminating the need for high-resolution masks.
A laser direct-write tool was designed, assembled, and characterized for patterning waveguides comparable in quality to those produced using standard lithographic practices (0.047 dB/cm loss for laser-written waveguides compared to 0.043 dB/cm for lithographic waveguides). Straight waveguides and waveguide turns were patterned at multimode and single-mode sizes, and the process was characterized and documented. Support structures such as angled reflectors and vertical posts were produced, showing the versatility of the laser direct-write tool. Commercially available components were integrated into the optical layer for out-of-plane routing of the optical signals. These devices featured spherical lenses on the input and output sides of a total internal reflection (TIR) mirror, as well as alignment pins compatible with the standard MT design. Fully functional OEPWBs were fabricated featuring input and output out-of-plane optical signal routing with total optical losses not exceeding 10 dB. These prototypes survived thermal cycling (-40°C to 85°C) and humidity exposure (95±4% humidity), showing minimal degradation in optical performance. Operational failure occurred only after environmental aging at 110°C for 216 hours.
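
A loss budget consistent with the figures above can be sketched from the quoted propagation loss; the route length and per-coupler loss below are assumed placeholders, not measured values.

```python
# Simple optical loss budget: propagation loss per centimetre (quoted above
# for laser-written waveguides) plus fixed losses at the two out-of-plane
# couplers. Length and coupler loss are hypothetical assumptions.
waveguide_loss_db_per_cm = 0.047   # laser-written waveguide loss (quoted)
length_cm = 20.0                   # hypothetical on-board route length
coupling_loss_db = 3.0             # per out-of-plane coupler (assumed)

total_loss_db = waveguide_loss_db_per_cm * length_cm + 2 * coupling_loss_db
fraction_transmitted = 10 ** (-total_loss_db / 10)
```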

Relevance:

80.00%

Publisher:

Abstract:

Efficient numerical models facilitate the study and design of solid oxide fuel cells (SOFCs), stacks, and systems. Whilst researchers usually seek accuracy and reliability in the computed results, the corresponding modelling complexity can create practical difficulties with implementation flexibility and computational cost. The main objective of this article is to adapt a simple but viable numerical tool for the evaluation of our experimental rig. Accordingly, a model for a multi-layer SOFC surrounded by a constant-temperature furnace is presented, trained and validated against experimental data. The model consists of a four-layer structure comprising a stand, two interconnects, and the PEN (Positive electrode-Electrolyte-Negative electrode), each approximated by a lumped-parameter model. The heating process through the surrounding chamber is also considered. We used a set of V-I characteristics for parameter adjustment, followed by model verification against two independent data sets. The model results show good agreement with experimental data, offering a significant improvement over reduced models in which the impact of external heat loss is neglected. Furthermore, a thermal analysis of adiabatic and non-adiabatic processes is carried out to capture the thermal behaviour of a single cell, followed by a polarisation loss assessment. Finally, model-based design of experiments is demonstrated for a case study.
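
The heating process can be sketched as a single lumped thermal node driven toward the furnace temperature with an external heat-loss term. The coefficients below are illustrative assumptions, not the calibrated values of the experimental rig.

```python
# One-node lumped-parameter thermal model:
#   C dT/dt = hA_f (T_furnace - T) - hA_l (T - T_ambient)
# integrated with explicit Euler. All coefficients are assumed values.
T_furnace = 1023.0   # K, assumed furnace setpoint
T_ambient = 298.0    # K
C = 500.0            # lumped heat capacity, J/K (assumed)
hA_furnace = 2.0     # furnace-to-cell conductance, W/K (assumed)
hA_loss = 0.1        # external heat-loss conductance, W/K (assumed)

def simulate(t_end=3600.0, dt=1.0):
    """Explicit Euler integration of the lumped thermal balance."""
    T = T_ambient
    for _ in range(int(t_end / dt)):
        dTdt = (hA_furnace * (T_furnace - T) - hA_loss * (T - T_ambient)) / C
        T += dTdt * dt
    return T

T_final = simulate()
# With a non-zero external loss the cell settles below the furnace
# temperature, which is why neglecting heat loss biases reduced models.
T_steady = (hA_furnace * T_furnace + hA_loss * T_ambient) / (hA_furnace + hA_loss)
```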

Relevance:

80.00%

Publisher:

Abstract:

Project work presented to the Escola Superior de Educação of the Instituto Politécnico de Castelo Branco in fulfilment of the requirements for the degree of Master in Social Gerontology.

Relevance:

50.00%

Publisher:

Abstract:

This report outlines the derivation and application of a Gaussian process with a non-zero mean and a polynomial-exponential covariance function, which forms the prior wind field model used in 'autonomous' disambiguation. It is used principally because the non-zero mean permits the computation of realistic local wind vector prior probabilities, which are required as the marginals of the full wind field prior when applying the scaled-likelihood trick. As the full prior is multivariate normal, these marginals are very simple to compute.
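
The prior can be sketched with an assumed polynomial-times-exponential covariance (a Matérn-3/2-like form with the right qualitative shape; the report's exact covariance and mean are not reproduced here), showing how the local marginals fall out of the multivariate normal prior.

```python
# Gaussian process prior with a non-zero constant mean and a
# polynomial-exponential covariance k(r) = sigma2 * (1 + r/l) * exp(-r/l).
# Mean, variance, and length scale are illustrative assumptions.
import numpy as np

mean_wind = 5.0      # non-zero prior mean wind speed, m/s (assumed)
sigma2 = 4.0         # prior variance (assumed)
length = 50.0        # length scale, km (assumed)

def poly_exp_cov(r):
    """Polynomial-times-exponential covariance as a function of distance r."""
    return sigma2 * (1.0 + r / length) * np.exp(-r / length)

# Because the full prior over sites is multivariate normal, the marginal at
# each site is simply N(mean_wind, k(0)) -- no matrix inversion needed.
sites = np.linspace(0.0, 200.0, 21)
K = poly_exp_cov(np.abs(sites[:, None] - sites[None, :]))
marginal_means = np.full(len(sites), mean_wind)
marginal_vars = np.diag(K)
```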

Relevance:

40.00%

Publisher:

Abstract:

A non-destructive, diffuse reflectance near-infrared spectroscopy (DR-NIRS) approach is considered as a potential tool for determining the component-level structural properties of articular cartilage. To this end, DR-NIRS was applied in vitro to detect structural changes, using principal component analysis as the statistical basis for characterization. The results show that this technique, particularly with first-derivative pretreatment, can distinguish normal, intact cartilage from enzymatically digested cartilage. Further, this paper establishes that DR-NIRS enables probing of the full depth of the uncalcified cartilage matrix, potentially allowing the assessment of degenerative changes in joint tissue independent of the site of initiation of the osteoarthritic process.
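
The analysis pipeline described above can be sketched as first-derivative pretreatment followed by principal component analysis; the spectra below are synthetic stand-ins for DR-NIRS measurements, with a small assumed band shift between the two groups.

```python
# First-derivative pretreatment followed by PCA (via SVD) on synthetic
# NIR-like spectra for two hypothetical groups ('intact' vs 'digested').
import numpy as np

rng = np.random.default_rng(4)
wavelengths = np.linspace(1000, 2500, 200)   # nm, assumed NIR range

def spectrum(center):
    """Synthetic absorption band plus measurement noise."""
    band = np.exp(-((wavelengths - center) / 150.0) ** 2)
    return band + 0.01 * rng.normal(size=wavelengths.size)

spectra = np.array([spectrum(1450) for _ in range(10)] +   # 'intact' group
                   [spectrum(1500) for _ in range(10)])    # 'digested' group

derivs = np.gradient(spectra, axis=1)        # first-derivative pretreatment

def pca_scores(X, n_components=2):
    """PCA scores via SVD of the mean-centred data matrix."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :n_components] * S[:n_components]

scores = pca_scores(derivs)
# Separation of the two groups along PC1 is the kind of effect that lets
# the method distinguish intact from enzymatically digested tissue.
group_gap = abs(scores[:10, 0].mean() - scores[10:, 0].mean())
```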