931 results for Error estimator


Relevance:

20.00%

Publisher:

Abstract:

Objective: To assess how the personal and occupational characteristics of nursing professionals influence care errors in hospital wards. Method: Cross-sectional descriptive study of 254 nurses from the public hospitals of Zaragoza, Spain. A questionnaire was administered containing questions on the professionals' sociodemographic and employment data and on healthcare errors. Results: The sample was predominantly female (88.6%), with a mean age of 37.4 years. 45.2% had less than 10 years of professional experience, and there was a high rate of job mobility. There is an association between age, sex, mobility across hospital units, and the number of errors committed (p<0.05). Work overload and pressure from relatives and patients are the workplace factors with the greatest influence at the moment a care error is committed. Conclusions: Error rates in hospital nursing practice are influenced by worker characteristics and the work environment. Reducing their frequency will require providing professionals with training appropriate to their unit and preventing modifiable risk factors such as excessive workloads and pressure from the social work environment.

Relevance:

20.00%

Publisher:

Abstract:

In this article we consider the a posteriori error estimation and adaptive mesh refinement of discontinuous Galerkin finite element approximations of the bifurcation problem associated with the steady incompressible Navier-Stokes equations. Particular attention is given to the reliable error estimation of the critical Reynolds number at which a steady pitchfork or Hopf bifurcation occurs when the underlying physical system possesses reflectional or Z_2 symmetry. Here, computable a posteriori error bounds are derived based on employing the generalization of the standard Dual-Weighted-Residual approach, originally developed for the estimation of target functionals of the solution, to bifurcation problems. Numerical experiments highlighting the practical performance of the proposed a posteriori error indicator on adaptively refined computational meshes are presented.
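
To make the Dual-Weighted-Residual mechanism concrete, here is a minimal sketch in Python/NumPy, assuming a 1D Poisson model problem with a point-value target functional J(u) = u(1/2) rather than the paper's DG Navier-Stokes bifurcation setting; the mesh sizes and manufactured load are illustrative choices. The coarse residual is weighted by the dual weight z - I_h z, with the dual solution z approximated on a refined mesh.

```python
import numpy as np

def stiffness(n, h):
    """P1 stiffness matrix for -u'' with homogeneous Dirichlet BCs (interior nodes)."""
    return (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h

f = lambda x: np.pi**2 * np.sin(np.pi * x)   # manufactured load: u = sin(pi x)

# Primal solve on a coarse mesh (lumped load vector).
nc, hc = 19, 1 / 20
xc = np.linspace(hc, 1 - hc, nc)
uc = np.linalg.solve(stiffness(nc, hc), hc * f(xc))

# Dual solve on a uniformly refined mesh; for J(u) = u(1/2) the dual load is a
# discrete point source at x = 1/2.
nf, hf = 39, 1 / 40
xf = np.linspace(hf, 1 - hf, nf)
jf = np.zeros(nf); jf[nf // 2] = 1.0
zf = np.linalg.solve(stiffness(nf, hf).T, jf)

def prolong(vals, xs):
    """Piecewise-linear extension onto the fine nodes, with zero boundary values."""
    return np.interp(xf, np.r_[0.0, xs, 1.0], np.r_[0.0, vals, 0.0])

# Dual weight z - I_h z: subtract the coarse-space interpolant of z.
w = zf - prolong(np.interp(xc, xf, zf), xc)

# Residual of the coarse solution evaluated in the fine space, then weighted.
r = hf * f(xf) - stiffness(nf, hf) @ prolong(uc, xc)
eta = np.abs(r * w)                          # local refinement indicators
print("estimated |J(u) - J(u_h)| ~", abs(r @ w))
```

Adaptive refinement then targets the cells with the largest eta, just as the paper's indicator drives mesh adaptation toward the quantity of interest (there the critical Reynolds number rather than a point value).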

Relevance:

20.00%

Publisher:

Abstract:

In this paper we show how to accurately perform a quasi-a priori estimation of the truncation error of steady-state solutions computed by a discontinuous Galerkin spectral element method. We estimate the spatial truncation error using the τ-estimation procedure. While most works in the literature rely on fully time-converged solutions on grids with different spacing to perform the estimation, we use non-time-converged solutions on one grid with different polynomial orders. The quasi-a priori approach estimates the error while the residual of the time-iterative method is not negligible. Furthermore, the method permits one to decouple the surface and the volume contributions of the truncation error, and provides information about the anisotropy of the solution as well as its rate of convergence in polynomial order. First, we focus on the analysis of one-dimensional scalar conservation laws to examine the accuracy of the estimate. Then, we extend the analysis to two-dimensional problems. We demonstrate that this quasi-a priori approach yields a spectrally accurate estimate of the truncation error.
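
A minimal sketch of the τ-estimation step, assuming a 1D steady advection model problem u'(x) = s(x) and Chebyshev collocation as a stand-in for the paper's DG spectral element discretization (the orders p and P and the source term are arbitrary choices): the low-order solution is re-expanded at the high-order nodes and inserted into the high-order operator, and the resulting residual is the truncation-error estimate.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and nodes on [-1, 1] (Trefethen's cheb)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.r_[2.0, np.ones(N - 1), 2.0] * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1 / c) / (dX + np.eye(N + 1))
    return D - np.diag(D.sum(axis=1)), x

p, P = 8, 16                      # low and high polynomial orders
Dp, xp = cheb(p)
DP, xP = cheb(P)
s = np.cos                        # source term; exact solution u = sin(x) + const

# Low-order solve of u'(x) = s(x) with u(-1) = 0 (x = -1 is the last node).
Ap, bp = Dp.copy(), s(xp)
Ap[-1, :] = 0.0; Ap[-1, -1] = 1.0; bp[-1] = 0.0
up = np.linalg.solve(Ap, bp)

# tau-estimation: evaluate the order-p solution at the order-P nodes and read
# off the order-P residual there.
coeffs = np.polynomial.chebyshev.chebfit(xp, up, p)
tau = DP @ np.polynomial.chebyshev.chebval(xP, coeffs) - s(xP)
print("truncation error estimate, max |tau| =", np.abs(tau).max())
```

In the quasi-a priori version described in the abstract, the iteration residual of the not-yet-converged low-order solution would additionally be subtracted from tau, which is what lets the estimate be formed before time convergence.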

Relevance:

20.00%

Publisher:

Abstract:

The Bahadur representation and its applications have attracted a large number of publications and presentations on a wide variety of problems. Mixing dependency is weak enough to describe the dependence structure of random variables, including observations in time series and longitudinal studies. This note proves the Bahadur representation of sample quantiles for strongly mixing random variables (including ρ-mixing and φ-mixing) under very weak mixing coefficients. As an application, asymptotic normality is derived. These results greatly improve those recently reported in the literature.
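
For reference, the representation in question writes the sample p-quantile $\hat{\xi}_{p,n}$ of observations $X_1, \dots, X_n$ as a perturbed population quantile:

$$\hat{\xi}_{p,n} \;=\; \xi_p \;+\; \frac{p - F_n(\xi_p)}{f(\xi_p)} \;+\; R_n \quad \text{a.s.},$$

where $\xi_p$ is the population quantile, $F_n$ the empirical distribution function, $f$ the population density, and $R_n$ a remainder vanishing almost surely at a rate governed by the mixing coefficients. Asymptotic normality then follows by applying a central limit theorem for mixing sequences to the average $F_n(\xi_p) = n^{-1}\sum_{i=1}^{n} \mathbf{1}\{X_i \le \xi_p\}$.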

Relevance:

20.00%

Publisher:

Abstract:

This research traces its origins to the fifteenth century, when auditing is presumed to have been driven by man's need to carry out activities of observation, investigation, checking and verification of the financial information generated by businesses; more concretely, it was born among certain wealthy families of England, giving the word "Auditor" the meaning of "one who hears." For some years now, the work of public accountants has been vital to the process of auditing financial statements, which entails the need to structure in depth the aspects relevant to the conduct of audits, among which the risk of material misstatement is fundamental, not only because of the importance of the topic but also because it is the auditors who express an opinion on the fairness of the figures in the financial information under review and study. This is where the IAASB (International Auditing and Assurance Standards Board) comes in: the body charged with issuing auditing and assurance standards and guidelines for use by all practitioners, through a standard-setting process that provides information in the public interest on the conduct of audits, including how to identify risks of material misstatement through an understanding of the entity and its environment. The investigation of this problem arose from accountants' need for supporting tools for practising the profession that adhere to our country's technical accounting standards; the main objective of this research is therefore to design internal control questionnaires for identifying risks of material misstatement in external audits of financial statements, applied by independent auditors in the municipality of San Salvador. During the field research, it was noted that most auditors use different techniques to identify risks of material misstatement, which indicates that it is indispensable to have a supporting tool that incorporates the understanding of the entity and its environment as well as its internal control, along with the application of policies for achieving its objectives; these aspects help to detect fraud and errors, without forgetting that inquiries of management are very useful. One of the obstacles in the interviews was the lack of information gathered on which techniques these auditors use to identify risks. Finally, it is concluded that technical knowledge of business risk is essential for auditors, that an understanding of the risks of material misstatement is fundamental when performing external audits of financial statements, and that the evaluation of internal control and business risks is important for identifying material misstatements. It is therefore recommended to establish audit planning based on an analysis of the entity and of its possible business risks, through analysis of its environment. It is indispensable that professionals who practise auditing have a technical basis that allows them to strategically identify those risks of material misstatement that adversely affect the achievement of the entity's objectives, without neglecting the study and evaluation of the audited entity's internal control, since these are considered key points for identifying risks.

Relevance:

20.00%

Publisher:

Abstract:

The idea of spacecraft formations, flying in tight configurations with maximum baselines of a few hundred meters in low-Earth orbits, has generated widespread interest over the last several years. Nevertheless, controlling the movement of spacecraft in formation poses difficulties, such as high in-orbit computing demands and collision avoidance requirements, which escalate as the number of units in the formation grows, as complicated nonlinear effects are imposed on the dynamics, and as uncertainty arises from imperfect knowledge of system parameters. These requirements have led to the need for reliable linear and nonlinear controllers in terms of relative and absolute dynamics. The objective of this thesis is, therefore, to introduce new control methods that allow spacecraft in formation, with circular/elliptical reference orbits, to efficiently execute safe autonomous manoeuvres. These controllers differ from the bulk of the literature in that they merge guidance laws never before applied to spacecraft formation flying with collision avoidance capabilities in a single control strategy. For this purpose, three control schemes are presented: linear optimal regulation, linear optimal estimation and adaptive nonlinear control. In general terms, the proposed control approaches command the dynamical performance of one or several followers with respect to a leader so as to asymptotically track a time-varying nominal trajectory (TVNT), while the threat of collision between the followers is reduced by repelling accelerations obtained from a collision avoidance scheme (CAS) during the periods of closest proximity. Linear optimal regulation is achieved through a Riccati-based tracking controller. Within this control strategy, the controller provides guidance and tracking toward a desired TVNT, optimizing fuel consumption via the Riccati procedure using a non-infinite cost function defined in terms of the desired TVNT, while repelling accelerations generated by the CAS ensure evasive actions between the elements of the formation. The relative dynamics model, suitable for circular and eccentric low-Earth reference orbits, is based on the Tschauner and Hempel equations, and includes a control input and a nonlinear term corresponding to the CAS repelling accelerations. Linear optimal estimation is built on the forward-in-time separation principle. This controller encompasses two stages: regulation and estimation. The first stage requires the design of a full state feedback controller using the state vector reconstructed by means of the estimator. The second stage requires the design of an additional dynamical system, the estimator, to obtain the states which cannot be measured, in order to approximately reconstruct the full state vector. The separation principle then states that an observer built for a known input can also be used to estimate the state of the system and to generate the control input. This allows the observer and the feedback to be designed independently, exploiting the advantages of linear quadratic regulator theory, in order to estimate the states of a dynamical system with model and sensor uncertainty. The relative dynamics is described with the linear system used in the previous controller, with a control input and nonlinearities entering via the repelling accelerations from the CAS during collision avoidance events. Moreover, sensor uncertainty is added to the control process by considering carrier-phase differential GPS (CDGPS) velocity measurement error.
An adaptive control law capable of delivering superior closed-loop performance when compared to certainty-equivalence (CE) adaptive controllers is finally presented. A novel noncertainty-equivalence controller based on the Immersion and Invariance paradigm for close-manoeuvring spacecraft formation flying in both circular and elliptical low-Earth reference orbits is introduced. The proposed control scheme achieves stabilization by immersing the plant dynamics into a target dynamical system (or manifold) that captures the desired dynamical behaviour. The key feature of this methodology is the addition of a new term to the classical certainty-equivalence control approach that, in conjunction with the parameter update law, is designed to achieve adaptive stabilization. This parameter has the ultimate task of shaping the manifold into which the adaptive system is immersed. The stability of the controller is proven via a Lyapunov-based analysis and Barbalat's lemma. In order to evaluate the design of the controllers, test cases based on the physical and orbital features of the Prototype Research Instruments and Space Mission Technology Advancement (PRISMA) mission are implemented, extending the number of elements in the formation into scenarios with reconfigurations and on-orbit position switching in elliptical low-Earth reference orbits. An extensive analysis and comparison of the performance of the controllers in terms of total Δv and fuel consumption, with and without the effects of the CAS, is presented. These results show that the three proposed controllers allow the followers to asymptotically track the desired nominal trajectory and, additionally, the simulations including the CAS show an effective decrease of collision risk during the manoeuvre.
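
As a flavour of the Riccati-based regulation stage, here is a minimal sketch in Python/SciPy, assuming the Hill/Clohessy-Wiltshire equations (the circular-orbit special case of the Tschauner-Hempel model used in the thesis) and an infinite-horizon LQR gain in place of the thesis's finite-horizon, time-varying Riccati tracking law; the mean motion, weight matrices, and the CAS acceleration hook are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hill/Clohessy-Wiltshire relative dynamics, state [x y z vx vy vz].
n = 0.0011  # mean motion [rad/s] for a ~500 km LEO (illustrative value)
A = np.zeros((6, 6))
A[:3, 3:] = np.eye(3)
A[3, 0], A[3, 4] = 3 * n**2, 2 * n
A[4, 3] = -2 * n
A[5, 2] = -n**2
B = np.vstack([np.zeros((3, 3)), np.eye(3)])

# LQR weights trading tracking accuracy against fuel use (assumed values).
Q = np.diag([1e2, 1e2, 1e2, 1.0, 1.0, 1.0])
R = 1e6 * np.eye(3)

P = solve_continuous_are(A, B, Q, R)   # algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)        # optimal feedback gain

def control(state, state_ref, accel_cas=np.zeros(3)):
    """Track the nominal trajectory; accel_cas adds the CAS repelling term."""
    return -K @ (state - state_ref) + accel_cas
```

The repelling CAS acceleration simply enters the loop additively during close-proximity events, mirroring how the nonlinear term is attached to the linear relative dynamics in the thesis.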

Relevance:

20.00%

Publisher:

Abstract:

While operations are performed on qubits, various errors can occur, altering the information they contain. Quantum Error Correction builds algorithms that tolerate these errors and protect the information being processed. This thesis focuses on 3-qubit codes, which can correct one bit-flip error or one phase-flip error. More precisely, within these algorithms, attention is placed on the encoding procedure, which aims to better protect the information contained in a qubit from errors, and on the syndrome measurement, which identifies on which qubit an error occurred without altering the state of the system. In addition, by exploiting the syndrome measurement procedure, the probability of a bit-flip and of a phase-flip error on a qubit was estimated using the IBM quantum experience.
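
A minimal NumPy sketch of the 3-qubit bit-flip code described above; the syndrome extraction is emulated classically from the state vector rather than with ancilla qubits and real measurements as on the IBM quantum experience, and the amplitudes are arbitrary.

```python
import numpy as np

# Encoding: |psi> = a|0> + b|1>  ->  a|000> + b|111>  (qubit 0 is the LSB).
a, b = 0.6, 0.8
state = np.zeros(8); state[0b000], state[0b111] = a, b

def flip(state, qubit):
    """Apply a bit-flip (X) error to one qubit of the 3-qubit register."""
    out = np.zeros_like(state)
    for basis in range(8):
        out[basis ^ (1 << qubit)] = state[basis]
    return out

def syndrome(state):
    """Parities Z0Z1 and Z1Z2. Both branches of a corrupted codeword share the
    same parities, so reading them off one basis state does not reveal (a, b)."""
    basis = int(np.flatnonzero(np.abs(state) > 1e-12)[0])
    b0, b1, b2 = basis & 1, (basis >> 1) & 1, (basis >> 2) & 1
    return (b0 ^ b1, b1 ^ b2)

# Inject an error on qubit 1 and correct it from the syndrome alone.
corrupted = flip(state, 1)
which = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome(corrupted)]
recovered = flip(corrupted, which) if which is not None else corrupted
assert np.allclose(recovered, state)
```

The phase-flip code works identically after a basis change (a Hadamard on every qubit), with Z errors taking the place of X errors.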

Relevance:

20.00%

Publisher:

Abstract:

In the last few years there has been great development of technologies like quantum computers and quantum communication systems, due to their huge potential and the growing number of applications. However, physical qubits experience many nonidealities, like measurement errors and decoherence, that generate failures in the quantum computation. This work shows how it is possible to exploit concepts from classical information theory in order to realize quantum error-correcting codes by adding some redundancy qubits. In particular, the threshold theorem states that it is possible to lower the percentage of decoding failures at will, provided the physical error rate is below a given accuracy threshold. The focus will be on codes belonging to the family of topological codes, like toric, planar and XZZX surface codes. Firstly, they will be compared from a theoretical point of view, in order to show their advantages and disadvantages. The algorithms behind the minimum-weight perfect matching decoder, the most popular for such codes, will be presented. The last section will be dedicated to the analysis of the performance of these topological codes under different error channel models, showing interesting results. In particular, while the error correction capability of surface codes decreases in the presence of biased errors, XZZX codes possess intrinsic symmetries that allow them to improve their performance when one kind of error occurs more frequently than the others.
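
To illustrate the decoder named above, here is a sketch of the minimum-weight perfect matching step in Python with networkx, reduced to one dimension (a repetition code) for brevity; real surface-code decoders build the matching graph from the full syndrome lattice and the error-channel priors, and the example error pattern is arbitrary.

```python
import itertools
import networkx as nx

def mwpm_decode(defects, n):
    """Minimum-weight perfect matching over syndrome defects of a length-n
    repetition code. Each defect either pairs with another defect (weight =
    chain length between them) or exits through the nearest code boundary via
    its own virtual boundary node; unused boundary nodes pair off for free."""
    G = nx.Graph()
    for u, v in itertools.combinations(defects, 2):
        G.add_edge(('d', u), ('d', v), weight=abs(u - v))
    for u in defects:
        G.add_edge(('d', u), ('b', u), weight=min(u + 1, n - 1 - u))
    for u, v in itertools.combinations(defects, 2):
        G.add_edge(('b', u), ('b', v), weight=0)
    return nx.min_weight_matching(G)

# Three adjacent bit-flips (bits 3-5) on a length-9 code light up the parity
# checks at the ends of the error chain; the decoder pairs them back up.
print(mwpm_decode([2, 5], 9))
```

On the 2D surface codes compared in the thesis, the same matching runs on defects of the plaquette and vertex stabilizers separately, with edge weights derived from the assumed error channel.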

Relevance:

10.00%

Publisher:

Abstract:

One of the great challenges of the scientific community working on theories of genetic information, genetic communication and genetic coding is to determine a mathematical structure related to DNA sequences. In this paper we propose a model of an intra-cellular transmission system of genetic information, similar to a model of a power- and bandwidth-efficient digital communication system, in order to identify a mathematical structure in biologically relevant DNA sequences. The model of a transmission system of genetic information is concerned with the identification, reproduction and mathematical classification of the nucleotide sequence of single-stranded DNA by the genetic encoder. Hence, a genetic encoder is devised in which labelings and cyclic codes are established. Establishing the algebraic structure of the corresponding code alphabets, mappings, labelings, primitive polynomials p(x) and code generator polynomials g(x) is quite important in characterizing error-correcting code subclasses of G-linear codes. These latter codes are useful for the identification, reproduction and mathematical classification of DNA sequences. The characterization of this model may contribute to the development of a methodology that can be applied in mutational analysis and polymorphisms, production of new drugs and genetic improvement, among other things, resulting in the reduction of time and laboratory costs.
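
A toy sketch in Python of the codeword test implied above: nucleotides are mapped onto bit pairs by one of the possible labelings (this particular assignment is an assumption, as are the generator polynomial and the example sequence), and a sequence is accepted by a binary cyclic code exactly when g(x) divides its polynomial over GF(2).

```python
# Hypothetical labeling of nucleotides onto bit pairs; the paper studies which
# labelings make biologically relevant sequences fall into G-linear codes.
LABEL = {'A': (0, 0), 'C': (0, 1), 'G': (1, 0), 'T': (1, 1)}

def to_bits(seq):
    """Binary polynomial coefficients (highest degree first) for a DNA string."""
    return [b for nt in seq for b in LABEL[nt]]

def gf2_mod(c, g):
    """Remainder of c(x) / g(x) over GF(2), coefficients highest degree first."""
    c = list(c)
    for i in range(len(c) - len(g) + 1):
        if c[i]:
            for j, gj in enumerate(g):
                c[i + j] ^= gj
    return c[len(c) - len(g) + 1:]

g = [1, 0, 1, 1]                  # g(x) = x^3 + x + 1, a primitive polynomial
rem = gf2_mod(to_bits("ACGTACG"), g)
print("codeword" if not any(rem) else "not a codeword of this cyclic code")
```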

Relevance:

10.00%

Publisher:

Abstract:

Negative-ion mode electrospray ionization, ESI(-), with Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) was coupled to Partial Least Squares (PLS) regression and variable selection methods to estimate the total acid number (TAN) of Brazilian crude oil samples. Typically, ESI(-)-FT-ICR mass spectra exhibit a resolving power of ca. 500,000 and a mass accuracy better than 1 ppm, producing a data matrix containing over 5700 variables per sample. These variables correspond to heteroatom-containing species detected as deprotonated molecules, [M - H](-) ions, identified primarily as naphthenic acids, phenols and carbazole analog species. The TAN values for all samples ranged from 0.06 to 3.61 mg of KOH g(-1). To facilitate the spectral interpretation, three variable selection methods were studied: variable importance in the projection (VIP), interval partial least squares (iPLS) and elimination of uninformative variables (UVE). The UVE method proved the most appropriate for selecting important variables, reducing the number of variables to 183 and producing a root mean square error of prediction of 0.32 mg of KOH g(-1). By reducing the size of the data, it was possible to relate the selected variables to their corresponding molecular formulas, thus identifying the main chemical species responsible for the TAN values.
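
Of the three selection methods, VIP is the simplest to show. Below is a sketch in Python/scikit-learn, assuming a synthetic stand-in for the FT-ICR data matrix (the real spectra, TAN values, and the choice of 5 latent variables are not reproduced here); UVE, which the study found most suitable, follows the same fit-then-filter pattern with a different relevance statistic.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls):
    """Variable Importance in the Projection for a fitted single-y PLSRegression."""
    T = pls.x_scores_            # (n_samples, A) latent scores
    W = pls.x_weights_           # (n_features, A) X weights
    Q = pls.y_loadings_          # (1, A) y loadings
    p, A = W.shape
    ssy = np.array([(Q[0, a] ** 2) * (T[:, a] @ T[:, a]) for a in range(A)])
    wnorm2 = (W / np.linalg.norm(W, axis=0)) ** 2
    return np.sqrt(p * (wnorm2 @ ssy) / ssy.sum())

# Toy stand-in for the FT-ICR matrix: 40 samples x 5700 m/z variables, with a
# synthetic TAN response driven by two of the variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5700))
tan = X[:, 10] - 0.5 * X[:, 200] + 0.1 * rng.normal(size=40)

pls = PLSRegression(n_components=5).fit(X, tan)
vip = vip_scores(pls)
keep = np.flatnonzero(vip > 1.0)        # common VIP retention threshold
print(len(keep), "variables retained; top hits:", np.argsort(vip)[-5:])
```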

Relevance:

10.00%

Publisher:

Abstract:

The reconstruction of the external ear to correct congenital deformities or repair following trauma remains a significant challenge in reconstructive surgery. Previously, we have developed a novel approach to create scaffold-free, tissue-engineered elastic cartilage constructs directly from a small population of donor cells. Although the developed constructs adopted the structural appearance of native auricular cartilage, they displayed limited expression and poor localization of elastin. In the present study, the effect of growth factor supplementation (insulin, IGF-1, or TGF-β1) was investigated to stimulate elastogenesis as well as to improve overall tissue formation. Using rabbit auricular chondrocytes, bioreactor-cultivated constructs supplemented with either insulin or IGF-1 displayed increased deposition of cartilaginous ECM, improved mechanical properties, and thicknesses comparable to native auricular cartilage after 4 weeks of growth. Similarly, growth factor supplementation resulted in increased expression and improved localization of elastin, primarily restricted within the cartilaginous region of the tissue construct. Additional studies were conducted to determine whether scaffold-free engineered auricular cartilage constructs could be developed in the 3D shape of the external ear. Isolated auricular chondrocytes were grown in rapid-prototyped tissue culture molds with additional insulin or IGF-1 supplementation during bioreactor cultivation. Using this approach, the developed tissue constructs were flexible and had a 3D shape in very good agreement with the culture mold (average error <400 µm). While scaffold-free engineered auricular cartilage constructs can be created with both the appropriate tissue structure and the 3D shape of the external ear, future studies will be aimed at assessing potential changes in construct shape and properties after subcutaneous implantation.

Relevance:

10.00%

Publisher:

Abstract:

Atomic charge transfer-counter polarization effects determine most of the infrared fundamental CH intensities of simple hydrocarbons, methane, ethylene, ethane, propyne, cyclopropane and allene. The quantum theory of atoms in molecules/charge-charge flux-dipole flux model predicted the values of 30 CH intensities ranging from 0 to 123 km mol(-1) with a root mean square (rms) error of only 4.2 km mol(-1) without including a specific equilibrium atomic charge term. Sums of the contributions from terms involving charge flux and/or dipole flux averaged 20.3 km mol(-1), about ten times larger than the average charge contribution of 2.0 km mol(-1). The only notable exceptions are the CH stretching and bending intensities of acetylene and two of the propyne vibrations for hydrogens bound to sp hybridized carbon atoms. Calculations were carried out at four quantum levels, MP2/6-311++G(3d,3p), MP2/cc-pVTZ, QCISD/6-311++G(3d,3p) and QCISD/cc-pVTZ. The results calculated at the QCISD level are the most accurate among the four with root mean square errors of 4.7 and 5.0 km mol(-1) for the 6-311++G(3d,3p) and cc-pVTZ basis sets. These values are close to the estimated aggregate experimental error of the hydrocarbon intensities, 4.0 km mol(-1). The atomic charge transfer-counter polarization effect is much larger than the charge effect for the results of all four quantum levels. Charge transfer-counter polarization effects are expected to also be important in vibrations of more polar molecules for which equilibrium charge contributions can be large.
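
For reference, the quoted rms errors compare calculated and experimental intensities over the N = 30 CH bands in the standard way:

$$\mathrm{rms} = \left[ \frac{1}{N} \sum_{i=1}^{N} \left( I_i^{\mathrm{calc}} - I_i^{\mathrm{exp}} \right)^{2} \right]^{1/2},$$

so the reported 4.2 km mol(-1) sits at the level of the ~4.0 km mol(-1) aggregate experimental uncertainty of the intensities themselves.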

Relevance:

10.00%

Publisher:

Abstract:

In this study, transmission-line modeling (TLM) applied to bio-thermal problems was improved by incorporating several novel computational techniques, including graded meshes, which made the computation 9 times faster and used only a fraction (16%) of the computational resources required by regular meshes when analyzing heat flow through heterogeneous media. Graded meshes, unlike regular meshes, allow heat sources to be modeled in all segments of the mesh. A new boundary condition that accounts for thermal properties, and thus yields a more realistic modeling of complex problems, is introduced, as is a new way of calculating an error parameter. The calculated temperatures between nodes were compared against results obtained from the literature and agreed to within less than 1%. It is reasonable, therefore, to conclude that the improved TLM model described herein has great potential in heat transfer analysis of biological systems.
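
To illustrate why graded meshes pay off in this setting, here is a sketch in Python/NumPy of the same bio-heat (Pennes) physics on a geometrically graded 1D mesh, using an explicit finite-difference scheme as a stand-in for the paper's TLM formulation; the tissue properties, grading ratio, and heating scenario are all assumed values.

```python
import numpy as np

# Pennes bio-heat conduction on a graded 1-D mesh: fine cells near the heated
# surface where gradients are steep, coarse cells at depth.
k, rho_c = 0.5, 3.7e6        # conductivity [W/m K], volumetric heat capacity
w_perf = 2400.0              # perfusion term w_b*rho_b*c_b [W/m^3 K]
T_art, T_skin = 37.0, 45.0   # arterial and heated-surface temperatures [C]

h = 1.1 ** np.arange(39)                                    # growing spacings
x = np.concatenate(([0.0], np.cumsum(h))) * 0.05 / h.sum()  # 5 cm domain
dx = np.diff(x); hl, hr = dx[:-1], dx[1:]

T = np.full(x.size, T_art); T[0] = T_skin
dt = 0.02                                 # satisfies the explicit stability limit
for _ in range(int(60 / dt)):             # one minute of surface heating
    # Second derivative on a non-uniform grid (3-point stencil).
    lap = 2 * (T[2:] / (hr * (hl + hr)) - T[1:-1] / (hl * hr)
               + T[:-2] / (hl * (hl + hr)))
    T[1:-1] += dt / rho_c * (k * lap + w_perf * (T_art - T[1:-1]))
    T[0], T[-1] = T_skin, T_art           # Dirichlet boundaries
print(f"temperature 5 mm deep after 60 s: {np.interp(0.005, x, T):.2f} C")
```

The graded mesh concentrates its 40 nodes where the thermal gradient lives; a regular mesh with the same near-surface resolution would need several times as many nodes, which is the resource saving the study quantifies for its TLM scheme.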