942 results for Refined Perfect Diagonalization Procedure
Abstract:
The Data Envelopment Analysis (DEA) efficiency score obtained for an individual firm is a point estimate without any confidence interval around it. In recent years, researchers have resorted to bootstrapping in order to generate empirical distributions of efficiency scores. This procedure assumes that all firms have the same probability of getting an efficiency score from any specified interval within the [0,1] range. We propose a bootstrap procedure that empirically generates the conditional distribution of efficiency for each individual firm given systematic factors that influence its efficiency. Instead of resampling directly from the pooled DEA scores, we first regress these scores on a set of explanatory variables not included at the DEA stage and bootstrap the residuals from this regression. These pseudo-efficiency scores incorporate the systematic effects of unit-specific factors along with the contribution of the randomly drawn residual. Data from the U.S. airline industry are utilized in an empirical application.
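A minimal sketch of this two-stage conditional bootstrap in Python; the linear second-stage regression, the clipping of pseudo-scores to [0, 1], and all variable names are illustrative assumptions rather than details taken from the paper.

import numpy as np

def conditional_bootstrap(dea_scores, X, n_boot=1000, rng=None):
    """Two-stage bootstrap: regress DEA scores on explanatory
    variables, then resample the regression residuals, so each
    firm's pseudo-scores are conditioned on its own covariates.

    dea_scores : (n,) array of first-stage DEA efficiency scores
    X          : (n, k) array of explanatory variables NOT used
                 at the DEA stage
    Returns    : (n_boot, n) array of pseudo-efficiency scores
    """
    rng = np.random.default_rng(rng)
    n = len(dea_scores)
    Z = np.column_stack([np.ones(n), X])         # add intercept
    beta, *_ = np.linalg.lstsq(Z, dea_scores, rcond=None)
    fitted = Z @ beta                            # systematic (firm-specific) part
    resid = dea_scores - fitted                  # idiosyncratic part
    draws = rng.choice(resid, size=(n_boot, n), replace=True)
    pseudo = fitted + draws                      # conditional pseudo-scores
    return np.clip(pseudo, 0.0, 1.0)             # keep scores in [0, 1]

Each row of the output is one bootstrap replication; the distribution of column j is the empirical conditional distribution of firm j's efficiency given its covariates.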
Abstract:
Herbicides are used to control the growth of weeds along highways, power lines, and many other urban locations. Exposure to herbicides has been linked to adverse health outcomes. This study was initiated to pretest for the presence of herbicides in multiple water sources near intersections in a corridor in northwest Harris County (specifically the Highway 6/FM 1960, North Freeway 45, US 290, and S 99 corridor). Roadside water and tap water samples were collected and analyzed for herbicides using the established Environmental Protection Agency (EPA) Method 515.4: "Determination of Chlorinated Acids in Drinking Water by Liquid-Liquid Micro-extraction, Derivatization, and Fast Gas Chromatography with Electron Capture Detection." A standard operating procedure (adapted from US EPA Method 515.4) was developed for subsequent, larger studies of the environmental fate of herbicides and non-occupational exposure risks. Preliminary testing of 16 water samples was performed to screen for trace herbicides; all concentrations greater than the minimum reporting limit of each analyte are reported with 99 percent confidence. This study failed to find concentrations above the method's limits of detection in any of the samples collected on June 15, 2008. However, this does not indicate that the waters around northwest Harris County are free of herbicides and metabolites. Larger, repeated sampling in the region would be necessary to support that claim.
Abstract:
Although the processes involved in rational patient targeting may be obvious for certain services, for others both the appropriate sub-populations to receive services and the procedures to be used for their identification may be unclear. This project was designed to address several research questions that arise in the attempt to deliver appropriate services to specific populations. The related difficulties are particularly evident for interventions about which findings regarding effectiveness conflict. When an intervention clearly is not beneficial (or is dangerous) to a large, diverse population, consensus on withholding the intervention from dissemination is easily reached. When findings are ambiguous, however, conclusions may be impossible. When the characteristics of patients likely to benefit from an intervention are not obvious, and the intervention is not significantly invasive or dangerous, the strategy proposed herein may be used to identify specific characteristics of sub-populations that may benefit from the intervention. Identifying these populations can both inform decisions about distribution of the intervention and support implementation planning by pinpointing specific target populations for service delivery. This project explores a method for identifying such sub-populations through the use of related datasets generated from clinical trials conducted to test the effectiveness of an intervention. The method is specified in detail and tested using the example intervention of case management for outpatient treatment of populations with chronic mental illness. These analyses were applied to identify characteristics that distinguish specific sub-populations more likely to benefit from case management services, despite the conflicting findings regarding its effectiveness for the aggregate population reported in the related research. However, in addition to a limited set of characteristics associated with benefit, the findings generated a larger set of characteristics of patients likely to experience greater improvement without the intervention.
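One common way to operationalize this kind of sub-population search on clinical trial data is a regression with treatment-by-covariate interaction terms. The Python/statsmodels sketch below is an illustrative assumption about the analysis, not the dissertation's actual method; the column names (improvement, treated, age, baseline_severity) are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

def find_benefit_subgroups(df: pd.DataFrame) -> dict:
    """df: hypothetical trial data, one row per patient, with an
    outcome ('improvement'), a 0/1 treatment indicator ('treated'),
    and baseline characteristics ('age', 'baseline_severity').
    A significant positive treatment-by-covariate interaction flags
    a characteristic marking patients who benefit more than the
    aggregate population does."""
    model = smf.ols(
        "improvement ~ treated * (age + baseline_severity)", data=df
    ).fit()
    # Keep only the interaction terms with their coefficients and p-values.
    return {k: (model.params[k], model.pvalues[k])
            for k in model.params.index if "treated:" in k}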
Abstract:
While reported prevalence rates of troubled employees vary considerably, even conservative estimates indicate a major public health problem. For example, alcohol- and drug-related problems alone cost U.S. industry more than 45 billion dollars annually. Of the alternatives available to deal with these problems, e.g., dismissal or disciplinary action, the most viable and cost-effective are employee assistance programs (EAPs), designed to provide professional assistance to employees experiencing alcohol, drug, emotional, or personal crises. The principal component of an EAP is assessment and referral (A/R), and this study was developed to determine which EAP client intake variables are the most efficacious predictors of assessment and referral procedures. Although specific client intake variables were statistically significant, the discriminant classification analysis was demonstrably inadequate. Nevertheless, identifying the A/R procedure phases that were not efficacious, as well as the EAP client populations for whom services were not effective, was extremely valuable. Identifying the less efficacious components of the A/R process provided an opportunity to recommend alternatives to current program procedures and practices, which may improve program effectiveness.
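The discriminant classification step can be illustrated with a small sketch. The Python/scikit-learn code below is a hedged reconstruction of the general technique, not the dissertation's analysis; the inputs are hypothetical intake variables.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def classify_referrals(X: np.ndarray, y: np.ndarray) -> float:
    """X: (n, k) client intake variables (hypothetical features such
    as age, job tenure, or presenting-problem codes); y: (n,) observed
    referral outcomes. Returns cross-validated classification accuracy,
    the kind of figure by which the adequacy of the discriminant
    analysis would be judged."""
    lda = LinearDiscriminantAnalysis()
    scores = cross_val_score(lda, X, y, cv=5)
    return scores.mean()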
Abstract:
Background: Poor communication among health care providers is cited as the most common cause of sentinel events involving patients. Sign-out of patient data at the change of clinician shifts is a component of communication that is especially vulnerable to errors. Sign-outs are particularly extensive and complex in intensive care units (ICUs). There is a paucity of validated tools to assess ICU sign-outs.
Objective: To design a valid and reliable survey tool to assess the perceptions of Pediatric ICU (PICU) clinicians about sign-out.
Design: Cross-sectional, web-based survey.
Setting: Academic hospital, 31-bed PICU.
Subjects: Attending faculty, fellows, nurse practitioners, and physician assistants.
Interventions: A survey was designed with input from a focus group and administered to PICU clinicians. Test-retest reliability, internal consistency, and validity of the survey tool were assessed.
Measurements and Main Results: Forty-eight PICU clinicians agreed to participate. We had 42 (88%) and 40 (83%) responses in the test and retest phases. The mean scores for the ten survey items ranged from 2.79 to 3.67 on a five-point Likert scale, with no significant test-retest difference and a Pearson correlation of 0.65 between pre and post answers. The survey item scores showed internal consistency, with a Cronbach's alpha of 0.85. Exploratory factor analysis revealed three constructs: efficacy of the sign-out process, recipient satisfaction, and content applicability. Seventy-eight percent of clinicians affirmed the need to improve the sign-out process, and 83% confirmed the need for face-to-face verbal sign-out. A system-based sign-out format was favored by fellows and advanced-level practitioners, while attendings preferred a problem-based format (p = 0.003).
Conclusions: We developed a valid and reliable survey to assess clinician perceptions of the ICU sign-out process. These results can be used to design a verbal template to improve and standardize the sign-out process.
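A minimal sketch of the two reliability statistics reported above (Cronbach's alpha and the test-retest Pearson correlation), in Python with NumPy; the array shapes are assumptions for illustration.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of Likert scores.
    Cronbach's alpha = k/(k-1) * (1 - sum of item variances /
    variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def test_retest_r(test: np.ndarray, retest: np.ndarray) -> float:
    """Pearson correlation between per-item mean scores in the test
    and retest administrations of the survey."""
    return np.corrcoef(test, retest)[0, 1]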
Abstract:
Affiliation: Malbrán, María del Carmen. Universidad Nacional de La Plata. Facultad de Humanidades y Ciencias de la Educación; Argentina.
Abstract:
Many of the material models most frequently used for the numerical simulation of the behavior of concrete subjected to high strain rates were originally developed for the simulation of ballistic impact. They are therefore plasticity-based models in which the compressive behavior is modeled in a complex way, while their tensile failure criterion is of a rather simpler nature. Since concrete elements subjected to blast loading usually fail in tension, the available concrete material models for high strain rates may not accurately represent their real behavior. In this research work, an experimental program of reinforced concrete flat elements subjected to blast load is presented. Altogether, four detonation tests were conducted, in which 12 slabs of two different concrete types were subjected to the same blast load. The results of the experimental program are then used for the development and adjustment of the numerical tools needed to model concrete elements subjected to blast.
Abstract:
The project "LMS adaptive filtering applications to improve the response of accelerometers" was carried out with the objective of removing unwanted components from the information signal produced by accelerometers in automotive applications, using LMS adaptive filter algorithms. The work comprised three stages, executed from the first to the last day of the project. In the first stage, we designed low-pass, high-pass, band-pass, and band-stop filters of the Butterworth, Chebyshev (types I and II), and elliptic families. The purpose of this first part was to become familiar with (or, in our case, re-familiarized with) the Matlab environment and its pre-built filter design functions, and to learn the characteristics of these filters, before later testing them on the DSP. In the second stage, we focused on the design of our LMS adaptive filter, experimenting first in Matlab in order to understand its behavior. Once this was clear, we loaded the code onto the DSP, compiling and debugging it with VisualDSP. During this stage the system inputs were excited with signals from Cool Edit Pro and, to observe how the LMS adaptive filter behaved, with signals from a function generator, which allowed us to introduce a phase offset between the two input signals; Cool Edit Pro was also used to generate phase-shifted signals, but since that software could not be used in the third stage, these tests were done with the function generator. Finally, in the third stage, after verifying the desired behavior of our adaptive filter on the DSP with simulated input signals, we moved to a laboratory, where signals from a 4000A accelerometer were used together with the function generator; the latter provided our reference signal, which enables the elimination of one of the frequencies emitted by the accelerometer. The LMS adaptive filter behaved adequately and as expected. In tests with phase-shifted input signals we observed a curious response at the system output: the larger the phase offset between the signals, the more noticeable the frequency to be eliminated remained; this was solved by increasing the filter order. We conclude that, although the classical digital filters tested in the first stage are useful, obtaining a response as close to ideal as possible requires a very high filter order, so that frequencies near the cutoff are not attenuated. In contrast, with LMS adaptive filters, if we want to remove one signal out of three, it suffices to feed the frequency to be eliminated into one of the filter inputs, namely the reference signal; in this way one signal can be removed while the other two remain unaffected by the procedure.
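A minimal sketch of the LMS noise canceller described above, in NumPy rather than the project's Matlab/DSP code; the filter order, step size, and signal frequencies are illustrative assumptions.

import numpy as np

def lms_cancel(d, x_ref, order=32, mu=0.01):
    """Adaptive noise cancellation with the LMS algorithm.
    d     : primary input (accelerometer signal + interference)
    x_ref : reference input correlated with the interference only
    Returns the error signal e, i.e. the cleaned accelerometer signal.
    """
    w = np.zeros(order)                  # adaptive filter weights
    e = np.zeros(len(d))
    for n in range(order, len(d)):
        x = x_ref[n - order:n][::-1]     # most recent reference samples first
        y = w @ x                        # filter output: interference estimate
        e[n] = d[n] - y                  # error = primary minus estimate
        w += 2 * mu * e[n] * x           # LMS weight update
    return e

# Usage: remove a phase-shifted 50 Hz interference from a 5 Hz tone,
# with the reference playing the role of the function-generator signal.
fs = 1000
t = np.arange(0, 2, 1 / fs)
signal = np.sin(2 * np.pi * 5 * t)
interference = 0.5 * np.sin(2 * np.pi * 50 * t + 0.3)
reference = np.sin(2 * np.pi * 50 * t)
clean = lms_cancel(signal + interference, reference)

Because the filter adapts its weights, it can absorb the phase offset between the reference and the interference, which is why increasing the order helps when the offset grows, as the abstract notes.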
Abstract:
Nowadays, Computational Fluid Dynamics (CFD) solvers are widely used within industry to model fluid flow phenomena. Several fluid flow model equations have been employed over the last decades to simulate and predict the forces acting, for example, on different aircraft configurations. Computational time and accuracy depend strongly on the fluid flow model equation and the spatial dimension of the problem considered. While simple models based on perfect flows, such as panel methods or potential flow models, can be very fast to solve, they usually suffer from poor accuracy when simulating real (transonic, viscous) flows. On the other hand, more complex models such as the full Navier-Stokes equations provide high-fidelity predictions, but at a much higher computational cost. Thus, a good compromise between accuracy and computational time has to be fixed for engineering applications. A discretization technique widely used within industry is the so-called Finite Volume approach on unstructured meshes. This technique spatially discretizes the flow motion equations onto a set of elements which form a mesh, a discrete representation of the continuous domain. Using this approach, for a given flow model equation, the accuracy and computational time depend mainly on the distribution of the nodes forming the mesh. Therefore, a good compromise between accuracy and computational time might be obtained by carefully defining the mesh. However, defining an optimal mesh for complex flows and geometries requires a very high level of expertise in fluid mechanics and numerical analysis, and in most cases simply guessing which regions of the computational domain most affect the accuracy is impossible. Thus, it is desirable to have an automated remeshing tool, which is more flexible with unstructured meshes than with their structured counterpart. However, the adaptive methods currently in use still leave an open question: how to drive the adaptation efficiently? Pioneering sensors based on flow features generally suffer from a lack of reliability, so over the last decade more effort has been put into developing numerical error-based sensors, such as adjoint-based adaptation sensors. While very efficient at adapting meshes for a given functional output, the latter method is very expensive, as it requires solving a dual (adjoint) set of equations and computing the sensor on an embedded mesh. Therefore, it would be desirable to develop a more affordable numerical error estimation method. The current work aims at estimating the truncation error, which arises when discretizing a partial differential equation: it consists of the higher-order terms neglected in the construction of the numerical scheme. The truncation error provides very useful information, as it is strongly related to the flow model equation and its discretization. On the one hand, it is a very reliable measure of the quality of the mesh, and therefore very useful for driving a mesh adaptation procedure. On the other hand, it is strongly linked to the flow model equation, so that a careful estimation actually gives information on how well a given equation is solved, which may be useful in the context of τ-extrapolation or zonal modelling. The work is organized as follows. Chap. 1 contains a short review of mesh adaptation techniques as well as numerical error prediction. In the first section, Sec. 1.1, the basic refinement strategies are reviewed and the main contributions to structured and unstructured mesh adaptation are presented. Sec. 1.2 introduces the definitions of the errors encountered when solving Computational Fluid Dynamics problems and reviews the most common approaches to predict them. Chap. 2 is devoted to the mathematical formulation of truncation error estimation in the context of the finite volume methodology, as well as a complete verification procedure. Several features are studied, such as the influence of grid non-uniformities, non-linearity, boundary conditions, and non-converged numerical solutions. This verification part has been submitted and accepted for publication in the Journal of Computational Physics. Chap. 3 presents a mesh adaptation algorithm based on truncation error estimates and compares the results to a feature-based and an adjoint-based sensor (in collaboration with Jorge Ponsín, INTA). Two- and three-dimensional cases relevant for validation in the aeronautical industry are considered. This part has been submitted and accepted in the AIAA Journal. An extension to the Reynolds-Averaged Navier-Stokes equations is also included, where τ-estimation-based mesh adaptation and τ-extrapolation are applied to viscous wing profiles. The latter has been submitted to the Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering. Keywords: mesh adaptation, numerical error prediction, finite volume
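A minimal sketch of the τ-estimation idea on a 1D model problem, assuming a second-order finite-difference Poisson discretization as a stand-in for the thesis's finite-volume setting: the estimate restricts a fine-grid solution onto a coarse grid and evaluates the coarse-grid residual there, where the exact solution would expose the true truncation error.

import numpy as np

def poisson_solve(f, h):
    """Solve -u'' = f on (0,1), u(0)=u(1)=0, with second-order
    central differences on n interior nodes of spacing h."""
    n = len(f)
    A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return np.linalg.solve(A, f)

def residual(u, f, h):
    """Coarse operator applied to a given u, minus f: this is the
    truncation error estimate when u is a restricted fine solution."""
    u_pad = np.concatenate([[0.0], u, [0.0]])    # homogeneous BCs
    return (-u_pad[:-2] + 2 * u_pad[1:-1] - u_pad[2:]) / h**2 - f

N = 32                                           # coarse interior nodes
h_c = 1.0 / (N + 1)
x_c = np.linspace(h_c, 1 - h_c, N)
h_f = h_c / 2                                    # nested fine grid
x_f = np.linspace(h_f, 1 - h_f, 2 * N + 1)
f = lambda x: np.pi**2 * np.sin(np.pi * x)       # exact u = sin(pi x)

u_f = poisson_solve(f(x_f), h_f)                 # fine-grid solution
u_f_on_c = u_f[1::2]                             # restrict: every other fine node
tau_est = residual(u_f_on_c, f(x_c), h_c)        # tau-estimation
tau_true = residual(np.sin(np.pi * x_c), f(x_c), h_c)  # exact truncation error

# For a second-order scheme and grid ratio 2, this estimate recovers
# roughly (1 - 1/2^2) = 3/4 of the coarse-grid truncation error, so a
# standard correction divides by 0.75 before comparing.
print(np.abs(tau_est / 0.75 - tau_true).max(), np.abs(tau_true).max())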
Abstract:
We establish a refined version of the Second Law of Thermodynamics for Langevin stochastic processes describing mesoscopic systems driven by conservative or non-conservative forces and interacting with thermal noise. The refinement is based on Monge-Kantorovich optimal mass transport and becomes relevant for processes far from the quasi-stationary regime. The general discussion is illustrated by a numerical analysis of the optimal memory-erasure protocol for a model of a micron-sized particle manipulated by optical tweezers.
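For orientation, one common form of such a transport-based refinement, for overdamped Langevin dynamics with mobility μ, a protocol of duration T, and initial and final densities ρ₀ and ρ_T, bounds the mean work by the squared Wasserstein-2 distance; the precise constants and setting in the paper may differ, so the display below is a hedged sketch rather than the paper's statement:

\[
\langle W \rangle \;\ge\; \Delta F \;+\; \frac{W_2^2(\rho_0,\rho_T)}{\mu\,T},
\qquad
W_2^2(\rho_0,\rho_T) \;=\; \inf_{\pi \in \Pi(\rho_0,\rho_T)} \int \lVert x-y \rVert^2 \, \mathrm{d}\pi(x,y),
\]

with equality attained by the optimal (Monge-Kantorovich) transport protocol. The ordinary Second Law is recovered as the transport term vanishes in the quasi-stationary limit T → ∞, which is why the refinement matters for fast processes.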
Abstract:
The perfect drain for the Maxwell fish eye (MFE) is a non-magnetic dissipative region placed at the focal point to absorb all the incident radiation without reflection or scattering.