934 results for Refusal to Treat
Abstract:
In this paper, an unstructured Chimera mesh method is used to compute incompressible flow around a rotating body. To implement the pressure correction algorithm on unstructured overlapping sub-grids, a novel interpolation scheme for pressure correction is proposed. This indirect interpolation scheme ensures a tight coupling of pressure between sub-domains. A moving-mesh finite volume approach is used to treat the rotating sub-domain, and the governing equations are formulated in an inertial reference frame. Since the mesh surrounding the rotating body undergoes only solid-body rotation and the background mesh remains stationary, no mesh deformation is encountered in the computation. As a benefit of using an inertial frame, no tensorial transformation of the velocity is needed. Three numerical simulations are successfully performed: flow over a fixed circular cylinder, flow over a rotating circular cylinder, and flow over a rotating elliptic cylinder. These examples demonstrate the capability of the current scheme in handling moving boundaries. The numerical results are in good agreement with experimental and computational data in the literature. (C) 2007 Elsevier Ltd. All rights reserved.
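Since the rotating sub-grid undergoes only solid-body rotation, each time step can update the node coordinates with a rigid rotation about the body axis. A minimal sketch of this idea in Python (illustrative only; the function names are hypothetical, not the paper's code):

```python
import numpy as np

def rotate_subgrid(nodes0, omega, t, center=(0.0, 0.0)):
    """Rigid-body rotation of 2-D mesh nodes about 'center': the mesh
    never deforms, so its quality is preserved exactly.
    nodes0: initial coordinates (N x 2); omega: angular velocity [rad/s]."""
    theta = omega * t
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s],
                  [s,  c]])              # 2-D rotation matrix
    ctr = np.asarray(center, dtype=float)
    return (nodes0 - ctr) @ R.T + ctr    # rotate every node rigidly

# Example: a quarter turn maps (1, 0) to (0, 1)
nodes = np.array([[1.0, 0.0], [0.0, 1.0]])
print(rotate_subgrid(nodes, omega=np.pi / 2, t=1.0))
```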
Abstract:
The article addresses the legal status of the human being at the embryonic stage in the draft of the new Civil and Commercial Code presented in 2012. It starts from the premise that the legal order does not create personhood but rather recognizes it in the human being by the mere fact of being one, and it analyzes the Vélez Sársfield Civil Code in its various interpretations. It concludes that, in light of the National Constitution and the rest of the positive law in force, the human being at the embryonic or fetal stage enjoys, from the very moment of conception, all the rights recognized and guaranteed by the legal order. Within this framework, Article 19 of the draft is analyzed, and it is observed that it establishes an arbitrary and unjust discrimination by setting two distinct moments for the beginning of a person's existence. Moreover, the wording leaves the human being fertilized and conceived outside the womb, prior to implantation, in a state of legal indeterminacy. The issue is also analyzed in relation to the proposed article concerning the human body (Art. 17). The danger of commodification and the affront to human dignity entailed by the proposal are considered.
Abstract:
Cognitive enhancement, i.e., the use of medicine and technology to enhance cognition for non-therapeutic purposes, is one of the topics of interest to neurobioethics, the branch of neuroethics that studies neuroscientific practice from the standpoint of traditional bioethics. Such enhancement could be achieved through the off-label use of drugs approved to treat pathologies, through external brain stimulation, or through brain implants. The use of these interventions, widespread among students and professors at major universities, has attracted great academic interest, especially in northern Europe and the United States, and among posthumanists, who consider them fundamental to advancing up the evolutionary scale.
Abstract:
Ventricular fibrillation (VF) is the first recorded rhythm in 40% of sudden deaths from out-of-hospital cardiac arrest (OHCA). The only effective treatment for VF is defibrillation by means of an electric shock. Outside the hospital, the shock is delivered by an automated external defibrillator (AED), which first analyzes the patient's electrocardiogram (ECG) and checks whether it presents a shockable rhythm. Survival in an OHCA case depends fundamentally on two factors: early defibrillation and early cardiopulmonary resuscitation (CPR), which prolongs VF and therefore the window of opportunity for defibrillation. Correct rhythm analysis requires interrupting CPR, because the chest compressions introduce artifacts into the ECG. Unfortunately, interrupting CPR adversely affects defibrillation success. In 2003, AED use was approved for patients between 1 and 8 years of age. AEDs, originally designed for adult patients, must discriminate pediatric arrhythmias accurately for their use in children to be safe. Several AEDs have been adapted for pediatric use, either by demonstrating the accuracy of the adult algorithms on pediatric arrhythmias or by means of algorithms specific to pediatric arrhythmias. This thesis presents a new AED algorithm designed jointly for adult and pediatric patients. The algorithm has been tested exhaustively on databases meeting the requirements of the American Heart Association (AHA) and on resuscitation records with and without CPR artifact. The work began with a long experimental phase in which a total of 1090 pediatric rhythms were retrospectively collected and classified. In addition, an adult arrhythmia database was reviewed and 928 new adult rhythms were added. The final database contains 2782 records, of which 1270 were used to design the algorithm and 1512 to validate it. Next, a new AED algorithm composed of four subalgorithms was designed. These subalgorithms are based on a set of new arrhythmia-detection parameters computed in several signal domains, such as time, frequency, slope, and the autocorrelation function. The algorithm meets the AHA requirements for the detection of shockable and non-shockable rhythms in both adult and pediatric patients. The work concluded with an analysis of the algorithm's behavior on real resuscitation episodes. On rhythms without CPR artifact, the AHA requirements were met. Subsequently, the accuracy of the algorithm during chest compressions was studied, before and after filtering the CPR artifact. To suppress the artifact, a new method developed over the course of the thesis was used. Shockable rhythms were detected accurately after filtering; non-shockable rhythms, however, were not.
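The abstract does not list the detection parameters themselves; purely as an illustration of the kind of multi-domain feature computation such shock-advice algorithms perform, here is a hedged Python sketch (the feature choices and the VF band below are generic textbook quantities, not the thesis's parameters):

```python
import numpy as np

def example_features(ecg, fs):
    """Illustrative features in two signal domains for a single
    ECG analysis segment (ecg: samples, fs: sampling rate in Hz)."""
    # Slope domain: mean absolute slope of the segment
    slope = np.mean(np.abs(np.diff(ecg))) * fs
    # Frequency domain: fraction of spectral power in 2-7 Hz,
    # where VF typically concentrates its energy
    power = np.abs(np.fft.rfft(ecg)) ** 2
    freqs = np.fft.rfftfreq(len(ecg), d=1.0 / fs)
    band = (freqs >= 2.0) & (freqs <= 7.0)
    vf_band_fraction = power[band].sum() / power.sum()
    return slope, vf_band_fraction

# Toy usage on a synthetic 4 s segment sampled at 250 Hz
fs = 250
t = np.arange(0, 4, 1.0 / fs)
segment = np.sin(2 * np.pi * 5 * t)    # crude VF-like 5 Hz oscillation
print(example_features(segment, fs))
```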
Abstract:
As defined, the modeling procedure is quite broad. For example, the chosen compartments may contain a single organism, a population of organisms, or an ensemble of populations. A population compartment, in turn, could be homogeneous or possess structure in size or age. Likewise, the mathematical statements may be deterministic or probabilistic in nature, linear or nonlinear, autonomous or able to possess memory. Examples of all types appear in the literature. In practice, however, ecosystem modelers have focused on particular types of model constructions. Most analyses treat compartments which are nonsegregated (populations or trophic levels) and homogeneous. The accompanying mathematics is, for the most part, deterministic and autonomous. Despite the enormous effort that has gone into such ecosystem modeling, there remains a paucity of models that meet the rigorous validation criteria which might be applied to a model of a mechanical system. Most ecosystem models are short on predictive ability. Even some classical examples, such as the Lotka-Volterra predator-prey scheme, have not spawned validated examples.
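For reference, the Lotka-Volterra predator-prey scheme mentioned above is the pair of coupled ODEs dx/dt = ax - bxy, dy/dt = dxy - cy. A minimal sketch integrating it (the parameter values are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Classic Lotka-Volterra predator-prey model:
#   dx/dt = alpha*x - beta*x*y    (prey)
#   dy/dt = delta*x*y - gamma*y   (predator)
alpha, beta, delta, gamma = 1.0, 0.1, 0.075, 1.5

def lotka_volterra(t, z):
    x, y = z
    return [alpha * x - beta * x * y,
            delta * x * y - gamma * y]

sol = solve_ivp(lotka_volterra, (0.0, 30.0), [10.0, 5.0])
print(sol.y[:, -1])   # prey and predator populations at t = 30
```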
Abstract:
This paper first reviews methods for treating low-speed rarefied gas flows: the linearised Boltzmann equation, the lattice Boltzmann method (LBM), the Navier-Stokes equations with slip boundary conditions, and the DSMC method, and discusses the difficulties in simulating low-speed transitional MEMS flows, especially internal flows. In particular, the present version of the LBM is shown to be infeasible for simulating MEMS flows in the transitional regime. The information preservation (IP) method overcomes the difficulty that the small information-to-noise ratio poses for statistical simulation of low-speed flows by preserving the average information of the enormous number of molecules that each simulated molecule represents. A validation of the method is given in this paper. The specific features of internal flows in MEMS, i.e. the low speed and the large length-to-width ratio, give rise to a problem of elliptic nature: the inlet and outlet boundary conditions influence each other and must be regulated together. Through the example of the IP calculation of microchannel flow (channels thousands of micrometers long) it is shown that adopting a conservative scheme for the mass conservation equation together with the super-relaxation method resolves this problem successfully. Using the same measures, the IP method solves the thin-film air bearing problem in the transitional regime for an authentic hard-disc write/read head length (L = 1000 μm) and yields a pressure distribution in full agreement with the generalized Reynolds equation, whereas previously the DSMC check of the validity of the Reynolds equation had been done only for a short (L = 5 μm) drive head. The author suggests degenerating the Reynolds equation to solve the microchannel flow problem in the transitional regime, thus providing a means, with the merit of strict kinetic theory, for testing various methods intended to treat internal MEMS flows.
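Taking "super relaxation" to mean over-relaxation of the iterative pressure solution (an assumption; the sketch below is a generic illustration on a model problem, not the paper's actual scheme), successive over-relaxation for the 1-D equation d2p/dx2 = s with prescribed inlet and outlet pressures looks like this:

```python
import numpy as np

def sor_pressure(p_in, p_out, s=0.0, n=101, omega=1.8, tol=1e-10):
    """Successive over-relaxation for d2p/dx2 = s on [0, 1] with
    Dirichlet ends; omega > 1 gives the 'super' (over-) relaxation."""
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    p = np.full(n, 0.5 * (p_in + p_out))
    p[0], p[-1] = p_in, p_out            # fixed boundary pressures
    err = tol + 1.0
    while err > tol:
        err = 0.0
        for i in range(1, n - 1):
            p_gs = 0.5 * (p[i - 1] + p[i + 1] - h * h * s)
            dp = omega * (p_gs - p[i])   # over-relaxed correction
            p[i] += dp
            err = max(err, abs(dp))
    return x, p

x, p = sor_pressure(p_in=2.0, p_out=1.0)
print(p[::25])   # linear profile between 2.0 and 1.0 when s = 0
```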
Abstract:
In this paper, the transition from deflagration to detonation was investigated numerically as a detonation wave propagates in a tube with a sudden change in cross section, referred to as the expansion cavity. The dispersion-controlled scheme was adopted to solve the Euler equations of axisymmetric flow, implemented with detailed chemical reaction kinetics of a hydrogen-oxygen (or hydrogen-air) mixture. The fractional step method was applied to treat the stiffness of the chemically reacting flow. It is observed that detonation quenching and re-ignition phenomena appear when the planar detonation front diffracts at the vertex of the expansion cavity entrance. Numerical results show that the detonation front in a mixture of higher sensitivity keeps its essentially coupled structure when it propagates into the expansion cavity. However, the leading shock wave decouples from the combustion zone if a mixture of lower sensitivity is set as the initial gas.
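The fractional step method referred to here splits each time step into a convection substep and a stiff chemistry substep, each advanced with a method suited to it. A generic sketch of that splitting on a scalar toy problem (a minimal model, not the paper's solver):

```python
import numpy as np
from scipy.integrate import solve_ivp

def convection_substep(u, dt, a, dx):
    """First-order upwind update for u_t + a*u_x = 0 (a > 0)."""
    un = u.copy()
    un[1:] -= a * dt / dx * (u[1:] - u[:-1])
    return un

def reaction_substep(u, dt, k):
    """Integrate the stiff source u_t = -k*u*(1 - u) cell by cell
    with an implicit stiff solver (Radau)."""
    out = np.empty_like(u)
    for i, ui in enumerate(u):
        sol = solve_ivp(lambda t, y: [-k * y[0] * (1.0 - y[0])],
                        (0.0, dt), [ui], method="Radau")
        out[i] = sol.y[0, -1]
    return out

# One fractional step: convection first, then stiff reaction
nx, dx, a, k, dt = 50, 0.02, 1.0, 1.0e4, 0.005   # CFL = 0.25
u = np.where(np.arange(nx) < nx // 2, 1.0, 0.0)  # step profile
u = reaction_substep(convection_substep(u, dt, a, dx), dt, k)
print(u.min(), u.max())
```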
Abstract:
227 pages.
Abstract:
A mathematical model to optimize the German fishing fleet is drafted and its data basis is described. The model was developed by Brodersen, Campbell and Hanf between 1994 and 1998. It could be shown that this model is flexible enough to be applied successfully to many very different political questions, if adapted accordingly. The economic consequences of fishery policy measures, the effects of technical advances, but also increasing uncertainties can, to some degree, be appropriately assessed quantitatively. Finally, it could be shown that, in principle, the available body of data is a good basis for investigations into fishery economics and fishery policy. However, the data sources need to be maintained continuously and competently in order to make this information available quickly. Statistical data reflecting the fishery sector are valuable; however, they attain their full value only when judged by experts from the fishing industry, biology, and technical fishery research.
Abstract:
A quadtree-based adaptive Cartesian grid generator and flow solver were developed. Grid adaptation based on the pressure or density gradient was performed, and a gridless method based on a least-squares formulation was used to treat the wall-surface boundary condition, which is generally difficult to handle on a common Cartesian grid. First, to validate the grid-adaptation technique, the benchmark problems of flow over a forward-facing step and double Mach reflection were computed. Second, the flows over the NACA 0012 airfoil and a two-element airfoil were calculated to validate the developed gridless method. The computational results indicate that the developed method is reasonable for complex flows.
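Gridless methods of this kind typically reconstruct flow gradients at a wall point from a scattered cloud of neighbours by solving a small least-squares system. A minimal sketch of that reconstruction (illustrative only, not the paper's exact formulation):

```python
import numpy as np

def least_squares_gradient(x0, phi0, neighbors, phi_neighbors):
    """Estimate grad(phi) at point x0 from scattered neighbor points by
    fitting phi_i ~ phi0 + g . (x_i - x0) in the least-squares sense."""
    dX = np.asarray(neighbors) - np.asarray(x0)      # (m, 2) offsets
    dphi = np.asarray(phi_neighbors) - phi0          # (m,) differences
    g, *_ = np.linalg.lstsq(dX, dphi, rcond=None)
    return g

# Toy check with phi(x, y) = 3x + 2y: the recovered gradient is (3, 2)
pts = [(0.1, 0.0), (0.0, 0.1), (-0.1, 0.05), (0.07, -0.08)]
phis = [3 * x + 2 * y for x, y in pts]
print(least_squares_gradient((0.0, 0.0), 0.0, pts, phis))
```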
Abstract:
In Part I a class of linear boundary value problems is considered which is a simple model of boundary layer theory. The effect of zeros and singularities of the coefficients of the equations at the point where the boundary layer occurs is considered. The usual boundary layer techniques are still applicable in some cases and are used to derive uniform asymptotic expansions. In other cases it is shown that the inner and outer expansions do not overlap due to the presence of a turning point outside the boundary layer. The region near the turning point is described by a two-variable expansion. In these cases a related initial value problem is solved and then used to show formally that for the boundary value problem either a solution exists, except for a discrete set of eigenvalues, whose asymptotic behaviour is found, or the solution is non-unique. A proof is given of the validity of the two-variable expansion; in a special case this proof also demonstrates the validity of the inner and outer expansions.
Nonlinear dispersive wave equations which are governed by variational principles are considered in Part II. It is shown that the averaged Lagrangian variational principle is in fact exact. This result is used to construct perturbation schemes that enable higher-order terms in the equations for the slowly varying quantities to be calculated. A simple scheme applicable to linear or near-linear equations is first derived. The specific form of the first-order correction terms is derived for several examples. The stability of constant solutions to these equations is considered, and it is shown that the correction terms lead to the instability cut-off found by Benjamin. A general stability criterion is given which explicitly demonstrates the conditions under which this cut-off occurs. The corrected set of equations is nonlinear and dispersive, and its stationary solutions are investigated. A more sophisticated scheme is developed for fully nonlinear equations by using an extension of the Hamiltonian formalism recently introduced by Whitham. Finally, the averaged Lagrangian technique is extended to treat slowly varying multiply-periodic solutions. The adiabatic invariants for a separable mechanical system are derived by this method.
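For reference, the averaged Lagrangian formalism used in Part II takes the standard Whitham form: for a slowly varying wavetrain with phase \(\theta\), frequency \(\omega = -\theta_t\) and wavenumber \(k = \theta_x\), the Lagrangian is averaged over one period,
\[ \mathcal{L}(\omega, k, a) = \frac{1}{2\pi} \oint L \, d\theta , \]
and variations with respect to the amplitude \(a\) and the phase \(\theta\) yield
\[ \frac{\partial \mathcal{L}}{\partial a} = 0, \qquad \frac{\partial}{\partial t}\frac{\partial \mathcal{L}}{\partial \omega} - \frac{\partial}{\partial x}\frac{\partial \mathcal{L}}{\partial k} = 0, \]
the dispersion relation and the conservation of wave action, closed by the consistency condition \(k_t + \omega_x = 0\).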
Abstract:
The connections between convexity and submodularity are explored for the purposes of minimizing and learning submodular set functions.
First, we develop a novel method for minimizing a particular class of submodular functions, which can be expressed as a sum of concave functions composed with modular functions. The basic algorithm uses an accelerated first order method applied to a smoothed version of its convex extension. The smoothing algorithm is particularly novel as it allows us to treat general concave potentials without needing to construct a piecewise linear approximation as with graph-based techniques.
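The convex extension in question is the Lovász extension, which for a submodular F can be evaluated exactly by the greedy algorithm: sort the coordinates of w in decreasing order and accumulate the weighted marginal gains of F. A minimal sketch (illustrative, not the thesis code):

```python
import numpy as np

def lovasz_extension(F, w):
    """Evaluate the Lovász (convex) extension of a set function F
    at w in R^n: sort coordinates in decreasing order and accumulate
    the marginal gains F(S_i) - F(S_{i-1}) weighted by w."""
    order = np.argsort(-np.asarray(w))     # indices by decreasing w
    value, S, prev = 0.0, set(), F(set())
    for i in order:
        S.add(int(i))
        cur = F(S)
        value += w[i] * (cur - prev)       # weighted marginal gain
        prev = cur
    return value

# Toy example: F(S) = sqrt(|S|), a concave function of cardinality,
# hence submodular
F = lambda S: len(S) ** 0.5
print(lovasz_extension(F, [0.5, -0.2, 0.9]))
```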
Second, we derive the general conditions under which it is possible to find a minimizer of a submodular function via a convex problem. This provides a framework for developing submodular minimization algorithms. The framework is then used to develop several algorithms that can be run in a distributed fashion. This is particularly useful for applications where the submodular objective function consists of a sum of many terms, each term dependent on a small part of a large data set.
Lastly, we approach the problem of learning set functions from an unorthodox perspective: sparse reconstruction. We demonstrate an explicit connection between the problem of learning set functions from random evaluations and that of reconstructing sparse signals. Based on the observation that the Fourier transform for set functions satisfies exactly the conditions needed for sparse reconstruction algorithms to work, we examine different function classes under which uniform reconstruction is possible.
Abstract:
In a fully ionized plasma, a new collision operator for the kinetic equation is derived for the case in which the test-particle distribution function f_α is an even function of the test-particle velocity v_α. This collision operator includes the contributions of binary collisions with both large-angle scattering (close Coulomb collisions) and small-angle scattering (distant Coulomb collisions), and it is therefore applicable to both weakly coupled plasmas (Coulomb logarithm ln Λ ≥ 10) and moderately coupled plasmas (2 ≤ ln Λ ≤ 10). Moreover, the modified collision operator is directly related to the Rosenbluth potentials. When the test and field particles satisfy m_α < m_β (as in electron-ion collisions or the Lorentz gas model) and |v_α| > |v_β|, the reduced electron-ion collision operator is consistent with the original Fokker-Planck operator.
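For reference, the Rosenbluth potentials mentioned above are, in their usual form (species-dependent prefactors omitted), the velocity-space integrals of the field-particle distribution \(f_\beta\):
\[ h_\beta(\mathbf{v}) = \int \frac{f_\beta(\mathbf{v}')}{|\mathbf{v} - \mathbf{v}'|}\, d^3v', \qquad g_\beta(\mathbf{v}) = \int f_\beta(\mathbf{v}')\, |\mathbf{v} - \mathbf{v}'|\, d^3v', \]
and the Fokker-Planck friction and diffusion coefficients are built from \(\partial h_\beta/\partial\mathbf{v}\) and \(\partial^2 g_\beta/\partial\mathbf{v}\,\partial\mathbf{v}\), respectively.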