770 results for Fluid dynamics -- Study and teaching (Higher)


Relevance:

100.00%

Publisher:

Abstract:

Epstein-Barr virus (EBV) is associated with several types of cancer, including Hodgkin's lymphoma (HL) and nasopharyngeal carcinoma (NPC). EBV-encoded latent membrane protein 1 (LMP1), a multifunctional oncoprotein, is a powerful activator of the transcription factor NF-κB, a property that is essential for the survival of EBV-transformed lymphoblastoid cells. Previous studies reported LMP1 sequence variations and induction of higher NF-κB activation levels by some variants compared to the prototype B95-8 LMP1. Here we used biopsies of EBV-associated cancers and blood of individuals included in the Swiss HIV Cohort Study (SHCS) to analyze LMP1 genetic diversity and the impact of sequence variations on LMP1-mediated NF-κB activation potential. We found that a number of variants mediate higher NF-κB activation levels than B95-8 LMP1 and mapped three single polymorphisms responsible for this phenotype: F106Y, I124V and F144I. F106Y was present in all LMP1 variants isolated in this study and its effect was variant dependent, suggesting that it was modulated by other polymorphisms. The two polymorphisms I124V and F144I were present in distinct phylogenetic groups and were linked with other specific polymorphisms nearby, I152L and D150A/L151I, respectively. The two sets of polymorphisms, I124V/I152L and F144I/D150A/L151I, which were markers of increased NF-κB activation in vitro, were not associated with EBV-associated HL in the SHCS. Taken together, these results highlight the importance of single polymorphisms for the modulation of LMP1 signaling activity and demonstrate that several groups of LMP1 variants, through distinct mutational paths, mediate enhanced NF-κB activation levels compared to B95-8 LMP1.


PURPOSE Thoracoscopic sympathetic surgery is nowadays a broadly accepted technique for the treatment of primary hyperhidrosis as well as facial blushing. The objective of this study was to compare the two currently most commonly used methods for thoracic sympathicotomy: transection (ETS) and clipping (ETC). METHODS This is a retrospective study of a total of 63 patients who underwent rib-oriented sympathicotomy, either by transection (n = 36, 57 %) or by clipping (n = 27, 43 %). Moreover, the up-to-date international literature is reviewed concerning which level(s) of the sympathetic trunk should be addressed, depending on the patient's underlying condition. Furthermore, the highly controversial topic of reversibility of sympathetic clipping is debated. RESULTS Our results confirm that clipping is at least as effective as transection of the sympathetic chain in the treatment of hyperhidrosis and facial blushing. Furthermore, analysis of all larger studies on unclipping in humans shows a surprisingly high reported reversal rate of between 48 and 77 %. CONCLUSIONS Depending on the symptoms of the patient, different levels of the sympathetic chain should be addressed. When a higher rib level such as R2 is approached, which is more likely to result in moderate to severe compensatory sweating, clipping should be preferred, as this technique does seem to have potential for reversibility. As demonstrated, this method is at least as effective as an irreversible transection of the sympathetic chain.


Cathodoluminescence (CL) studies have previously shown that some secondary fluid inclusions in luminescent quartz are surrounded by dark, non-luminescent patches, resulting from fracture-sealing by late, trace-element-poor quartz. This finding has led to the tacit generalization that all dark CL patches indicate influx of low temperature, late-stage fluids. In this study we have examined natural and synthetic hydrothermal quartz crystals using CL imaging supplemented by in-situ elemental analysis. The results lead us to propose that all natural, liquid-water-bearing inclusions in quartz, whether trapped on former crystal growth surfaces (i.e., of primary origin) or in healed fractures (i.e., of pseudosecondary or secondary origin), are surrounded by three-dimensional, non-luminescent patches. Cross-cutting relations show that the patches form after entrapment of the fluid inclusions and therefore they are not diagnostic of the timing of fluid entrapment. Instead, the dark patches reveal the mechanism by which fluid inclusions spontaneously approach morphological equilibrium and purify their host quartz over geological time. Fluid inclusions that contain solvent water perpetually dissolve and reprecipitate their walls, gradually adopting low-energy euhedral and equant shapes. Defects in the host quartz constitute solubility gradients that drive physical migration of the inclusions over distances of tens of μm (commonly) up to several mm (rarely). Inclusions thus sequester from their walls any trace elements (e.g., Li, Al, Na, Ti) present in excess of equilibrium concentrations, thereby chemically purifying their host crystals in a process analogous to industrial zone refining. Non-luminescent patches of quartz are left in their wake. 
Fluid inclusions that contain no liquid water as solvent (e.g., inclusions of low-density H2O vapor or other non-aqueous volatiles) do not undergo this process and therefore do not migrate, do not modify their shapes with time, and are not associated with dark-CL zone-refined patches. This new understanding has implications for the interpretation of solids within fluid inclusions (e.g., Ti- and Al-minerals) and for the elemental analysis of hydrothermal and metamorphic quartz and its fluid inclusions by microbeam methods such as LA-ICPMS and SIMS. As Ti is a common trace element in quartz, its sequestration by fluid inclusions and its depletion in zone-refined patches impacts on applications of the Ti-in-quartz geothermometer.


This paper reviews the methods, benefits and challenges associated with the adoption and translation of computational fluid dynamics (CFD) modelling within cardiovascular medicine. CFD, a specialist area of mathematics and a branch of fluid mechanics, is used routinely in a diverse range of safety-critical engineering systems and is increasingly being applied to the cardiovascular system. By facilitating rapid, economical, low-risk prototyping, CFD modelling has already revolutionised research and development of devices such as stents, valve prostheses, and ventricular assist devices. Combined with cardiovascular imaging, CFD simulation enables detailed characterisation of complex physiological pressure and flow fields and the computation of metrics which cannot be directly measured, for example, wall shear stress. CFD models are now being translated into clinical tools for physicians to use across the spectrum of coronary, valvular, congenital, myocardial and peripheral vascular diseases. CFD modelling is apposite for minimally-invasive patient assessment. Patient-specific modelling (incorporating data unique to the individual) and multi-scale modelling (combining models of different length- and time-scales) enable individualised risk prediction and virtual treatment planning. This represents a significant departure from traditional dependence upon registry-based, population-averaged data. Model integration is progressively moving towards 'digital patient' or 'virtual physiological human' representations. When combined with population-scale numerical models, these models have the potential to reduce the cost, time and risk associated with clinical trials. The adoption of CFD modelling signals a new era in cardiovascular medicine. While the potential benefits are considerable, a number of academic and commercial groups are addressing the associated methodological, regulatory, education- and service-related challenges.
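The abstract highlights wall shear stress as a quantity CFD can compute but clinicians cannot measure directly. As a minimal, self-contained illustration — not one of the patient-specific models the paper discusses — the wall shear stress for idealised laminar (Poiseuille) flow in a rigid tube has the closed form τ_w = 4μQ/(πR³); the numbers below are hypothetical order-of-magnitude values for a coronary artery:

```python
import math

def poiseuille_wss(mu, q, r):
    """Wall shear stress (Pa) for fully developed laminar flow in a
    rigid circular tube: tau_w = 4*mu*Q / (pi * R^3)."""
    return 4.0 * mu * q / (math.pi * r ** 3)

# Illustrative values (hypothetical, order-of-magnitude only):
mu = 3.5e-3   # blood dynamic viscosity, Pa.s
q = 1.0e-6    # volumetric flow rate, m^3/s (~60 mL/min)
r = 1.5e-3    # lumen radius, m
print(round(poiseuille_wss(mu, q, r), 2))  # ≈ 1.32 Pa
```

Real arterial flows are pulsatile and the vessels compliant, which is precisely why full CFD, rather than this closed form, is needed in practice.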


This study adapted the current model of science undergraduate research experiences (UREs) and applied this novel modification to include community college students. Numerous researchers have examined the efficacy of UREs in improving undergraduate retention and graduation rates, as well as matriculation rates for graduate programs. However, none have detailed the experience for community college students, and few have employed qualitative methodologies to gather relevant descriptive data from URE participants. This study included perspectives elicited from both non-traditional student participants and the established laboratory community. The purpose of this study was to determine the effectiveness of the traditional model for a non-traditional student population. The research effort described here utilized a qualitative design and an explanatory case study methodology. Six non-traditional students from the Maine Community College System participated in this study. Student participants were placed in six academic research laboratories located throughout the state. Student participants were interviewed three times during their ten-week internship and asked to record their personal reflections in electronic format. Participants from the established research community were also interviewed. These included both faculty mentors and other student laboratory personnel. Ongoing comparative analysis of the textual data revealed that laboratory organizational structure and social climate significantly influence acculturation outcomes for non-traditional URE participants. Student participants experienced a range of acculturation outcomes from full integration to marginalization. URE acculturation outcomes influenced development of non-traditional students' professional and academic self-concepts. Positive changes in students' self-concepts resulted in greater commitment to individual professional goals and academic aspirations.
The findings from this study suggest that traditional science URE models can be successfully adapted to meet the unique needs of a non-traditional student population – community college students. These interpretations may encourage post-secondary educators, administrators, and policy makers to consider expanded access and support for non-traditional students seeking science URE opportunities.


The purpose of this study was to investigate a selection of children's historical nonfiction literature for evidence of coherence. Although research has been conducted on the coherence of textbook material and its influence on comprehension, there has been limited study of coherence in children's nonfiction literature. Generally, textual coherence has been seen as critical to the comprehensibility of content area textbooks because it concerns the unity of connections among ideas and information. Disciplinary coherence concerns the extent to which authors of historical text show readers how historians think and write. Since young readers are apprentices in learning historical content and conventions of historical thinking, evidence of disciplinary coherence is significant in nonfiction literature for young readers. The sample of the study contained 32 books published between 1989 and 2000, ranging in length from less than 90 pages to more than 150 pages. Content analysis was the quantitative research technique used to measure 84 variables of textual and disciplinary coherence in three passages of each book, as proportions of the total number of words for each book. Reliability analyses and an examination of 750 correlations showed the extent to which variables were related in the books. Three important findings emerged from the study that should be considered in the selection and use of children's historical nonfiction literature in classrooms. First, characteristics of coherence are significantly related in high-quality nonfiction literature. Second, shorter books have a higher proportion of textual coherence than longer books as measured in three passages. Third, presence of the author is related to characteristics of coherence throughout the books. The findings show that nonfiction literature offers students content that researchers have found textbooks lack.
Both younger and older students have the opportunity to learn the conventions of historical thinking as they learn content through nonfiction literature. Further, the children's literature represented in the Orbis Pictus list shows students that authors select, interpret, and question information, and offer other interpretations. The implications of the study for teaching history, teacher preparation in content and literacy, school practices, children's librarians, and publishers of children's nonfiction are discussed.


Nowadays, Computational Fluid Dynamics (CFD) solvers are widely used within industry to model fluid flow phenomena. Several fluid flow model equations have been employed in recent decades to simulate and predict the forces acting, for example, on different aircraft configurations. Computational time and accuracy are strongly dependent on the fluid flow model equation and the spatial dimension of the problem considered. While simple models based on perfect flows, such as panel methods or potential flow models, can be very fast to solve, they usually suffer from poor accuracy when simulating real (transonic, viscous) flows. On the other hand, more complex models such as the full Navier-Stokes equations provide high-fidelity predictions, but at a much higher computational cost. Thus, a good compromise between accuracy and computational time has to be struck for engineering applications. A discretisation technique widely used within industry is the so-called Finite Volume approach on unstructured meshes. This technique spatially discretises the flow motion equations onto a set of elements which form a mesh, a discrete representation of the continuous domain. Using this approach, for a given flow model equation, the accuracy and computational time mainly depend on the distribution of nodes forming the mesh. Therefore, a good compromise between accuracy and computational time might be obtained by carefully defining the mesh. However, defining an optimal mesh for complex flows and geometries requires a very high level of expertise in fluid mechanics and numerical analysis, and in most cases it is impossible simply to guess which regions of the computational domain most affect the accuracy. Thus, it is desirable to have an automated remeshing tool, which is more flexible with unstructured meshes than its structured counterpart. However, adaptive methods currently in use still leave an open question: how to drive the adaptation efficiently?
Pioneering sensors based on flow features generally suffer from a lack of reliability, so in the last decade more effort has been put into developing numerical-error-based sensors, such as adjoint-based adaptation sensors. While very efficient at adapting meshes for a given functional output, the latter method is very expensive, as it requires solving a dual set of equations and computing the sensor on an embedded mesh. Therefore, it would be desirable to develop a more affordable numerical error estimation method. The current work aims at estimating the truncation error, which arises when discretising a partial differential equation; this is the set of higher-order terms neglected in the construction of the numerical scheme. The truncation error provides very useful information, as it is strongly related to the flow model equation and its discretisation. On the one hand, it is a very reliable measure of the quality of the mesh, and therefore very useful for driving a mesh adaptation procedure. On the other hand, it is strongly linked to the flow model equation, so that a careful estimation actually gives information on how well a given equation is solved, which may be useful in the context of τ-extrapolation or zonal modelling. The work is organized as follows: Chap. 1 contains a short review of mesh adaptation techniques as well as numerical error prediction. In the first section, Sec. 1.1, the basic refinement strategies are reviewed and the main contributions to structured and unstructured mesh adaptation are presented. Sec. 1.2 introduces the definitions of the errors encountered when solving Computational Fluid Dynamics problems and reviews the most common approaches to predicting them. Chap. 2 is devoted to the mathematical formulation of truncation error estimation in the context of the finite volume methodology, as well as a complete verification procedure.
Several features are studied, such as the influence of grid non-uniformities, non-linearity, boundary conditions and non-converged numerical solutions. This verification part has been submitted and accepted for publication in the Journal of Computational Physics. Chap. 3 presents a mesh adaptation algorithm based on truncation error estimates and compares the results to a feature-based and an adjoint-based sensor (in collaboration with Jorge Ponsín, INTA). Two- and three-dimensional cases relevant for validation in the aeronautical industry are considered. This part has been submitted and accepted by the AIAA Journal. An extension to the Reynolds-Averaged Navier-Stokes equations is also included, where τ-estimation-based mesh adaptation and τ-extrapolation are applied to viscous wing profiles. The latter has been submitted to the Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering. Keywords: mesh adaptation, numerical error prediction, finite volume
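As an independent toy illustration of the τ-estimation idea discussed in this abstract (not code from the thesis), the sketch below inserts the exact solution of a 1D Poisson problem into the standard second-order central-difference operator; the residual that remains is the truncation error, and halving the mesh spacing reduces it by roughly a factor of four, confirming second-order behaviour:

```python
import math

def trunc_error(n):
    """Max truncation error of the second-order central-difference
    Laplacian on [0,1] with n interior points, measured by inserting
    the exact solution u = sin(pi x) of u'' = -pi^2 sin(pi x) into
    the discrete operator (tau-estimation with the exact solution)."""
    h = 1.0 / (n + 1)
    u = lambda x: math.sin(math.pi * x)
    f = lambda x: -math.pi ** 2 * math.sin(math.pi * x)
    tau = 0.0
    for i in range(1, n + 1):
        x = i * h
        lhu = (u(x - h) - 2.0 * u(x) + u(x + h)) / h ** 2
        tau = max(tau, abs(lhu - f(x)))
    return tau

coarse = trunc_error(31)   # h = 1/32
fine = trunc_error(63)     # h = 1/64
print(coarse / fine)       # ~4: halving h quarters the truncation error
```

In a practical adaptation loop the exact solution is unavailable, so the truncation error is estimated by inserting a finer-grid or higher-order reconstruction of the numerical solution instead.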


The design of a modern aircraft rests on three pillars: theoretical results, experimental tests and computational simulations. As a result, Computational Fluid Dynamics (CFD) solvers are widely used in the aeronautical field. These solvers require the correct selection of many parameters in order to obtain successful results, and the computational time spent in the simulation depends on the proper choice of these parameters. In this paper we create an expert system capable of making an accurate prediction of the number of iterations and the time required for the convergence of a CFD solver. An artificial neural network (ANN) has been used to design the expert system. It is shown that the developed expert system makes accurate predictions of the number of iterations and the time required for convergence.
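The abstract does not specify the network architecture or training data, so the sketch below is a deliberately minimal stand-in: a single linear neuron trained by batch gradient descent to map one hypothetical, pre-scaled solver feature to a synthetic iteration count. It illustrates the regression idea only, not the authors' expert system:

```python
def train_neuron(xs, ys, lr=0.1, epochs=2000):
    """Fit y ~ w*x + b with batch gradient descent (a single linear
    neuron, the simplest 'network'). Inputs are assumed pre-scaled."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        dw = sum((w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum((w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

# Synthetic training set (hypothetical): scaled solver feature ->
# scaled iterations to convergence, exactly linear with slope 2.
xs = [0.1, 0.3, 0.5, 0.7, 0.9]
ys = [2 * x + 0.5 for x in xs]
w, b = train_neuron(xs, ys)
print(round(w, 2), round(b, 2))  # ≈ 2.0 0.5
```

A practical system would use a multi-layer network and input features covering all solver parameters relevant to convergence.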


The European Higher Education Area, launched in 1999 with the Bologna Declaration, has given unprecedented magnitude and agility to the transformation process undertaken by European universities. However, the change has been most profound and drastic with regard to the use of new technologies both inside and outside the classroom. This article focuses on the study and analysis of the history of technology within university education and its impact on teachers, students and teaching methods. All the elements that have been significant and innovative throughout this history have been analyzed, from the use of blackboard and chalk during lectures, through slide projectors and transparencies, to the electronic whiteboards and Internet resources of today. The study is complemented by two types of surveys performed among teachers and students during the school years 1999-2011 in the School of Civil Engineering at the Polytechnic University of Madrid. The pros and cons of each of the techniques and methodologies used in the learning process over the last decades are described, showing how they have affected the teacher, who has evolved from writing on a whiteboard to projecting onto a screen; the student, who has evolved from taking handwritten notes to downloading information or searching the Internet; and the educational process, which has evolved from the lecture to collaborative and project-based learning. It is unknown how the learning process will evolve in the future, but we do know the consequences that some of the multimedia technologies are having on teachers, students and the learning process. It is our goal as teachers to keep ourselves up to date, in order to offer the student adequate technical content while providing proper motivation through the use of new technologies. The study provides a forecast of the evolution of multimedia within the classroom and the renewal of the education process, which, in our view, will set the basis for the future learning process in the context of this new interactive era.


An elliptic computational fluid dynamics wake model based on the actuator disk concept is used to simulate a wind turbine, approximated by a disk upon which a distribution of forces, defined as axial momentum sources, is applied to an incoming non-uniform shear flow. The rotor is assumed to be uniformly loaded, with the exerted forces estimated as a function of the incident wind speed, thrust coefficient and rotor diameter. The model is assessed in terms of wind speed deficit and added turbulence intensity for different turbulence models, and is validated against experimental measurements from the Sexbierum wind turbine experiment.
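For a uniformly loaded rotor, converting the thrust coefficient into per-cell momentum sources is straightforward bookkeeping: the total thrust implied by Ct is split evenly over the cells covering the disk. The sketch below assumes hypothetical turbine parameters and a uniform grid; it illustrates the general actuator disk recipe, not the paper's solver:

```python
import math

def axial_source_per_cell(rho, u_inf, ct, diameter, n_cells, dx):
    """Uniformly loaded actuator disk: the total thrust
    T = 0.5*rho*A*Ct*Uinf^2 is split evenly over the n_cells grid
    cells covering the disk and returned as a momentum source
    density (N/m^3) for cells of streamwise thickness dx."""
    area = math.pi * diameter ** 2 / 4.0
    thrust = 0.5 * rho * area * ct * u_inf ** 2
    cell_volume = (area / n_cells) * dx
    return (thrust / n_cells) / cell_volume

# Hypothetical turbine: D = 30 m, Ct = 0.75, Uinf = 10 m/s, air at 1.225 kg/m^3
s = axial_source_per_cell(rho=1.225, u_inf=10.0, ct=0.75,
                          diameter=30.0, n_cells=200, dx=1.0)
print(s)  # 0.5*rho*Ct*Uinf^2/dx = 45.9375 N/m^3
```

Because the loading is uniform, the source density is independent of how many cells cover the disk; only the thrust coefficient, incident wind speed and streamwise thickness matter.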


The fluid dynamics of the corona ejected by laser-fusion targets in the direct-drive approach (where thermal radiation and atomic physics are unimportant) is discussed. A two-fluid model involves inverse bremsstrahlung absorption, refraction, different ion and electron temperatures with energy exchange, different ion and electron velocities and magnetic field generation, and their effect on ion-electron friction and heat flux. Four dimensionless parameters determine coronal regimes for one-dimensional flows under uniform irradiation. One additional parameter is involved in two-dimensional problems, including the stability of one-dimensional flows and the smoothing of nonuniform driving.


The mechanisms of growth of a circular void by plastic deformation were studied by means of molecular dynamics in two dimensions (2D). While previous molecular dynamics (MD) simulations in three dimensions (3D) have been limited to small voids (up to ≈10 nm in radius), this strategy allows us to study the behavior of voids of up to 100 nm in radius. MD simulations showed that plastic deformation was triggered by the nucleation of dislocations at the atomic steps of the void surface over the whole range of void sizes studied. The yield stress, defined as the stress necessary to nucleate stable dislocations, decreased with temperature, but the void growth rate was not very sensitive to this parameter. Simulations under uniaxial tension, uniaxial deformation and biaxial deformation showed that the void growth rate increased very rapidly with multiaxiality but did not depend on the initial void radius. These results were compared with previous 3D MD and 2D dislocation dynamics simulations to establish a map of mechanisms and size effects for plastic void growth in crystalline solids.


A study has been made of the influence of leading-edge imperfections on the aerodynamic performance of airfoils used in different devices. Wind tunnel tests were carried out at different Reynolds numbers and angles of attack in order to show this effect. A quantitative study of the aerodynamic properties was then made as a function of the type and size of the leading-edge imperfections.


The Surface Renewal Theory (SRT) is one of the lesser-known models for characterizing fluid-fluid and fluid-fluid-solid reactions, which are of considerable industrial and academic importance. In the present work, an approach to solving the SRT model by numerical methods is presented, enabling visualization of the influence of the different variables that control the overall heterogeneous process. Its use in the classroom allowed students to reach a thorough understanding of the process.
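Surface Renewal Theory, in Danckwerts' classical form, averages the Higbie penetration-theory flux over an exponential distribution of surface ages with renewal rate s, which yields the closed form k_L = sqrt(D*s). The sketch below is an independent, classroom-style illustration of that averaging by numerical integration, not the authors' code; D and s are hypothetical values:

```python
import math

def k_l_numeric(diff, s, n=20000, tau_max=None):
    """Danckwerts surface renewal: average the Higbie penetration flux
    coefficient sqrt(D/(pi*t)) over the exponential surface-age
    distribution s*exp(-s*t). Substituting t = tau^2 removes the
    integrable singularity at t = 0; the trapezoidal result should
    approach the closed form k_L = sqrt(D*s)."""
    if tau_max is None:
        tau_max = 8.0 / math.sqrt(s)  # exp(-s*tau^2) negligible beyond this
    h = tau_max / n
    total = 0.0
    for i in range(n + 1):
        tau = i * h
        val = 2.0 * s * math.sqrt(diff / math.pi) * math.exp(-s * tau * tau)
        total += val if 0 < i < n else 0.5 * val
    return total * h

diff, s = 2.0e-9, 0.5        # hypothetical D (m^2/s) and renewal rate (1/s)
print(k_l_numeric(diff, s))  # ≈ sqrt(D*s) = 3.162e-5 m/s
```

The substitution t = τ² is one simple way to tame the 1/sqrt(t) singularity of the penetration flux at zero age before applying the trapezoidal rule.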


Some would argue that the traditional lecture format needs to be rethought in favour of a more active approach. However, this must form part of a bipartite strategy, considered in conjunction with the layout of any new space, to facilitate alternative learning and teaching methods. With this in mind, this paper begins to examine the impact of the learning environment on the student learning experience, focusing specifically on students studying on the Architectural Technology and Management programme at Ulster University. The aim of this study is two-fold: to increase understanding of the impact of learning space layout by taking a student-centred approach, and to gain an appreciation of how technology can affect the learning space. The study forms part of a wider project being undertaken at Ulster University, known as the Learning Landscape Transition Project, exploring the relationship between learning, teaching and space layout. Data collection was both qualitative and quantitative, using a case study supported by a questionnaire based on attitudinal scaling. A focus group was also used to further analyse the key trends emerging from the questionnaire. The initial results suggest that the learning environment, and the technology within it, can not only play an important part in the overall learning experience of the student but also assist with preparation for the working environment to be experienced in professional life.